26 - 30 April 2026
National Harbor, Maryland, US
Conference 14031 > Paper 14031-14

Remote flash-LiDAR dataset and deep learning benchmarks for long-range object detection and segmentation

28 April 2026 • 1:50 PM - 2:10 PM EDT | National Harbor 5

Abstract

Long-range perception at high frame rates remains a critical challenge for autonomous and defense systems and is underrepresented in open-source resources. We introduce a novel remote flash LiDAR dataset purpose-built for long-range vehicle detection and segmentation using both 2D and 3D algorithms. Data were collected at 20 Hz with a narrow 3° field-of-view (FOV) flash LiDAR at an outdoor test range, yielding more than 70,000 frames across ranges of 500–1000 feet. The dataset contains eight vehicle classes, including small to medium ground vehicles and a light aircraft, captured across diverse poses and trajectories at five locations positioned at different distances from the sensor. Each frame is provided as an organized 3D point cloud, a fused range and intensity 2D image, and a five-channel per-point array (x, y, z, range, intensity), along with raw sensor measurements. Ground truth annotations include 2D bounding boxes, 3D cuboids, and semantic masks, enabling both 2D and 3D detection and segmentation workflows.
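As a rough illustration of the frame representations described above, a minimal NumPy sketch shows how an organized H × W × 5 per-point array (x, y, z, range, intensity) relates to the organized point cloud and the fused range/intensity image. The file layout, sensor resolution, and array names here are assumptions for illustration, not the dataset's actual API.

```python
import numpy as np

# Placeholder sensor resolution (assumption; not specified in the abstract).
H, W = 128, 128

# Stand-in for one loaded frame: an organized H x W x 5 array with channels
# (x, y, z, range, intensity), matching the per-point array described above.
frame = np.zeros((H, W, 5), dtype=np.float32)

# Organized 3D point cloud: one (x, y, z) triple per pixel.
points = frame[..., :3].reshape(-1, 3)      # shape (H*W, 3)

# Fused 2D image: the range and intensity channels stacked per pixel.
range_intensity = frame[..., 3:5]           # shape (H, W, 2)

print(points.shape, range_intensity.shape)  # (16384, 3) (128, 128, 2)
```

Because the array is organized (pixel-aligned), 2D networks can consume the range/intensity image directly while 3D networks operate on the flattened point list.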

We release a reproducible benchmarking suite with preprocessing, training, and evaluation scripts in an open-source repository. The suite includes representative modern deep neural networks for 2D and 3D object detection and semantic segmentation, trained and tested on the dataset. We evaluate performance in terms of detection and classification accuracy across object classes and range bins, as well as throughput and latency for real-time use. Results highlight the feasibility of robust long-range perception on narrow-FOV flash LiDAR and reveal sensitivities to range, target aspect, point density, and SNR.
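The per-range-bin evaluation mentioned above can be sketched as follows. The bin edges, per-frame ranges, and detection outcomes here are invented placeholders for illustration; the repository's actual metric scripts may bin and score differently.

```python
import numpy as np

# Placeholder per-frame data (assumptions, for illustration only):
ranges_ft = np.array([520, 640, 760, 880, 990, 700])  # target range per frame (ft)
correct   = np.array([1,   1,   0,   1,   0,   1])    # 1 = correct detection

# Bin edges spanning the dataset's 500-1000 ft operating range.
bins = np.array([500, 625, 750, 875, 1000])
idx = np.digitize(ranges_ft, bins) - 1                # bin index per frame

# Accuracy within each range bin.
for b in range(len(bins) - 1):
    mask = idx == b
    acc = correct[mask].mean() if mask.any() else float("nan")
    print(f"{bins[b]}-{bins[b + 1]} ft: accuracy {acc:.2f}")
```

Reporting accuracy per bin rather than a single aggregate exposes the degradation with range (and the point-density and SNR sensitivities) that the results discuss.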

To facilitate downstream research, we also demonstrate pipelines for vehicle tracking across sequences to improve detection robustness, multiview registration to construct 360° vehicle models for simulation and synthetic data generation, and embedded deployment considerations to meet latency requirements. The dataset and code are publicly available to catalyze advances in Automatic Target Recognition (ATR) for long-range, high-speed remote sensing.

Presenter

DCS Corp (United States)
Daniel is an AFRL-contracted scientist harnessing the properties of light for remote identification. Building on five years of optical experience, he aims to continue pushing the limits of optical technologies, using machine learning for comprehensive analysis while incorporating the physics of light and the systems that capture it. Alongside his research, he is also an electro-optics master's student at the University of Dayton, Ohio.
Application tracks: AI/ML
Author
Riccardo Consolo
The MathWorks, Inc. (United States)
Presenter/Author
DCS Corp (United States)
Author
Abhijit Bhattacharjee
The MathWorks, Inc. (United States)
Author
Jarrod Brown
Air Force Research Lab. (United States)
Author
Polaris Sensor Technologies, Inc. (United States)