26 - 30 April 2026
National Harbor, Maryland, US
Our conference seeks to grow the Synthetic Data community and foster connections across use cases, methods, and disciplines. By sharing our work in one place, we find common learning opportunities and advance the state of the art across the many commercial and defense touchpoints that synthetic data can impact. We aim to facilitate the development of tools and processes for generating Synthetic Data and the dissemination of meaningful, evidence-based guidance for its use.

Conference topics include:
Panel session
Title: Synthetic Data: Adversarial AI, Deception, and Trust

Description: The modern AI space is crowded, and adversarial attacks affect every sector with varying degrees of impact. The panel will discuss tactics for fooling adversarial AI, hardening our own AI against adversarial attacks, and deciphering the veracity of data presented to our systems.

Facilitator: Dr. Kimberly Manser (C5ISR Center RTI)

Joint sessions
This conference will hold joint sessions:
  1. Partner conference: RADAR Sensor Technology
    Joint session title: Synthetic Data for RADAR Sensor AI/ML
    Please submit papers that touch on synthetic data generation for RADAR-based AiTR, Computer Vision, or other AI/ML applications, including data simulation, virtual assets, and tools.
  2. Partner conference: Algorithms, Technologies, and Applications of Multispectral and Hyperspectral Imaging
    Joint session title: Synthetic Data for Multispectral and Hyperspectral Imaging
    Please submit papers that touch on synthetic data generation for multispectral or hyperspectral imagery AI/ML applications, including virtual assets and components, simulation tools, and use case evaluation.
  3. Partner conference: Infrared Technologies and Applications and Automatic Target Recognition
    Joint session title: Synthetic Data for Infrared Automatic Target Recognition
    Please submit papers that touch on synthetic infrared (IR) image simulation or generation for AI and deep learning ATR applications, including virtual assets, tools, and use case evaluation of synthetic data for IR applications and A(i)TR.
Awards
We are pleased to announce two awards for this conference:
  1. Best Oral Presentation Award: presentations will be evaluated in terms of scientific content, audience accessibility, and quality of visual aids.
  2. Best Poster Award: posters will be evaluated in terms of scientific content, audience accessibility, and quality of visual aids.
Award winners will receive an award certificate.
All accepted presentations in this conference are automatically eligible.
Conference 14029

Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications IV

27 - 29 April 2026 | National Harbor 7
  • Opening Remarks
  • 1: Synthetic Data Generation Tools I
  • 2: Synthetic Data Generation Tools II
  • 3: Data Engineering and Curation
  • Symposium Plenary
  • Symposium Panel on Counter Unmanned Systems: Challenges, Opportunities, and Crossover Technology
  • Opening Remarks
  • 4: Keynote Session
  • 5: Session of Interest on Artificial Intelligence: Joint Session with Conferences 14029 and 14031
  • Opening Remarks
  • 6: Generative AI
  • 7: Integrated Machine Learning and Synthesis Pipelines
Opening Remarks
27 April 2026 • 8:50 AM - 9:00 AM EDT | National Harbor 7
Session Chair: Kimberly E. Manser, DEVCOM C5ISR (United States)
Opening remarks for Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications IV.
Session 1: Synthetic Data Generation Tools I
27 April 2026 • 9:00 AM - 10:00 AM EDT | National Harbor 7
Session Chair: Raghuveer M. Rao, DEVCOM Army Research Lab. (United States)
14029-1
Author(s): Jarin Ritu, Texas A&M Univ. (United States); Alexandra Van Dine, Massachusetts Institute of Technology (United States); Joshua Peeples, Texas A&M Univ. (United States)
27 April 2026 • 9:00 AM - 9:20 AM EDT | National Harbor 7
Passive sonar signals contain complex characteristics often arising from environmental noise, vessel machinery, and propagation effects. While convolutional neural networks (CNNs) perform well on passive sonar classification tasks, they can struggle with statistical variations that occur in the data. To investigate this limitation, we generate synthetic underwater acoustic datasets centered on amplitude and period variations. Two metrics are proposed to quantify and validate these characteristics in the context of statistical and structural texture for passive sonar. These measures are applied to real-world passive sonar datasets to assess texture information in the signals and correlate it with model performance. Results show that CNNs underperform on statistically textured signals, but incorporating explicit statistical texture modeling yields consistent improvements. These findings highlight the importance of quantifying texture information for passive sonar classification.
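The paper's two texture metrics are not spelled out in the abstract. As a rough illustration of the kind of statistics involved, the sketch below measures amplitude and period dispersion on a jittered synthetic tonal with NumPy/SciPy; the signal parameters and both measures are assumptions, not the authors' metrics.

```python
# Illustrative only: plausible amplitude/period variation statistics for a
# synthetic tonal; not the paper's proposed texture metrics.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 4000                                   # sample rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)

# Synthetic "statistically textured" tonal: random-walk amplitude and phase.
amp = 1.0 + 0.3 * rng.standard_normal(t.size).cumsum() / np.sqrt(t.size)
phase = rng.standard_normal(t.size).cumsum() / np.sqrt(fs)
x = amp * np.sin(2 * np.pi * 200 * t + 0.5 * phase)

# Amplitude variation: dispersion of the analytic-signal envelope.
env = np.abs(hilbert(x))
amp_cv = env.std() / env.mean()             # coefficient of variation

# Period variation: dispersion of intervals between zero crossings.
crossings = np.where(np.diff(np.signbit(x).astype(int)) != 0)[0]
periods = np.diff(crossings) * 2 / fs       # two crossings per cycle
period_cv = periods.std() / periods.mean()

print(f"amplitude CV: {amp_cv:.3f}, period CV: {period_cv:.3f}")
```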
14029-2
Author(s): Cove J. Kramer, Cameron J. Radosevich, Jason Adams, Jeremy B. Wright, Sandia National Labs. (United States)
27 April 2026 • 9:20 AM - 9:40 AM EDT | National Harbor 7
Large representative datasets are essential for developing and testing tracking and detection algorithms. However, acquiring real-world data across a desired target-feature trade space can be challenging. Synthetic data generation can address these challenges if it provides radiometric fidelity, flexibility in sensor, target, and scene attributes, realistic dynamic conditions, and relevant noise sources. This paper presents a toolkit for generating synthetic high-speed video that meets these criteria. We describe our approach through a case study simulating an infrared sensor in geostationary orbit observing dynamic targets against an Earth background. The toolkit's capabilities are assessed by comparing simulation results with first-principles radiometric calculations and validating details such as target motion, signal-to-noise ratio (SNR), and scene clutter. Our toolkit fulfills a community need by generating realistic high-speed video, enabling rapid development and testing of algorithms while reducing costs and lead times associated with acquiring real-world data.
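Among the details the abstract says are validated is target signal-to-noise ratio. A minimal, generic sketch of what such a check could look like on a single synthetic frame; the function, mask, and count values are hypothetical and not the Sandia toolkit's API.

```python
# Generic SNR check for a synthetic frame; not the toolkit's code.
import numpy as np

def target_snr(frame: np.ndarray, target_mask: np.ndarray) -> float:
    """SNR of masked target pixels vs. background statistics."""
    target = frame[target_mask]
    background = frame[~target_mask]
    signal = target.mean() - background.mean()   # background-subtracted signal
    noise = background.std()                     # clutter + sensor noise proxy
    return float(signal / noise)

# Hypothetical usage on a simulated 512x512 frame:
rng = np.random.default_rng(1)
frame = rng.normal(100.0, 5.0, (512, 512))       # background counts
mask = np.zeros((512, 512), dtype=bool)
mask[250:255, 250:255] = True
frame[mask] += 40.0                              # injected point-like target
print(f"SNR ~ {target_snr(frame, mask):.1f}")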
14029-4
Author(s): Branndon Jones, Kane Miller, Amir Shirkhodaie, Tennessee State Univ. (United States); Eric Grigorian, Josh Gaston, Georgia Tech Research Institute (United States)
27 April 2026 • 9:40 AM - 10:00 AM EDT | National Harbor 7
Advancing autonomous perception for defense applications requires AI models capable of reliable object detection in complex, dynamic battlefield environments. Achieving this performance depends on diverse, high-quality datasets that reflect real-world sensing conditions, which are often impractical to collect physically. This paper introduces a physics-based virtual simulation framework for generating synthetic, multimodal datasets tailored to battlefield scenarios. The framework combines high-fidelity 3D CAD models with configurable virtual sensors to simulate optical, LiDAR, and Synthetic Aperture Radar (SAR) modalities under varied operational conditions. It supports controlled variation of sensor range, geometry, and illumination to ensure balanced, comparable datasets across modalities. An integrated annotation module provides automated polygon labeling and adaptive sampling to enhance class balance and reduce redundancy. Synthetic LiDAR includes elevation, reflectivity, and noise modeling to emulate realistic behavior. The resulting datasets capture diverse tactical scenes and enable scalable, reproducible training and evaluation of AI-based object detection systems.
Break
Coffee Break 10:00 AM - 10:30 AM
Session 2: Synthetic Data Generation Tools II
27 April 2026 • 10:30 AM - 11:30 AM EDT | National Harbor 7
Session Chair: James Uplinger, DEVCOM Army Research Lab. (United States)
14029-5
Author(s): Jeffrey Kerley, Derek T. Anderson, Brendan Alvey, Univ. of Missouri (United States); Skylar Perry, Univ. of Missouri (United States); Brendan Young, Univ. of Missouri (United States)
27 April 2026 • 10:30 AM - 10:50 AM EDT | National Harbor 7
Simulation environments for AI demand structured, error-resilient generation pipelines. In our previous work, we introduced LSCENE and LCAP, two formal languages designed to describe virtual worlds and data collection processes for AI/ML. In this article, we extend that work by introducing role-based agents that operate in parallel. To mitigate agentic issues, each agent is grounded in an independently typed vocabulary and is equipped with reflection access to Unreal Engine methods and assets. Overall, the division of labor when constructing a scene helps reduce each agent's individual context, task complexity, and time spent working. To evaluate the effectiveness of the proposed approach, we conducted a series of experiments. The first demonstrates the benefits of incorporating reflection. The second examines agent design considerations, including the importance and nuances of task specification, role, and structured output. The final experiment demonstrates the potential of our framework for constructing complex datasets to train and evaluate computer vision algorithms for drones.
14029-6
Author(s): Kane Miller, Branndon Jones, Amir Shirkhodaie, Tennessee State Univ. (United States); Greg Furlich, Univ. of Colorado Boulder (United States); Jarhym Christopher, Varlin Sheffey, U.S. Space Force (United States)
27 April 2026 • 10:50 AM - 11:10 AM EDT | National Harbor 7
Identifying, classifying, and monitoring space launch vehicles is vital to Space Domain Awareness (SDA) and supports defensive decisions that protect critical assets in congested and contested orbital environments. This work introduces a physics-driven methodology for generating multimodal synthetic datasets that integrate vehicle dynamics, material properties, environmental conditions, and correlated EO/IR, LiDAR, and SAR signatures. The resulting observables capture formations, spatial extent, and geolocation characteristics across diverse sensor viewpoints, providing the variability needed to train AI systems. By simulating a wide range of operational scenarios and vehicle configurations, the dataset enables ML models to generalize beyond empirical baselines and adapt to dynamic, non-peacetime conditions. Applicable to ground-based and aerial sensing platforms, the approach supports predictive assessment of object behavior, system readiness, and operational performance. For defense applications, it improves mission planning, sensor tasking, and object custody while revealing observational gaps and informing architecture design to enhance resilience in modern space operations.
14029-7
Author(s): Mark D. Klein, ThermoAnalytics, Inc. (United States); Corey D. Packard, Logan Canull, Jacob W. Early, Audrey C. Levanen, ThermoAnalytics, Inc. (United States)
27 April 2026 • 11:10 AM - 11:30 AM EDT | National Harbor 7
Deep learning has proven effective for detecting and identifying embedded targets in image-based scenes, but high performance requires a large, diverse dataset for robust algorithm training. Obtaining sufficient measured data of adversarial assets can be particularly challenging in the thermal infrared wavebands, but an accurate synthetic image-generation methodology can supplement available measured imagery. In this study, synthetic images of UAVs are generated using physics-based MuSES infrared simulations for a variety of weather conditions, times of day, and sensor and target positions. This automated process for generating comprehensive datasets can incorporate motion blur of spinning propeller blades and insert synthetic targets into measured background scenes. The performance of YOLO11 and Faster-RCNN algorithms is evaluated using sky backgrounds for various target resolutions to assess effectiveness in detecting and recognizing UAVs in the thermal infrared spectrum. Detection and identification accuracy are examined for measured and synthetic test images at several sensor-target ranges.
Break
Lunch Break 11:30 AM - 1:30 PM
Session 3: Data Engineering and Curation
27 April 2026 • 1:30 PM - 2:30 PM EDT | National Harbor 7
Session Chair: Christopher L. Howell, DEVCOM C5ISR (United States)
14029-8
Author(s): Yasha Saxena, Duke Univ. (United States), Covar, LLC (United States); Nathan Varberg, Kenneth Morton, Covar, LLC (United States)
27 April 2026 • 1:30 PM - 1:50 PM EDT | National Harbor 7
Computer vision techniques for articulating 3D objects are dependent on learning from large amounts of training data that characterize the full range of object types, appearance, pose relative to the camera, and deformations. To overcome this dependence on exhaustive 2D training sets, we propose a zero-shot 4D template matching approach that learns instantaneous 3D structure from static images and evaluates the likelihood of a temporal deformation model. We fit Hidden Markov Models to a model library of synthetic, 3D animations of aerial objects (bird, drone, parachute, helicopter, flag), and compute the likelihood that a given 4D input fits the process. Using this approach, we demonstrate reliable classification across two domains: synthetic 4D inputs outside the model library and 4D inputs reconstructed from real RGB-D video sequences. Overall, this work establishes an unsupervised, zero-shot approach to non-rigid, aerial object detection that is both effective in practice and inherently extensible, scaling seamlessly to new object types through the simple addition of synthetic animations to the model library.
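For readers unfamiliar with the scoring step, here is a minimal sketch of per-class HMM likelihood classification using hmmlearn; the toy 1-D trajectories and model sizes are stand-ins, not the paper's 3D shape features or its animation library.

```python
# Minimal sketch of classify-by-HMM-likelihood; illustrative only.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_sequences(freq: float, n_seq: int = 20, length: int = 100) -> np.ndarray:
    """Toy 1-D 'deformation' trajectories; stand-ins for 3D shape features."""
    t = np.linspace(0, 2 * np.pi, length)
    return np.concatenate([
        (np.sin(freq * t) + 0.1 * rng.standard_normal(length))[:, None]
        for _ in range(n_seq)
    ])

# Fit one HMM per class in the model library (e.g., "bird" vs. "drone").
library = {}
for name, freq in [("bird", 3.0), ("drone", 0.5)]:
    X = make_sequences(freq)
    model = GaussianHMM(n_components=4, covariance_type="diag", random_state=0)
    model.fit(X, lengths=[100] * 20)
    library[name] = model

# Classify an unseen sequence by maximum log-likelihood under each class HMM.
query = make_sequences(3.0, n_seq=1)
scores = {name: m.score(query) for name, m in library.items()}
print(max(scores, key=scores.get), scores)
```

Extending to a new object type then amounts to fitting one more HMM on its synthetic animations, which is the extensibility the abstract highlights.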
14029-9
Author(s): Sakura Swain, Huntington Ingalls Industries, Inc. (United States); Robert Nguyen, Booz Allen Hamilton Inc. (United States); Derek Schesser, Christopher D. Meyer, DEVCOM Army Research Lab. (United States); Bryan I. Vogel, Booz Allen Hamilton Inc. (United States); John Vines, DEVCOM Army Research Lab. (United States); Lily Weng, Univ. of California, San Diego (United States); Heesung Kwon, James Uplinger, DEVCOM Army Research Lab. (United States)
27 April 2026 • 1:50 PM - 2:10 PM EDT | National Harbor 7
The use of synthetic imagery to train AI/ML algorithms continues to play a significant role in supplementing datasets to improve algorithm performance. AI/ML algorithms trained with synthetic data suffer from the synthetic-to-real domain gap, in which algorithms trained solely on synthetic data do not perform as well as algorithms trained on real data. This work explores the use of network dissection to identify concept differences between synthetic and real imagery to inform modifications of synthetic imagery that reduce the domain gap.
14029-10
Author(s): Dylan Nicolini, Univ. of Hartford (United States); Ying Yu, Univ. of Hartford (United States)
27 April 2026 • 2:10 PM - 2:30 PM EDT | National Harbor 7
Credit card fraud exceeded $10 billion in U.S. losses in 2023, yet fewer than 5% of transactions carry verified labels. The Gated Temporal Attention Network (GTAN, Xiang et al. AAAI-23) addresses class imbalance via semi-supervised label propagation over temporal transaction graphs. Three sampling strategies are evaluated on IEEE-CIS, benchmarked against S-FFSD: stride-based downsampling, front-loaded fraud concentration, and pre-context window sampling. Data partitioning uses a standard stratified split; temporal ordering is enforced separately during graph construction. Threshold sweeps show sampling strategy determines score calibration: stride-based downsampling sustains high precision across a wide threshold range, while front-loaded and pre-context strategies yield lower ceilings. Pre-context window sampling inflects at a lower fraud concentration than front-loading, supporting the hypothesis that authentic temporal predecessors strengthen risk propagation. Metrics are fold-order dependent due to GTAN's single-checkpoint evaluation, requiring multi-seed averaging for robustness. Zero-shot cross-dataset transfer yields highly unstable recall due to embedding reinitialization.
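Of the three strategies, stride-based downsampling is the simplest to picture. Below is a minimal pandas sketch under the real IEEE-CIS column names (TransactionDT, isFraud); the helper and stride value are hypothetical, not the paper's configuration.

```python
# Sketch of stride-based downsampling on a transaction table; illustrative,
# not the paper's sampling code.
import pandas as pd

def stride_downsample(df: pd.DataFrame, stride: int,
                      label_col: str = "isFraud") -> pd.DataFrame:
    """Keep every `stride`-th legitimate transaction plus all fraud rows,
    preserving temporal order so graph construction stays causal."""
    df = df.sort_values("TransactionDT")        # IEEE-CIS timestamp column
    legit = df[df[label_col] == 0].iloc[::stride]
    fraud = df[df[label_col] == 1]
    return pd.concat([legit, fraud]).sort_values("TransactionDT")

# Hypothetical usage: ~3.5% fraud becomes far denser after a stride of 10.
# sampled = stride_downsample(pd.read_csv("train_transaction.csv"), stride=10)
```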
Symposium Plenary
27 April 2026 • 5:30 PM - 7:00 PM EDT | Potomac A

View Full Details: spie.org/ds/symposium-plenary

Chair welcome and introduction
27 April 2026 • 5:30 PM - 5:40 PM EDT

Space’s role in the DAF BATTLE NETWORK (Plenary Presentation)
Presenter(s): Raj Agrawal, Military Deputy for Space, Department of the Air Force Portfolio Acquisition Executive for Command, Control, Communications and Battle Management (DAF PAE C3BM) (United States)
27 April 2026 • 5:40 PM - 6:20 PM EDT

Science and technology determines the future (Plenary Presentation)
Presenter(s): Stacie Williams, Chief Science Officer, Headquarters United States Space Force (United States)
27 April 2026 • 6:20 PM - 7:00 PM EDT

Symposium Panel on Counter Unmanned Systems: Challenges, Opportunities, and Crossover Technology
28 April 2026 • 8:30 AM - 10:00 AM EDT | Potomac A

View Full Details: spie.org/ds/symposium-panel

Unmanned systems increasingly pose concerns to defense and security across the globe with their rapidly evolving, diverse range of platform, sensor, sensing, command and control, autonomy, and attack technologies; their mass production; and their widespread adoption. Please join our illustrious panelists and moderator as we discuss existing and future challenges, opportunities, and crossover technologies to counter unmanned systems at this symposium-wide panel.

Break
Coffee/Exhibition Break 10:00 AM - 11:00 AM
Opening Remarks
28 April 2026 • 11:00 AM - 11:10 AM EDT | National Harbor 7
Session Chair: Kimberly E. Manser, DEVCOM C5ISR (United States)
Opening remarks for Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications IV.
Session 4: Keynote Session
28 April 2026 • 11:10 AM - 12:10 PM EDT | National Harbor 7
Session Chair: Kimberly E. Manser, DEVCOM C5ISR (United States)
14029-11
Author(s): Kimberly E. Manser, DEVCOM C5ISR (United States)
28 April 2026 • 11:10 AM - 12:10 PM EDT | National Harbor 7
Dr. Manser will discuss her work as Subject Matter Expert in Synthetic Data for the Army's DEVCOM C5ISR Center, focusing on applications for Defense in Computer Vision and related AI/ML tasks. The talk centers on the complexity of synthetic data generation, efforts to improve fidelity and performance, and a diverse array of current application spaces.
Break
Lunch/Exhibition Break 12:10 PM - 4:00 PM
Session 5: Session of Interest on Artificial Intelligence: Joint Session with Conferences 14029 and 14031
28 April 2026 • 4:00 PM - 5:00 PM EDT | National Harbor 10
Session Chairs: Kenny Chen, Lockheed Martin Missiles and Fire Control (United States); Celso M. De Melo, DEVCOM Army Research Lab. (United States)
14029-12
Author(s): Josh Walters, Matthew Mills, Dylan Stewart, David Riquelmy, Stuart Fowler, Torch Technologies, Inc. (United States)
28 April 2026 • 4:00 PM - 4:20 PM EDT | National Harbor 10
Infrared (IR) detection and tracking technologies are advancing rapidly with the integration of artificial intelligence and machine learning (AI/ML). While these tools enhance capability and automation, they also present challenges in ensuring consistent and transparent system performance across diverse operational conditions. Reliable AI/ML training depends on large, high-quality datasets, yet real IR data are often limited. Synthetic data offers a promising solution, but its influence on model performance and trustworthiness remains uncertain. This work compares AI object detectors trained using the empirical ATR Algorithm Development Image Database (ADID) and a synthetic counterpart developed by Perez, Vanstone, et al. Leveraging explainable AI (XAI) tools, the study evaluates how synthetic data may affect model decisions. Results contribute to a framework for validating AI/ML systems that use synthetic data, promoting greater transparency, reliability, and confidence in future IR sensing applications.
14029-13
Author(s): David A. Vaitekunas, W. R. Davis Engineering, Ltd. (Canada)
28 April 2026 • 4:20 PM - 4:40 PM EDT | National Harbor 10
The naval ship infrared signature model and naval threat countermeasure simulator (ShipIR/NTCS) developed by W.R. Davis Engineering Ltd. has undergone extensive validation since its adoption as a NATO standard, including US Navy accreditation for Live Fire Test and Evaluation of the DDG-79 (Flight IIA) Aegis Guided Missile Destroyer and Contract Design of the DDG-1000 (USS Zumwalt) Guided Missile Destroyer. One of the features that makes this tool well suited for input to AI/ML applications is its fully deterministic approach to thermal/EOIR modelling. With the delivery of a turn-key ShipIR thermal and EOIR model of each class of ship, only a few climatic and ship operational inputs are required to construct a full operating EOIR scenario with both transient background and ship signature conditions. This paper will describe the overall framework as well as its scene rendering and scenario model updating capabilities. It is hoped that new use case evaluations might result from the presentation of these details to a wider Artificial Intelligence and Machine Learning community.
14031-19
Author(s): Chris Mesterharm, Noah Guilbault, Ritu Chadha, Constantin Serban, Razvan Stefanescu, Peraton Labs (United States)
28 April 2026 • 4:40 PM - 5:00 PM EDT | National Harbor 10
Modern automated target recognition (ATR) systems can use a diverse set of modalities to identify targets. Of particular importance are infrared emissions for night-time detection. In this paper, we describe a physically implemented infrared camouflage system that uses a limited number of temperature-adjustable panels that can be placed on a vehicle and adversarially trained to evade ATR by changing the infrared radiation emitted by the panels. Testing with a YOLO ATR with and without the camouflage activated, we consistently degrade detection confidence from 90% to below 52%. This was achievable even when activating only 14 of the 24 available panels. This improves upon previous work by creating a cost-effective solution that is optimized to avoid ATR instead of attempting to modify the infrared pattern of the entire target.
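The confidence comparison can be reproduced in spirit with an off-the-shelf detector. A sketch using the ultralytics YOLO API; the weights file, image paths, and class index are placeholders, not the paper's ATR model or data.

```python
# Sketch of a with/without-camouflage confidence check; illustrative only.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # stand-in detector; the paper's ATR model differs

def max_vehicle_conf(image_path: str, vehicle_cls: int = 7) -> float:
    """Highest detection confidence for the vehicle class in one frame."""
    result = model(image_path)[0]
    confs = [float(c) for c, k in zip(result.boxes.conf, result.boxes.cls)
             if int(k) == vehicle_cls]           # 7 = "truck" in COCO (assumed)
    return max(confs, default=0.0)

# Hypothetical usage on paired frames of the same vehicle:
# print(max_vehicle_conf("vehicle_panels_off.jpg"))  # e.g., ~0.90
# print(max_vehicle_conf("vehicle_panels_on.jpg"))   # degraded, e.g., <0.52
```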
Opening Remarks
29 April 2026 • 8:40 AM - 8:50 AM EDT | National Harbor 7
Session Chair: Kimberly E. Manser, DEVCOM C5ISR (United States)
Opening remarks for Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications IV.
Session 6: Generative AI
29 April 2026 • 8:50 AM - 10:10 AM EDT | National Harbor 7
Session Chair: Keith F. Prussing, Georgia Tech Research Institute (United States)
14029-15
Author(s): James Son, Virginia Tech (United States); Won Hee Koh, Univ. of Arizona (United States); Yungjun Yoo, Johns Hopkins Univ. Applied Physics Lab. (United States)
29 April 2026 • 8:50 AM - 9:10 AM EDT | National Harbor 7
Adverse weather, such as heavy fog, rain, or snowstorms, reduces the performance of object detection in surveillance systems. Collecting diverse real-world data for these rare events is difficult, limiting model robustness. To address this, we propose a generative-AI-based coherent data augmentation framework that synthetically simulates realistic weather effects on surveillance footage. Using image-to-video generation, prompt engineering, and cascading pipelines, our method produces photorealistic data for benchmarking and developing detectors capable of sustaining accuracy under challenging environmental conditions.
14029-16
Author(s): Ella P. Fokkinga, Jan Erik van Woerden, Thijs A. Eker, Sebastiaan P. Snel, Elfi IS Hofmeijer, Klamer Schutte, Friso G. Heslinga, TNO (Netherlands)
29 April 2026 • 9:10 AM - 9:30 AM EDT | National Harbor 7
This study explores using diffusion-based synthetic data to improve military vehicle detection in low-data environments. We fine-tuned FLUX.1 [dev] via LoRA using only 8 or 24 images per class to generate training samples for an RF-DETR detector. Results show synthetic data significantly boosts performance, yielding up to +8.0% mAP50 in the 8-sample regime. Additionally, incorporating structural guidance via ControlNet provided further gains (+4.1% mAP50) under extreme scarcity. Our findings demonstrate that object-specific generative models offer a powerful, data-efficient alternative to traditional simulation pipelines for training military AI systems when real-world data is limited.
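As a sketch of what the generation step could look like with the Hugging Face diffusers FluxPipeline; the LoRA weights path, prompt, and sampler settings below are hypothetical, and the TNO fine-tuning recipe is not reproduced here.

```python
# Sketch of LoRA-conditioned FLUX.1 generation; illustrative only.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("path/to/vehicle_lora")   # LoRA trained on 8-24 images

# Generate a synthetic detector-training sample for one vehicle class.
image = pipe(
    "a <vehicle-class> military vehicle on a dirt road, overcast",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("synthetic_sample.png")
```

The ControlNet variant mentioned in the abstract would additionally condition each sample on a structural input (e.g., an edge or depth map) to control vehicle pose and layout under extreme data scarcity.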
14029-18
Author(s): Nicolas Hueber, Alexander Pichler, Damien Delmas, Etienne Bieber, Institut Franco-Allemand de Recherches de Saint-Louis (France)
29 April 2026 • 9:30 AM - 9:50 AM EDT | National Harbor 7
This work presents an indoor proving ground framework for training and evaluating air-to-ground vehicle detection models. To avoid costly outdoor acquisition campaigns, we reuse miniature military vehicle models—previously employed for semi-synthetic training—within a physically realistic diorama that simulates diverse environments and conditions such as occlusions, clutter, smoke, and camouflage. Images are captured using real camera systems, ensuring realistic sensor characteristics. By employing identical miniature vehicles in both training and testing, the approach reduces the domain gap between synthetic and real data, enabling focused investigation of model performance under complex scenes and between visually similar military vehicles. The framework provides a cost-effective, scalable solution for developing robust AI-based vehicle detection, recognition, and identification.
14029-27
Author(s): Prachet Upadrashta, Jared M. West, Sean M. Coffey, U.S. Military Academy (United States)
29 April 2026 • 9:50 AM - 10:10 AM EDT | National Harbor 7
This work evaluates the efficacy of modern detection techniques against diffusion-generated satellite imagery in restrictive battlefield conditions. The approach evaluates common open-source techniques, such as comparing semantic embeddings and evaluating residual image noise, as well as a novel hybrid classification framework that combines semantic and forensic features. Real satellite images are compared against Stable Diffusion 3.5–generated imagery subjected to realistic levels of expected degradation from corruption, compression, and other negative artifacts common to the battlefield. This work also evaluates the resource burden inherent in each of the evaluated techniques to help quantify the feasibility of each in a constrained environment. Results show that semantic features dominate detection performance, while traditional PRNU methods largely fail on diffusion imagery. The hybrid approach improves calibration and confidence, highlighting limitations of classical forensics and the need for multi-modal detection strategies in ISR contexts.
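The semantic-embedding branch can be illustrated with CLIP features. A minimal sketch via transformers; the model choice, file names, and similarity-based decision rule are assumptions, and the paper's forensic (PRNU) branch and hybrid classifier are not shown.

```python
# Sketch of the semantic-embedding comparison only; illustrative.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    """L2-normalized CLIP image embedding for one tile."""
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        feat = model.get_image_features(**inputs)
    return feat / feat.norm(dim=-1, keepdim=True)

# Hypothetical: score a candidate tile against a known-real reference tile.
# A low cosine similarity to real reference imagery is one weak signal of
# synthetic origin; the paper fuses this with forensic features.
# sim = (embed("candidate.png") @ embed("reference.png").T).item()
```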
Session 7: Integrated Machine Learning and Synthesis Pipelines
29 April 2026 • 10:10 AM - 11:30 AM EDT | National Harbor 7
Session Chair: Kimberly E. Manser, DEVCOM C5ISR (United States)
14029-19
Author(s): Michael A. Mardikes, Brian C. McGuigan, Michael Darling, Alan H. Hesu, Sandia National Labs. (United States)
29 April 2026 • 10:10 AM - 10:30 AM EDT | National Harbor 7
Reliable electro-optical UAS sensing is challenged by low-pixels-on-target imagery and confusion between UAS and avian species, where operationally important regimes are sparsely represented in labeled data. This paper describes methodological building blocks for a closed-loop synthetic data generation (SDG) capability, focusing on feedback signals that prioritize which synthetic scenarios to generate next. We combine (i) regime-conditioned performance analysis using metadata-driven grouping and quantile binning over SDG parameters (e.g., range and location) with (ii) ensemble-based uncertainty quantification to identify parameter regions associated with elevated error and uncertainty. We further include a simulation-to-real fidelity benchmark using paired real-synthetic replicas and image similarity metrics (SSIM, CW-SSIM, FSIM) to contextualize synthetic-data quality and expected transfer limitations. The resulting framework provides a practical strategy for selecting targeted synthetic augmentations and defining hypotheses to be tested in future end-to-end closed-loop iterations. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
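Of the three fidelity metrics named, SSIM is directly available in scikit-image. A minimal sketch of the paired real-synthetic comparison; the helper and file names are hypothetical, and CW-SSIM/FSIM require other packages and are omitted.

```python
# Sketch of a paired real/synthetic fidelity check with SSIM; illustrative.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fidelity_score(real: np.ndarray, synthetic: np.ndarray) -> float:
    """SSIM between a real frame and its synthetic replica (grayscale)."""
    data_range = float(real.max() - real.min())
    return ssim(real, synthetic, data_range=data_range)

# Hypothetical usage on paired frames loaded as 2-D arrays:
# real = imageio.imread("real_frame.png")
# synth = imageio.imread("synthetic_replica.png")
# print(fidelity_score(real, synth))
```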
14029-20
Author(s): Ian McKechnie, Leslie M. Collins, John Board, Duke Univ. (United States); Jordan Malof, Univ. of Missouri (United States)
29 April 2026 • 10:30 AM - 10:50 AM EDT | National Harbor 7
The human visual system takes only seconds to learn to identify new or modified objects, but it takes computer vision models months to years to do the same thing. Synthetic data promises to close this gap, but it still takes months to generate useful synthetic datasets even with significant amounts of compute. This time delay and resource requirement means that military units cannot adapt their AI targeting and autonomous systems at the edge. We demonstrate that, using our fully open-source SIMPL data generation method, existing overhead detection and classification models for computer vision tasks can be fine-tuned to adapt to and/or recognize novel targets in under 24 hours with minimal compute. This period includes generation of training data and model training/testing. This revolutionary jump in speed enables military units to adapt their targeting systems in the field at the speed of relevance, enabling them to remain effective in the face of a constantly changing adversary. We provide an ablation on the performance achievable in a 24-96 hour adaptation period (e.g., standard military mission planning periods) based on available compute hardware: CPU
14029-21
Author(s): Joseph Greene, Georgia Tech Research Institute (United States); Alfred Moore, Wyant College of Optical Sciences (United States); Iris Ochoa, Emily Kwan, Patrick Marano, Georgia Institute of Technology (United States); Christopher R. Valenta, Georgia Tech Research Institute (United States)
29 April 2026 • 10:50 AM - 11:10 AM EDT | National Harbor 7
Optical turbulence degrades free-space optical systems and is costly to model accurately. Here, we present TurPy, a high-fidelity, differentiable wave optics simulator for generating synthetic turbulence data. Using a memory-efficient split-step autoregressive algorithm with compressed phase screens, TurPy captures key atmospheric effects like wind, convection, and temporal correlation. It connects seamlessly to deep learning workflows to train downstream algorithms as well as enabling end-to-end optimization of system parameters via gradient-based methods. Validated against theoretical models with over 95% accuracy, TurPy also automates phase screen placement and demonstrates preliminary wavefront optimization. This tool bridges physics-based simulation and AI to enhance data generation, system design, and optical performance under turbulence.
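Split-step (angular spectrum) propagation through phase screens is the core of simulators like this. A crude NumPy sketch under assumed grid and turbulence parameters; this is not TurPy, and the phase-screen scaling here is only heuristically Kolmogorov-like.

```python
# Crude split-step propagation sketch; illustrative, not TurPy.
import numpy as np

N, dx, wvl, dz = 256, 1e-3, 1.55e-6, 500.0      # grid, meters, step (assumed)
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, (1 / wvl) ** 2 - FX**2 - FY**2))
H = np.exp(1j * kz * dz)                        # vacuum transfer function

def phase_screen(rng, r0=0.05):
    """Kolmogorov-like screen via FFT-filtered noise; scaling is heuristic."""
    f = np.sqrt(FX**2 + FY**2)
    f[0, 0] = f[0, 1]                           # avoid divide-by-zero at DC
    psd = 0.023 * r0 ** (-5 / 3) * f ** (-11 / 3)
    noise = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return np.fft.ifft2(noise * np.sqrt(psd)).real / dx

rng = np.random.default_rng(0)
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
U = np.exp(-(X**2 + Y**2) / (2 * (2e-3) ** 2))  # Gaussian beam, w ~ 2 mm

for _ in range(10):                             # alternate propagate / perturb
    U = np.fft.ifft2(np.fft.fft2(U) * H)
    U *= np.exp(1j * phase_screen(rng))

print("on-axis intensity:", float(np.abs(U[N // 2, N // 2]) ** 2))
```

Because every step here is a differentiable array operation, the same structure ports directly to an autodiff framework, which is what enables the end-to-end, gradient-based optimization the abstract describes.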
14029-22
Author(s): Alexander Li, Massachusetts Institute of Technology (United States); Jared Augsburger, Nathan Jones, Air Force Research Lab. (United States)
29 April 2026 • 11:10 AM - 11:30 AM EDT | National Harbor 7
Inferring initial parameters from an output is a major challenge for "black-box" generative models, as a single output can map to multiple valid inputs. We introduce a novel framework using a conditional denoising diffusion model to predict a multimodal distribution of rendering parameters from a 2D image. By analyzing visual cues like shadows and perspective, our model learns the complex mapping between an image and its source parameters. This approach allows us to generate a diverse, yet accurate range of plausible parameter sets from a single image, effectively capturing the problem's inherent ambiguity. The method is demonstrated by inferring camera and lighting settings used to render a 3D vehicle.
Conference Chair
DEVCOM C5ISR (United States)
Conference Chair
DEVCOM C5ISR (United States)
Conference Chair
DEVCOM Army Research Lab. (United States)
Conference Co-Chair
DEVCOM Army Research Lab. (United States)
Program Committee
Univ. of Missouri (United States)
Program Committee
Johns Hopkins Univ. (United States)
Program Committee
Univ. of Maryland, College Park (United States)
Program Committee
Georgia Tech Research Institute (United States)
Program Committee
Naval Information Warfare Ctr. Pacific (United States)
Program Committee
Covar, LLC (United States)
Program Committee
Covar, LLC (United States)
Program Committee
Air Force Research Lab. (United States)
Additional Information

View call for papers

