26 - 30 April 2026
National Harbor, Maryland, US

Automatic Target Recognition (ATR) is at an inflection point. Foundation‐scale vision transformers, label‑efficient learning, generative diffusion pipelines, cross‑sensor fusion graphs, and ultra‑low‑power edge hardware are rapidly reshaping how targets are detected, tracked, and identified across electro‑optical (EO), infrared (IR), radar (SAR), LiDAR, hyperspectral, neuromorphic (event cameras), acoustic, and emerging sensing modalities. The ATR XXXVI conference invites original research that advances this multidomain, AI‑enabled frontier while meeting real‑world constraints of trust, robustness, and sustainability.

Special themes for 2026
  1. Foundation and multimodal models: vision transformers, vision-language models, or large graph/fusion networks trained across EO/IR/RF/HSI/SAR
  2. Label-efficient and continual learning: self-supervised, few-/zero-shot, class-incremental, or federated methods that thrive with limited or evolving data
  3. Generative AI and synthetic data: diffusion or other generative approaches for data augmentation, domain adaptation, or digital-twin pipelines
  4. Edge and neuromorphic ATR: energy-efficient algorithms, event-based sensors, and SWaP-constrained deployments (<1 W NPUs, FPGAs, or neuromorphic MCUs)
  5. Trustworthy and physics-aware AI: explainability, uncertainty quantification, adversarial robustness, and models that embed phenomenology
  6. Sustainable ATR: low-carbon training, resource-aware deployment, and lifecycle impact

Submissions need not be limited to these themes; they highlight timely opportunities for cross-pollination.

Solicited topics:

Machine learning for ATR

Multi-sensor and cross‑modal fusion

EO, IR, and event camera systems

Hyperspectral/multispectral systems

Radar, LiDAR, and sonar systems

Algorithms, architectures and tools

Mission applications and human factors

Panel discussion: Machine Learning for Automatic Target Recognition (ML4ATR 2026)

Continuing our successful ML4ATR series, we will convene experts to discuss breakthroughs in label-efficient learning, multimodal fusion, generative synthetic data, and trustworthy AI. Panelists will survey current trends in AI/ML for ATR, followed by an audience Q&A.

Joint session with Infrared Technology and Applications and Synthetic Data for AI/ML conferences

A joint AI/ML session will explore IR‑centric ATR across defense and commercial domains.

Best Paper and Best Student Paper Awards

To be eligible for these awards, you must submit a manuscript, be accepted for an oral presentation, and you or a co-author must present the paper on-site. Student entrants must be enrolled in a university degree-granting program; a student remains eligible if the abstract was accepted during the academic year in which the student graduated. Manuscripts will be judged on technical merit, presentation and speaking skills, and audience interaction. Winners will be announced after the meeting and noted in the proceedings. All winners will receive an award certificate and recognition on SPIE.org.

Conference 14031

Automatic Target Recognition XXXVI

27 - 29 April 2026 | National Harbor 5
  • Opening Remarks
  • 1: Physics-Driven Intelligence: Classical Methods Reimagined for Modern ATR
  • 2: Synthetic Worlds, Real Impact: Robust ATR Through Domain Gaps, Privacy, and OOD Defense
  • Symposium Plenary
  • Symposium Panel on Counter Unmanned Systems: Challenges, Opportunities, and Crossover Technology
  • 3: Beyond Pixels: Multimodal Reasoning With VLMs, LLMs, and Geospatial Intelligence
  • 4: Advanced Sensing and Fast ATR: LiDAR, Radar, and Neuromorphic Event Processing
  • Remarks
  • 5: Session of Interest on Artificial Intelligence: Joint Session with Conferences 14029 and 14031
  • Opening Remarks
  • 6: Panel Discussion: Machine Learning for Automatic Target Recognition (ML4ATR)
  • Remarks
  • 7: Artificial Intelligence II: Joint Session with Conferences 14031 and 14037
Opening Remarks
27 April 2026 • 8:20 AM - 8:30 AM EDT | National Harbor 5
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
Opening remarks for Automatic Target Recognition XXXVI.
Session 1: Physics-Driven Intelligence: Classical Methods Reimagined for Modern ATR
27 April 2026 • 8:30 AM - 9:50 AM EDT | National Harbor 5
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
14031-1
Author(s): Asem Hassan, University of Arizona (United States); Mohamed Elkabbash, The Univ. of Arizona (United States)
27 April 2026 • 8:30 AM - 8:50 AM EDT | National Harbor 5
We present a physics-based framework for automatic target recognition that combines the robustness of the Generalized Likelihood Ratio Test (GLRT) with a new sequential estimation strategy. GLRT is a proven detection method that also provides parameter estimates, but its high computational cost limits real-time use. Our approach introduces “superparameters”—intermediate variables capturing key aspects of the target state—that are estimated first, reducing complexity before full parameter recovery. This enables low-latency, low-power detection and characterization of targets in EO/IR and other sensing modalities, while maintaining GLRT’s performance in low-SNR conditions. The method is applicable to defense and surveillance missions requiring both accurate detection and rapid parameter estimation in operational environments.
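For orientation (the notation below is generic, not drawn from the paper), the GLRT this approach builds on replaces unknown target parameters with their maximum-likelihood estimates before thresholding:

\[
\Lambda_{\mathrm{GLRT}}(\mathbf{x}) \;=\; \frac{\max_{\boldsymbol{\theta}}\, p(\mathbf{x}\mid H_1,\boldsymbol{\theta})}{p(\mathbf{x}\mid H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma, \qquad \hat{\boldsymbol{\theta}} \;=\; \arg\max_{\boldsymbol{\theta}}\, p(\mathbf{x}\mid H_1,\boldsymbol{\theta}).
\]

The maximization over the full parameter vector is the expensive step; the "superparameter" strategy described above estimates a low-dimensional intermediate parameterization first and recovers the full parameters only for candidate detections.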
14031-2
Author(s): Patrick Schuetterle, Northrop Grumman Corp. (United States); Aarya Riasati, Caltech (United States); Vahid R. Riasati, Aerospace Corp (United States)
27 April 2026 • 8:50 AM - 9:10 AM EDT | National Harbor 5
Digital image enhancement incorporates prediction and estimation based on local and global features of the image. Artificial intelligence techniques require training on known information and can be a good tool for image enhancement; however, if the learning process is incomplete or corrupted by faulty and irrelevant images, these techniques can falter. In this work we investigate a robust predictive technique, the Kalman filter (KF), often used for tracking and guidance applications, and evaluate its performance for image enhancement and recovery. To expand our work, we have taken a few liberties with the KF and utilized a Bayesian approach along with a Hidden Markov Model estimator to improve its performance. Other statistical techniques based on probability density function estimation have also been investigated in combination with the KF, given that the image's structural information may be utilized for these purposes.
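As background for the approach described above, a minimal per-row scalar Kalman filter in Python (an illustrative sketch under an assumed random-walk state model, not the authors' implementation; the Bayesian and HMM extensions are omitted):

import numpy as np

def kalman_1d(measurements, q=1e-3, r=1e-1):
    """Scalar Kalman filter: smooth a noisy 1-D signal (e.g., one image row).

    q: process-noise variance, r: measurement-noise variance (assumed values).
    """
    x_est, p_est = measurements[0], 1.0            # initial state and covariance
    out = np.empty_like(measurements, dtype=float)
    for k, z in enumerate(measurements):
        # Predict (random-walk state model)
        x_pred, p_pred = x_est, p_est + q
        # Update with measurement z
        k_gain = p_pred / (p_pred + r)             # Kalman gain
        x_est = x_pred + k_gain * (z - x_pred)
        p_est = (1.0 - k_gain) * p_pred
        out[k] = x_est
    return out

# Example: smooth each row of a noisy grayscale image independently
noisy = np.random.default_rng(0).normal(0.5, 0.1, size=(64, 64))
smoothed = np.vstack([kalman_1d(row) for row in noisy])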
14031-3
Author(s): Ismail I. Jouny, Lafayette College (United States)
27 April 2026 • 9:10 AM - 9:30 AM EDT | National Harbor 5
This paper examines the effects of denoising on radar target identification. Three denoising techniques are examined: 1) denoising using wavelet decomposition, 2) denoising using convolutional autoencoders, and 3) denoising using generative adversarial networks. The paper also addresses the question of whether any denoising improves target recognition performance. Real stepped-frequency radar data are used to test these denoising algorithms; the data represent backscatter from commercial aircraft models as recorded in a compact range. The issue of azimuth ambiguity is also considered, as are computational cost and speed. The time needed to denoise the radar backscatter is of particular importance because of the urgency of making an identification decision in a real battle scenario.
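A minimal sketch of the first of the three techniques compared above, soft-threshold wavelet denoising of a range profile in Python (the wavelet, decomposition level, and universal-threshold rule are illustrative assumptions, not the paper's settings):

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3):
    """Soft-threshold wavelet denoising of a 1-D radar range profile.

    Threshold follows the common universal rule sigma*sqrt(2*log N),
    with sigma estimated from the finest-scale coefficients.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Example: a noisy synthetic range profile (placeholder for real backscatter data)
rng = np.random.default_rng(1)
profile = np.exp(-0.5 * ((np.arange(256) - 128) / 6.0) ** 2) + 0.1 * rng.standard_normal(256)
clean = wavelet_denoise(profile)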
14031-4
Author(s): Johannes Bauer, The Univ. of Texas at San Antonio (United States); Efrain Gonzalez, Craig M. Vineyard, Sandia National Labs. (United States); William Severa, The Univ. of Texas at San Antonio (United States)
27 April 2026 • 9:30 AM - 9:50 AM EDT | National Harbor 5
Deep neural networks are powerful tools that achieve state-of-the-art performance for target recognition on SAR. Recent literature on SAR target recognition has heavily pursued large, complex neural networks trained on increasingly larger classification datasets. To justify the use of these models, a common assumption is that SAR datasets are sufficiently difficult to learn. Here, we examine this assumption. Using classifiers such as K-Nearest Neighbors in conjunction with simple image processing, we first show that basic ML models can achieve accuracies similar to DNNs. These accuracies give a baseline for evaluating how much complex models improve accuracy on SAR target recognition. Overall, our findings motivate new questions around the cost-benefit tradeoffs of large models and the true difficulty of the common benchmark datasets.
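A sketch of the kind of simple baseline described above: a K-Nearest Neighbors classifier on flattened, standardized SAR chips (random arrays stand in for real data; the paper's exact preprocessing is not specified in the abstract):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Placeholder for SAR chips: (n_samples, H, W) magnitude images and class labels.
rng = np.random.default_rng(0)
chips = rng.random((500, 32, 32))
labels = rng.integers(0, 10, size=500)

# "Simple image processing" here is just flattening + standardization.
X = chips.reshape(len(chips), -1)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)
print("baseline accuracy:", knn.score(X_test, y_test))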
Break
Coffee Break 9:50 AM - 10:20 AM
Session 2: Synthetic Worlds, Real Impact: Robust ATR Through Domain Gaps, Privacy, and OOD Defense
27 April 2026 • 10:20 AM - 11:50 AM EDT | National Harbor 5
Session Chair: Kristen Jaskie, Prime Solutions Group, Inc. (United States)
14031-6
Author(s): William Severa, University of Texas at San Antonio (United States); Johannes Bauer, The Univ. of Texas at San Antonio (United States); Craig M. Vineyard, Sandia National Laboratories (United States)
27 April 2026 • 10:20 AM - 10:40 AM EDT | National Harbor 5
Compromised target recognition systems could potentially leak important information about their training data. Preventing such a compromise is critical in maintaining operational security. As a potential mitigation, differential privacy methods can provide theoretical guarantees on dataset privacy. In this work, we examine how differentially private methodologies can be integrated into an ATR pipeline. In particular, we focus on the process of generating ATR templates that are differentially private as a means of protecting hypothetically sensitive training data. We provide an analysis of synthetic aperture radar templates using a distributional model, common in statistical ATR methods such as multinomial pattern matching. Our work provides an analysis of both the template generation itself as well as statistical tests derived from the generated templates, and we observe the steep costs of high-dimensional templates. By understanding the effect differential privacy has on template generation, we can begin to characterize the performance/privacy trade-offs of differentially private ATR.
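To make the idea concrete, a toy example of releasing a differentially private mean template with the Laplace mechanism (the normalization, sensitivity bound, and epsilon are illustrative assumptions; the paper's distributional template model is more involved):

import numpy as np

def dp_mean_template(chips, epsilon=1.0, value_range=1.0):
    """Release a differentially private mean template from normalized SAR chips.

    Laplace mechanism: each pixel of the mean is perturbed with noise scaled to
    the per-record sensitivity (value_range / n). Epsilon and the normalization
    are illustrative assumptions, not the authors' settings.
    """
    chips = np.asarray(chips, dtype=float)
    n = len(chips)
    mean = chips.mean(axis=0)
    sensitivity = value_range / n      # replacing one chip moves each pixel of the mean by at most this
    noise = np.random.default_rng(0).laplace(0.0, sensitivity / epsilon, size=mean.shape)
    return mean + noise

template = dp_mean_template(np.random.default_rng(1).random((200, 32, 32)), epsilon=0.5)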
14031-7
Author(s): Donald Waagen, Leidos (United States); Don Hulsey, Dynetics (United States); David Gray, Keefa Nelson, Air Force Research Lab (United States); Katie Rainey, Erin Hausmann, Naval Information Warfare Center Pacific (United States)
27 April 2026 • 10:40 AM - 11:00 AM EDT | National Harbor 5
Understanding the relationships between data points in the latent decision space derived by the deep learning system is critical to evaluating and interpreting the performance of the system on real world data. Detecting “Out-of-Distribution” (OOD) data for deep learning systems continues to be an active research topic. We investigate nonparametric online and batch approaches for estimating distributional separation or “outlierness”. Using open source simulated and measured Synthetic Aperture Radar (SAR) datasets, we empirically demonstrate that the concepts of OOD and “Out-of-Task” are not synonymous.
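One common nonparametric "outlierness" score of the kind discussed above is the distance to the k-th nearest in-distribution training feature; a minimal sketch (illustrative only, not the authors' estimator):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_outlier_scores(train_feats, test_feats, k=10):
    """Nonparametric outlierness: distance to the k-th nearest in-distribution
    training feature. Larger scores suggest out-of-distribution inputs.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(test_feats)
    return dists[:, -1]

# Example with random stand-ins for deep-network embeddings
rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 1.0, (1000, 64))
queries = np.vstack([rng.normal(0.0, 1.0, (50, 64)),    # in-distribution
                     rng.normal(4.0, 1.0, (50, 64))])   # shifted / OOD
scores = knn_outlier_scores(in_dist, queries)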
14031-9
Author(s): Christopher Pitts, University of New Mexico (United States), Sandia National Laboratories (United States); Devin White, Sandia National Labs. (United States); Trilce Estrada, Gruia-Catalin Roman, The Univ. of New Mexico (United States)
27 April 2026 • 11:00 AM - 11:20 AM EDT | National Harbor 5
Out-of-distribution (OOD) detection is an important part of automatic target recognition (ATR) systems. The capability to reject unknown classes improves reliability and trust in an ATR, and permits the use of otherwise closed-set classifiers where open-set recognition is necessary. In this paper we present multinomial feature matching (MFM), a method for detecting OOD data in the latent feature space of neural classifiers, and apply it to models trained on the SAMPLE+ dataset. We show that MFM has efficient and low-overhead runtime characteristics, and that it exhibits good performance when applied to OOD targets in SAMPLE+ and other SAR datasets, including both vehicle targets and clutter.
14031-28
Author(s): Eric M. Sturzinger, U.S. Army Artificial Intelligence Integration Ctr. (United States)
27 April 2026 • 11:20 AM - 11:50 AM EDT | National Harbor 5
Automatic Target Recognition (ATR) in tactical and remote sensing scenarios frequently encounters novel target classes absent from labeled training data, while multimodal sensors (EO, SAR, IR, radar) offer complementary cues for robust detection. Conventional unimodal or supervised methods struggle to discover and categorize these unseen targets in unlabeled streams. We propose a cross-modal novel class discovery framework leveraging a JEPA-based approach to learn independent embeddings across multiple modalities, enabling label-free identification of novel targets. Modality-specific encoders generate parallel embeddings and project them into latent space. Novel classes are discovered through sustained energy spikes in unimodal temporal prediction and spikes in cross-modal prediction. The framework provides novel class discovery resilient to sensor noise, occlusion, and deception. This work advances scalable, adaptive ATR for dynamic environments with emerging threats.
Symposium Plenary
27 April 2026 • 5:30 PM - 7:00 PM EDT | Potomac A

View Full Details: spie.org/ds/symposium-plenary

Chair welcome and introduction
27 April 2026 • 5:30 PM - 5:40 PM EDT

Space’s role in the DAF BATTLE NETWORK (Plenary Presentation)
Presenter(s): Raj Agrawal, Military Deputy for Space, Department of the Air Force Portfolio Acquisition Executive for Command, Control, Communications and Battle Management (DAF PAE C3BM) (United States)
27 April 2026 • 5:40 PM - 6:20 PM EDT

Science and technology determines the future (Plenary Presentation)
Presenter(s): Stacie Williams, Chief Science Officer, Headquarters United States Space Force (United States)
27 April 2026 • 6:20 PM - 7:00 PM EDT

Symposium Panel on Counter Unmanned Systems: Challenges, Opportunities, and Crossover Technology
28 April 2026 • 8:30 AM - 10:00 AM EDT | Potomac A

View Full Details: spie.org/ds/symposium-panel

Unmanned systems increasingly pose concerns to defense and security across the globe with their rapidly evolving, diverse range of platform, sensor, sensing, command and control, autonomy, and attack technologies; mass-production; and widespread adoption. Please join our illustrious panelists and moderator as we discuss existing and future challenges, opportunities, and crossover technologies to counter unmanned systems at this symposium-wide panel.

Break
Coffee/Exhibition Break 10:00 AM - 11:00 AM
Session 3: Beyond Pixels: Multimodal Reasoning With VLMs, LLMs, and Geospatial Intelligence
28 April 2026 • 11:00 AM - 12:00 PM EDT | National Harbor 5
Session Chair: Kristen Jaskie, Prime Solutions Group, Inc. (United States)
14031-10
Author(s): Marie Chau, Johns Hopkins Univ. Applied Physics Lab., LLC (United States)
28 April 2026 • 11:00 AM - 11:20 AM EDT | National Harbor 5
Target detection (TD) using Radar Range-Doppler Maps (RDMs) is a longstanding and critical challenge within the defense sector. Classical target detection approaches can struggle in cluttered environments. With the recent advancements of Vision-Language Models (VLMs), we explore an out-of-domain application of VLMs for target detection that leverages contextual information from human operators in the field. To enable this, we developed a scalable synthetic data generation pipeline for training and evaluation across two tasks: pointing and counting. We illustrate its effectiveness through numerical experiments by comparing an off-the-shelf VLM with a fine-tuned version. Our results show promise of this approach for enhancing radar-based target detection.
14031-11
Author(s): David F. Ramirez, Arizona State University (United States), Prime Solutions Group, Inc. (United States); Tim Overman, Kristen Jaskie, Prime Solutions Group, Inc. (United States); Andreas Spanias, Arizona State Univ. (United States)
28 April 2026 • 11:20 AM - 11:40 AM EDT | National Harbor 5
Understanding Earth’s dynamic changes requires automated recognition systems capable of temporal reasoning over remote sensing data. Extending the IARPA SMART heavy-construction dataset, we integrate newly generated text captions to describe evolving geospatial events. By fusing geo-INT image sequences, we train a multimodal large language model (MLLM) to generate descriptive captions and predict future target states. Our unique GPT-based transformer architecture includes a 7-billion-parameter model fine-tuned from a general foundation. We address challenges of sparse satellite collections through data-efficient training and visual question answering (VQA) data augmentation. Building upon prior work in SAR-based ATR and VQA, our results demonstrate that modern MLLMs can generalize across time-dependent sensing domains, advancing geospatial-temporal analysis for improved safety, security, and sustainability.
14031-12
Author(s): Sergey E. Lyshevski, Rochester Institute of Technology (United States); Serhii Kovbasa, Anton Holosha, National Technical Univ. of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” (Ukraine)
28 April 2026 • 11:40 AM - 12:00 PM EDT | National Harbor 5
The authors investigate detection, classification, indexing, and automatic tracking of multiple stationary and moving objects in complex dynamic environments. We conduct fundamental and applied research, as well as technology development in machine learning and pattern recognition, aimed at intelligence, surveillance, and reconnaissance (ISR). On mini unmanned aerial systems, a proof-of-concept onboard, actionable-intelligence-centric machine vision module is implemented using an ultra-compact, low-power single-board computer, cameras, and a transceiver. The developed learning algorithms are processed by single-stage, end-to-end fully convolutional neural networks (CNNs), and modified variants of these CNNs are investigated. The novelty lies in the use of new informative and regularizable loss functions with consequent hyperparameter optimization, as well as devised postprocessing observers that perform robust indexing and dynamic tracking. The data-driven algorithms and factorizations yield trustworthy learning models, improving overall capabilities and enabling decision superiority. For quantitative assessment, ISR-quantifiable metrics are evaluated in relevant environments.
Break
Lunch/Exhibition Break 12:00 PM - 1:30 PM
Session 4: Advanced Sensing and Fast ATR: LiDAR, Radar, and Neuromorphic Event Processing
28 April 2026 • 1:30 PM - 3:10 PM EDT | National Harbor 5
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
14031-16
Author(s): Steven Senczyszyn, Michigan Technological Univ. (United States); Ian D. Helman, Michigan Tech Research Institute (United States); Timothy Havens, Michigan Technological Univ. (United States); Adam J. Webb, Michigan Tech Research Institute (United States); Steven R. Price, U.S. Army Engineer Research and Development Ctr. (United States)
28 April 2026 • 1:30 PM - 1:50 PM EDT | National Harbor 5
Operational synthetic aperture radar (SAR) automatic target recognition (ATR) requires rejecting non-target objects while maintaining classification accuracy on known targets, but standard models trained on the MSTAR and SAMPLE datasets are prone to learning background shortcuts that undermine both goals. We establish this through target–background decomposition and dual saliency analysis: baseline domain classifiers achieve high accuracy by reading background texture, not target signatures. To correct this, we develop an integrated multi-task ATR pipeline combining chimera augmentation, base triplet loss, and chimera paired triplet mining, validated through a component ablation. The multi-task ATR pipeline redirects the domain head from background to target, and critically, domain accuracy survives even as domain clustering vanishes in the embedding space, indicating the domain head has shifted from background texture to target-level scattering differences. Baseline models also achieve perfect false alarm detection, but via background texture. A stress test breaks this ceiling, confirming that the pipeline’s detection is grounded in target signatures rather than background cues.
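For reference, the "base triplet loss" component named above is a standard margin-based triplet loss; a minimal PyTorch sketch (the chimera augmentation and paired triplet mining that form the paper's contribution are not reproduced here):

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss on L2-normalized embeddings.

    Pulls the anchor toward the positive (same target class) and pushes it away
    from the negative (different class) until their distances differ by `margin`.
    """
    anchor, positive, negative = (F.normalize(t, dim=-1) for t in (anchor, positive, negative))
    d_pos = (anchor - positive).pow(2).sum(dim=-1)
    d_neg = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

# Example: a batch of 8 triplets of 128-dimensional embeddings
loss = triplet_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))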
14031-14
Author(s): Riccardo Consolo, The MathWorks, Inc. (United States); Daniel Carvalho, DCS Corp (United States); Abhijit Bhattacharjee, The MathWorks, Inc. (United States); Jarrod Brown, Air Force Research Lab. (United States); Art Lompado, Polaris Sensor Technologies, Inc. (United States)
28 April 2026 • 1:50 PM - 2:10 PM EDT | National Harbor 5
Long-range perception at high frame rates remains a challenge for autonomous systems, with limited open-source resources. We present a long-range, narrow-FOV flash LiDAR dataset for 2D/3D vehicle detection/segmentation: over 70,000 frames at 20 Hz up to 1000 m, covering eight vehicle classes (ground vehicles and a light aircraft). Frames include 3D point clouds, fused range–intensity images, five per-point features, ground-truth 2D bounding boxes, 3D cuboids, and semantic masks. We release a reproducible benchmarking suite with preprocessing, training, and evaluation scripts in an open-source repository. Included are standard 2D/3D detection and segmentation DNNs trained and tested on the dataset. We evaluate detection and classification accuracy across object classes and range bins, along with throughput and latency for real-time use. Results demonstrate robust long-range perception using the LiDAR system and reveal sensitivities to range, target aspect, point density, and SNR. We also include pipelines for vehicle tracking, multi-view registration for 360° modeling, and embedded deployment. Dataset and code will be publicly released to support advances in long-range ATR.
14031-15
Author(s): Jacob Morrey, Utah State Univ. (United States); Nathan Nelson, Jarrett Parry, Utah State University (United States); Mario Harper, Utah State Univ. (United States)
28 April 2026 • 2:10 PM - 2:30 PM EDT | National Harbor 5
Detecting fast-moving objects in dynamic scenes is a fundamental challenge in event-based vision, with applications ranging from terrestrial surveillance to space-based tracking of Resident Space Objects (RSOs). Conventional frame-based imaging and convolutional pipelines struggle to meet the latency and power requirements of resource-constrained platforms. In this work, we present a scalable method for generating labeled training data by superimposing real event streams from stationary-camera foreground recordings and ego-motion background recordings, and demonstrate that a lightweight CNN trained on this data generalizes to real, unmodified event streams. This provides a practical foundation for future deployment in autonomous detection systems, including space-based applications where event cameras are well-suited to the constraints of spacecraft payloads.
14031-17
Author(s): Megan Birch, Elliot Kantor, Nathan Un, Jason Zutty, Joseph Greene, Georgia Tech Research Institute (United States)
28 April 2026 • 2:30 PM - 2:50 PM EDT | National Harbor 5
Event-based cameras (EBCs) encode pixel-level radiance changes with microsecond latency and high dynamic range, enabling robust motion extraction while suppressing static background clutter. However, their sparse, asynchronous output requires algorithms that operate directly in event space. We present Frequency Rate Information for Event-Space (FRIES), a neuromorphic framework that detects periodic structure in event streams to discriminate man-made objects, such as drones, via rotor and vibration signatures. FRIES temporally filters events, forms activity-based regions of interest, and applies localized Fourier analysis to extract dominant frequencies, which are then tracked using a Resonant Time Surface. Indoor and outdoor experiments demonstrate recovery of mechanical chopper and drone rotor rates from ~74–451 Hz, highlighting frequency-domain event processing as a promising front end for selective surveillance.
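A toy stand-in for the localized Fourier step described above: bin the events in a region of interest into a rate signal and take the peak of its spectrum (the bin width, window, and synthetic 120 Hz flicker are illustrative assumptions, not the FRIES implementation):

import numpy as np

def dominant_rate(event_times_s, window_s=0.5, bin_s=1e-3):
    """Estimate the dominant periodic rate (Hz) from event timestamps in one ROI.

    Bins the events into a rate signal, removes the DC component, and returns
    the frequency of the spectral peak.
    """
    t = np.asarray(event_times_s)
    t = t[t < t.min() + window_s] - t.min()
    counts, _ = np.histogram(t, bins=int(window_s / bin_s), range=(0.0, window_s))
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    freqs = np.fft.rfftfreq(len(counts), d=bin_s)
    return freqs[np.argmax(spectrum)]

# Example: synthetic 120 Hz rotor flicker plus uniform background events
rng = np.random.default_rng(0)
base = np.sort(rng.uniform(0, 0.5, 5000))
keep = rng.uniform(size=base.size) < 0.5 * (1 + np.cos(2 * np.pi * 120 * base))
events = np.sort(np.concatenate([base[keep], rng.uniform(0, 0.5, 500)]))
print(dominant_rate(events))  # ~120 Hz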
14031-27
Author(s): David F. Ramirez, Arizona State Univ. (United States), Prime Solutions Group, Inc. (United States); Tim L. Overman, Kristen Jaskie, Joe Marvin, Prime Solutions Group, Inc. (United States); Andreas Spanias, Arizona State Univ. (United States)
28 April 2026 • 2:50 PM - 3:10 PM EDT | National Harbor 5
We present SAR-RAG, an agentic image-retrieval-augmented generation (ImageRAG) framework for automatic target recognition (ATR) in synthetic aperture radar (SAR) imagery. SAR is widely used in defense and security, but visually similar vehicle signatures, speckle, and sensor-dependent effects can complicate interpretation. SAR-RAG combines a multimodal large language model (MLLM) with a vector database of semantic SAR embeddings to enable contextual search over a library of previously observed, labeled exemplars with known target types and associated attributes. During inference, the agent retrieves the most relevant reference images and uses them as an attached “memory bank” to support comparison and reasoning across closely related vehicle categories, improving both categorical identification and quantitative estimation. We evaluate the approach using retrieval quality measures, ATR classification accuracy, and regression of vehicle dimensions, and show consistent improvements over an MLLM-only baseline when retrieval-augmented context is incorporated.
Break
Coffee/Exhibition Break 3:10 PM - 3:50 PM
Remarks
28 April 2026 • 3:50 PM - 4:00 PM EDT | National Harbor 10
Session Chair: Kenny Chen, Lockheed Martin Missiles and Fire Control (United States)
Remarks for Automatic Target Recognition XXXVI.
Session 5: Session of Interest on Artificial Intelligence: Joint Session with Conferences 14029 and 14031
28 April 2026 • 4:00 PM - 5:00 PM EDT | National Harbor 10
Session Chairs: Kenny Chen, Lockheed Martin Missiles and Fire Control (United States), Celso M. De Melo, DEVCOM Army Research Lab. (United States)
14029-12
Author(s): Josh Walters, Matthew Mills, Dylan Stewart, David Riquelmy, Stuart Fowler, Torch Technologies, Inc. (United States)
28 April 2026 • 4:00 PM - 4:20 PM EDT | National Harbor 10
Infrared (IR) detection and tracking technologies are advancing rapidly with the integration of artificial intelligence and machine learning (AI/ML). While these tools enhance capability and automation, they also present challenges in ensuring consistent and transparent system performance across diverse operational conditions. Reliable AI/ML training depends on large, high-quality datasets, yet real IR data are often limited. Synthetic data offers a promising solution, but their influence on model performance and trustworthiness remains uncertain. This work compares AI object detectors trained using the empirical ATR Algorithm Development Image Database (ADID) and a synthetic counterpart developed by Perez, Vanstone, et al. Leveraging explainable AI (XAI) tools, the study evaluates how synthetic data may affect model decisions. Results contribute to a framework for validating AI/ML systems that use synthetic data, promoting greater transparency, reliability, and confidence in future IR sensing applications.
14029-13
Author(s): David A. Vaitekunas, W. R. Davis Engineering, Ltd. (Canada)
28 April 2026 • 4:20 PM - 4:40 PM EDT | National Harbor 10
The naval ship infrared signature model and naval threat countermeasure simulator (ShipIR/NTCS) developed by W. R. Davis Engineering Ltd. has undergone extensive validation since its adoption as a NATO standard and through US Navy accreditation for Live Fire Test and Evaluation of the DDG-79 (Flight IIA) Aegis Guided Missile Destroyer and Contract Design of the DDG-1000 (USS Zumwalt) Guided Missile Destroyer. Among the features that make this tool well suited for input to AI/ML applications is its fully deterministic approach to thermal/EOIR modelling. With the delivery of a turn-key ShipIR thermal and EOIR model of each class of ship, only a few climatic and ship operational inputs are required to construct a full operating EOIR scenario with both transient background and ship signature conditions. This paper will describe the overall framework as well as its scene rendering and scenario model updating capabilities. It is hoped that new use case evaluations might result from the presentation of these details to a wider artificial intelligence and machine learning community.
14031-19
Author(s): Chris Mesterharm, Noah Guilbault, Ritu Chadha, Constantin Serban, Razvan Stefanescu, Peraton Labs (United States)
28 April 2026 • 4:40 PM - 5:00 PM EDT | National Harbor 10
Modern automated target recognition (ATR) systems can use a diverse set of modalities to identify targets. Of particular importance are infrared emissions for night-time detection. In this paper, we describe a physically implemented infrared camouflage system that uses a limited number of temperature-adjustable panels that can be placed on a vehicle and adversarially trained to evade ATR by changing the infrared emitted by the panels. Testing with a YOLO ATR with and without the camouflage activated, we consistently degrade detection from 90% to below 52% confidence. This was achievable even when only activating 14 of the 24 available panels. This improves upon previous work by creating a cost-effective solution that is optimized to avoid ATR instead of attempting to modify the infrared pattern of the entire target.
Opening Remarks
29 April 2026 • 8:00 AM - 8:10 AM EDT | National Harbor 5
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
Opening remarks for Automatic Target Recognition XXXVI.
Session 6: Panel Discussion: Machine Learning for Automatic Target Recognition (ML4ATR)
29 April 2026 • 8:10 AM - 11:10 AM EDT | National Harbor 5
Session Chairs: Asif Mehmood, Global InfoTek, Inc. (United States), Matthew D. Reisman, Bedrock Research LLC (United States)
Join the Automatic Target Recognition XXXVI conference for an innovative panel discussion led by experts from government, industry, and academia. This panel will bring together researchers and practitioners to examine advances and identify gaps, including transformer-based methods, multisensor fusion, synthetic data generation, uncertainty-aware decision support, and end-to-end ATR systems spanning data engineering, model development, deployment, and lifecycle management. The goal is to define a path toward ATR systems that are accurate, resilient, explainable, and aligned with future operational needs.

Moderators:
Asif Mehmood, Global InfoTek, Inc.

Panelists:
Vijayan K. Asari, University of Dayton / Vision Laboratory
Olga Mendoza-Schrock, Air Force Research Laboratory
Hunter Moore, Hardshell
Kristen P. Jaskie, Prime Solutions Group
Break
Lunch/Exhibition Break 11:10 AM - 2:10 PM
Remarks
29 April 2026 • 2:10 PM - 2:20 PM EDT | National Harbor 10
Session Chair: Timothy L. Overman, Prime Solutions Group, Inc. (United States)
Remarks for Automatic Target Recognition XXXVI.
Session 7: Artificial Intelligence II: Joint Session with Conferences 14031 and 14037
29 April 2026 • 2:20 PM - 3:00 PM EDT | National Harbor 10
Session Chair: Kenny Chen, Lockheed Martin Missiles and Fire Control (United States)
14037-40
Author(s): Shotaro Miwa, Jia Qu, Mitsubishi Electric Corp. (Japan)
29 April 2026 • 2:20 PM - 2:40 PM EDT | National Harbor 10
Thermal (infrared) perception is important for robust object detection in low-illumination and poor-visibility conditions, especially in automotive scenes. This work studies parameter-efficient adaptation of the open-vocabulary detector YOLO-World from visible to thermal imagery using two complementary strategies: input-space modality prompting (ModPrompt) and weight-space low-rank adaptation (LoRA). Experiments on the FLIR-IR aligned benchmark with three categories—person, bicycle, and car—show that both methods improve over zero-shot transfer, while their combination achieves the best accuracy (AP50 75.5, AP75 36.9, AP 41.1). LoRA offers the best efficiency among the tested settings, whereas LoRA combined with ModPrompt provides the strongest detection performance at higher computational cost. These results highlight a practical trade-off between efficiency and accuracy for thermal open-vocabulary detection.
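For readers unfamiliar with the weight-space strategy mentioned above, a generic LoRA-style adapter around a frozen linear layer in PyTorch (a minimal sketch; the rank, scaling, and placement inside YOLO-World are assumptions, not the paper's configuration):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update.

    Computes y = W x + (alpha / r) * B(A x); only A and B are trained, so the
    adapter adds r*(d_in + d_out) parameters instead of d_in*d_out.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # keep pretrained weights frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)           # adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Example: wrap one projection layer of a detector head
layer = LoRALinear(nn.Linear(256, 256))
out = layer(torch.randn(4, 256))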
14031-21
Author(s): Engin Uzun, ASELSAN A.S. (Turkey)
29 April 2026 • 2:40 PM - 3:00 PM EDT | National Harbor 10
Detecting small UAVs in thermal infrared imagery is challenging because targets often occupy only a few pixels, have weak contrast, and appear in cluttered backgrounds. We propose a real-time hybrid detector that runs YOLOv11-s and a multiscale Relative Local Contrast Measure (RLCM) branch in parallel, then fuses their candidate detections at the bounding-box level before lightweight temporal confirmation with a probabilistic data association filter (PDAF). The PDAF associates detections across nearby frames, suppresses isolated clutter responses, and confirms persistent targets. On Anti-UAV410, the proposed method improves AP over YOLOv11-s, with the largest gains on tiny targets, increasing APt from 0.47 to 0.70. On Jetson AGX Orin Industrial, the full pipeline runs in 4.53 ms per frame versus 4.42 ms for YOLOv11-s alone, adding only 0.11 ms latency. These results show that candidate-level fusion of semantic confidence and contrast-based saliency, together with lightweight temporal confirmation, improves tiny infrared UAV detection while preserving real-time embedded performance.
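A simplified sketch of bounding-box-level candidate fusion of the kind described above, merging detector and local-contrast candidates by IoU and keeping the higher confidence (the matching rule and threshold are illustrative assumptions; the paper's fusion details and PDAF confirmation are not reproduced):

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_candidates(yolo_dets, contrast_dets, iou_thr=0.3):
    """Merge detector and local-contrast candidates at the bounding-box level.

    Overlapping pairs keep the detector box but take the higher of the two
    confidences; unmatched candidates from either branch are passed through so
    a temporal filter can confirm or reject them over subsequent frames.
    """
    fused, used = [], set()
    for box_d, conf_d in yolo_dets:
        best_j, best_iou = -1, iou_thr
        for j, (box_c, conf_c) in enumerate(contrast_dets):
            if j not in used and iou(box_d, box_c) >= best_iou:
                best_j, best_iou = j, iou(box_d, box_c)
        if best_j >= 0:
            used.add(best_j)
            fused.append((box_d, max(conf_d, contrast_dets[best_j][1])))
        else:
            fused.append((box_d, conf_d))
    fused += [cd for j, cd in enumerate(contrast_dets) if j not in used]
    return fused

# Example: one overlapping pair plus one contrast-only candidate
print(fuse_candidates([((10, 10, 20, 20), 0.6)],
                      [((11, 11, 21, 21), 0.4), ((40, 40, 46, 46), 0.3)]))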
Conference Chairs
  • Lockheed Martin Missiles and Fire Control (United States)
  • PlusAI, Inc. (United States)
  • Prime Solutions Group, Inc. (United States)

Program Committee
  • Hunter College (United States)
  • Wright State Univ. (United States)
  • Lockheed Martin Corp. (United States)
  • The Univ. of Arizona (United States)
  • Global InfoTek, Inc. (United States)
  • Univ. of Central Florida (United States)
  • West Virginia Univ. (United States)
  • Univ. of Houston (United States)
  • California State Univ., Northridge (United States)
  • Systems & Technology Research (United States)
  • Office of Naval Research (United States)
  • HENSOLDT Sensors GmbH (Germany)
  • Air Force Research Lab. (United States)
  • Air Force Research Lab. (United States)
  • Air Force Research Lab. (United States)