Automatic Target Recognition (ATR) is at an inflection point. Foundation‑scale vision transformers, label‑efficient learning, generative diffusion pipelines, cross‑sensor fusion graphs, and ultra‑low‑power edge hardware are rapidly reshaping how targets are detected, tracked, and identified across electro‑optical (EO), infrared (IR), synthetic aperture radar (SAR), LiDAR, hyperspectral, neuromorphic (event camera), acoustic, and emerging sensing modalities. The ATR XXXVI conference invites original research that advances this multidomain, AI‑enabled frontier while meeting real‑world constraints of trust, robustness, and sustainability.
Special themes for 2026
- Foundation and multimodal models: vision transformers, vision‑language models, or large graph/fusion networks trained across EO/IR/RF/HSI/SAR
- Label‑efficient and continual learning: self‑supervised, few‑/zero‑shot, class‑incremental, or federated methods that thrive with limited or evolving data
- Generative AI and synthetic data: diffusion or other generative approaches for data augmentation, domain adaptation, or digital‑twin pipelines
- Edge and neuromorphic ATR: energy‑efficient algorithms, event‑based sensors, and SWaP‑constrained deployments (<1 W NPUs, FPGAs, or neuromorphic MCUs)
- Trustworthy and physics‑aware AI: explainability, uncertainty quantification, adversarial robustness, and models that embed phenomenology
- Sustainable ATR: low‑carbon training, resource‑aware deployment, and lifecycle impact.
Submissions need not be limited to these themes; they highlight timely opportunities for cross‑pollination.
Solicited topics:
Machine learning for ATR
- foundation‑scale and transformer‑based vision models
- vision‑language and contrastive multimodal learning (RGB + SWIR/MWIR/LWIR/HSI/RF)
- few‑shot, self‑supervised, and class‑incremental learning
- generative AI for synthetic data, domain adaptation, and super‑resolution
- adversarial, interpretable, and physics‑aware networks
- quantum, spiking, and neuromorphic approaches.
Multi-sensor and cross‑modal fusion
- graph neural networks, cross‑attention, and joint embeddings for EO/IR/SAR/HSI/LiDAR
- event‑frame fusion for asynchronous sensors
- real‑time fusion on edge and airborne platforms.
EO, IR, and event camera systems
- detection, tracking, recognition, segmentation
- IR/Visible fusion optimized for downstream tasks
- phenomenological and physics‑based modeling
- passive autonomous navigation and long‑range ISR.
Hyperspectral/multispectral systems
- target detection, object‑level change detection, and material identification
- polarization diversity and adaptive waveform design
- high‑throughput processing architectures.
Radar, LiDAR, and sonar systems
- high‑range‑resolution and ultra‑wideband techniques
- joint tracking‑classification and mission‑adaptive waveforms
- Doppler/polarization diversity; cognitive radar concepts.
Algorithms, architectures, and tools
- information‑theoretic approaches, wavelets, and manifold learning
- distributed and centralized sensor decision‑making
- digital‑twin environments for algorithm verification
- edge hardware acceleration (NPUs, FPGAs, neuromorphic cores)
- benchmarks, open datasets, and standardized evaluation.
Mission applications and human factors
- wide‑area search, pattern‑of‑life, and activity inference
- human and machine teaming, analyst‑in‑the‑loop interfaces
- trust, ethics, policy, and operational deployment lessons.
Panel discussion: Machine Learning for Automatic Target Recognition (ML4ATR 2026)
Continuing our successful ML4ATR series, we will convene experts to discuss breakthroughs in label‑efficient learning, multimodal fusion, generative synthetic data, and trustworthy AI. Panelists will present current trends in AI/ML for ATR, followed by an audience Q&A.
Joint session with Infrared Technology and Applications and Synthetic Data for AI/ML conferences
A joint AI/ML session will explore IR‑centric ATR across defense and commercial domains.
Best Paper and Best Student Paper Awards
To be eligible for an award, authors must submit a manuscript, be accepted for an oral presentation, and present the paper on-site (either the author or a co-author). For the student award, students must be enrolled in a university degree-granting program; a student remains eligible if the abstract was accepted during the academic year in which the student graduated. Manuscripts will be judged on technical merit, presentation and speaking skills, and audience interaction. Winners will be announced after the meeting, and winning papers will be included in the proceedings. All winners will receive an award certificate and recognition on SPIE.org.