Proceedings Volume 13475

Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping X



Volume Details

Date Published: 10 June 2025
Contents: 8 Sessions, 23 Papers, 23 Presentations
Conference: SPIE Defense + Commercial Sensing 2025
Volume Number: 13475

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 13475
  • Accurate UAV-based Sensing for Phenotyping and Precision Agriculture
  • UAVs and UGVs for Phenotyping and Precision Agriculture
  • UGV-based Sensing for Phenotyping and Precision Agriculture I
  • UGV-based Sensing for Phenotyping and Precision Agriculture II
  • UGV-based Sensing for Phenotyping and Precision Agriculture III
  • Applications of UAV-based Sensing for Phenotyping and Precision Agriculture
  • Poster Session
Front Matter: Volume 13475
This PDF file contains the front matter associated with SPIE Proceedings Volume 13475, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Accurate UAV-based Sensing for Phenotyping and Precision Agriculture
Radiometric and modified RossThick-LiSparse BRDF correction for low-altitude UAV data at varying solar-sensor geometries for time-series analysis
Suraj A. Yadav, Nuwan K. Wijewardane, Xin Zhang, et al.
In this research, we proposed a robust radiometric calibration and bidirectional reflectance distribution function (BRDF) correction method utilizing a modified RossThick-LiSparse (mRTLS) model for drone-based hyperspectral remote sensing. We employed downwelling solar irradiance for a given solar-sensor geometry and an empirical line model (ELM) for radiometric calibration. The ELM used the sensor's raw photon-count map and the radiance maps of three reference targets to calibrate its gain and offset values. A Python interface to the Second Simulation of the Satellite Signal in the Solar Spectrum (Py6S) was used to simulate downwelling solar irradiance, followed by computation of the target radiance and reflectance maps. The proposed mRTLS model incorporates a new empirical higher-order scattering kernel designed to capture the non-linear interaction between the volumetric and geometric scattering components. This component represents the combined influence of surface and subsurface effects on the reflectance of different land covers. The mRTLS model was evaluated under varying solar illumination conditions (i.e., at 10:30 AM, 1:00 PM, and 4:15 PM) using data from pure vegetation pixels. The mRTLS model was used to correct the reflectance data observed at 10:30 AM and 4:15 PM, which were then compared with data acquired at solar noon (i.e., 1:00 PM), widely regarded as the optimal measurement time. The reflectance maps corrected at the non-optimal times were found to align closely with the map obtained at the optimal time. Additionally, the mRTLS model outperformed the traditional RTLS model, demonstrating significant enhancement in the near-infrared region for vegetation.
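The empirical line calibration described in this abstract reduces, per band, to a linear fit between raw sensor counts and the known radiance of reference targets. A minimal sketch with illustrative placeholder numbers (not values from the paper):

```python
import numpy as np

# Empirical line model (ELM): relate raw digital numbers (DN) to radiance
# via a per-band linear fit, L = gain * DN + offset, using reference
# targets of known radiance. All numbers below are illustrative.
dn = np.array([120.0, 980.0, 2100.0])       # raw photon counts, 3 reference targets
radiance = np.array([5.2, 42.1, 90.3])      # known target radiance (W m^-2 sr^-1 um^-1)

# Least-squares fit of gain and offset for one band
A = np.vstack([dn, np.ones_like(dn)]).T
gain, offset = np.linalg.lstsq(A, radiance, rcond=None)[0]

# Apply the calibration to an arbitrary raw pixel value
calibrated = gain * 1500.0 + offset
```

In practice this fit is repeated independently for every spectral band, and the calibrated radiance map is then converted to reflectance using the Py6S-simulated downwelling irradiance.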
Longwave thermal infrared atmospheric correction using in situ scene elements: the multiple altitude technique revisited for small unmanned aircraft systems (sUAS)
Carl Salvaggio, Danny P. Klosinski, Robert A. Mancini, et al.
Quantitative remote sensing using thermal infrared imaging from small unmanned aircraft systems (sUAS) often requires calibration reference targets to correct for atmospheric attenuation. These targets, such as controlled-temperature water baths, are complex to set up, maintain, and monitor, making their use impractical in many field applications. One possible solution is to neglect atmospheric effects, given that most operations occur below 400 feet in the United States; however, the effectiveness of such an approach depends on the task’s required noise-equivalent delta temperature (NEΔT). This study revisits the multiple-altitude calibration technique first proposed by Schott and Gallagher (1976) and applies it to sUAS operations. We demonstrate the technique’s viability using both modeled atmospheric propagation of sensor-reaching radiance with MODTRAN 6 and experimental data from a microbolometer-based thermal infrared camera flown on a multi-modal imaging payload. The experimental results showed average absolute errors of 3.69 and 2.50 [K] at aircraft altitudes of 250 and 120 [m], respectively, when atmospheric effects were neglected. Correcting the sensor-reaching radiance for atmospheric transmission and upwelling path radiance, derived using the proposed methodology, produced average absolute errors of 1.29 and 1.27 [K], respectively, at these same altitudes.
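The core of a multiple-altitude technique of this kind is that regressing the radiance of the same in-scene elements seen at the operating altitude against their near-ground radiance recovers the path transmission (slope) and upwelling path radiance (intercept) in one step. A synthetic sketch with assumed values, not the paper's data:

```python
import numpy as np

# Multiple-altitude correction concept: image the same scene elements near
# the ground and at the operating altitude; a linear regression between the
# two radiance sets yields transmission and path radiance. Assumed values.
tau_true, lpath_true = 0.92, 0.35                # assumed transmission / path radiance
ground = np.array([7.1, 8.4, 9.9, 11.2, 12.8])   # near-ground radiance of scene elements
at_altitude = tau_true * ground + lpath_true     # radiance seen at operating altitude

A = np.vstack([ground, np.ones_like(ground)]).T
tau_est, lpath_est = np.linalg.lstsq(A, at_altitude, rcond=None)[0]

# Invert the path model to correct sensor-reaching radiance to the surface
corrected = (at_altitude - lpath_est) / tau_est
```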
Simulated real-time processing and machine learning on GNSS-R data for land-water segmentation in wetlands
Luke Redwine, Qian Du, John Ball
This project presents a machine learning approach for distinguishing water and land surfaces in marsh environments using GNSS-Reflectometry (GNSS-R) data. The focus is on developing and simulating a data processing pipeline that utilizes carrier-to-noise density ratio (C/No) measurements from GNSS signals. The models leverage the difference between reflected and direct signals to classify surface types. To simulate real-time processing, data replay modules were implemented, allowing for dynamic analysis of pre-collected GNSS-R datasets. While the system concept involves dual GNSS sensors on an Unmanned Aircraft System (UAS), this paper concentrates on the machine learning model development and its capability for real-time surface classification. The results highlight the potential of this processing approach for applications in flood monitoring and wetland ecosystem management, demonstrating the feasibility of GNSS-R data for dynamic water-land boundary mapping.
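The classification signal described here is the difference between reflected and direct C/N0: smooth water reflects L-band signals strongly, so the reflected channel sits closer to the direct one. A toy threshold classifier over replayed samples, with an illustrative threshold that is not from the paper:

```python
# Sketch of land/water classification from GNSS-R carrier-to-noise density
# ratios (C/N0). The reflected-minus-direct difference is larger (less
# negative) over smooth water. Threshold value is illustrative only.
def classify_surface(cn0_reflected_db, cn0_direct_db, threshold_db=-6.0):
    """Return 'water' when the reflected signal is within threshold_db of
    the direct signal, else 'land'."""
    delta = cn0_reflected_db - cn0_direct_db
    return "water" if delta >= threshold_db else "land"

# Replay a pre-collected sequence of (reflected, direct) samples, mimicking
# the data-replay modules used for simulated real-time processing
samples = [(41.0, 45.0), (33.5, 44.8), (40.2, 44.9)]
labels = [classify_surface(r, d) for r, d in samples]
```

The paper's ML models replace this fixed threshold with a learned decision boundary, but the input feature is the same reflected-versus-direct difference.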
Machine-learning techniques for the detection of powdery mildew in vineyards using aerial and ground imageries
Michael D. Acosta, Mahakbhai S. Patel, Amar Raheja, et al.
Artificial intelligence (AI) and machine learning (ML) are transforming agriculture by enabling automated crop identification and disease detection. This study presents a lightweight ML model for detecting powdery mildew (PM) in grapevines, with a primary focus on UAV-captured imagery for scalable vineyard monitoring. PM is a fungal disease that significantly impacts grape quality, especially in susceptible varieties. The proposed system utilizes a custom dataset and YOLO object detection models (v8n and v10n) to classify grapevine regions as healthy or diseased. Models were trained and evaluated primarily on UAV images, alongside handheld camera data for comparison. The UAV data-trained YOLOv10n model achieved 94% precision, 64% recall, and a 76% F1 score, demonstrating robust performance for large-scale deployment. While the handheld data-trained YOLOv10n model achieved higher recall (76%), its limited scalability reinforces UAV imagery as the preferred solution. This approach offers a cost-effective alternative to hyperspectral imaging and is optimized for integration into autonomous vineyard management systems. Future work will focus on deploying the developed models onboard the UAV using a high-performance processor for real-time detection of powdery mildew for precision farming.
Automated tool for rapid data analysis of UAV-based remotely sensed data for field-based breeding programs
A. Dua, A. Sharda, W. T. Schapaugh, et al.
Plant breeders have adopted proximal remote sensing platforms paired with transdisciplinary approaches to phenotype and rapidly monitor agricultural crops. A vast amount of spatial-temporal data is now being collected on small research plots through advanced remote sensing platforms like Unmanned Aerial Vehicles (UAVs) and other precision agriculture technologies. However, data management tasks such as processing, analysis, and converting vast sensor data into informative data remain challenging for cultivar selection and management decisions, because they require substantial user intervention, time, and cost to evaluate phenotype records over time. The processing of remotely sensed phenotype data also demands advanced statistical techniques, such as machine learning, and programming proficiency for any downstream analysis. To address these needs, a user-friendly and publicly accessible tool is needed that can concurrently evaluate data, generate informative outputs, and perform advanced analysis. Therefore, this study was conducted to develop an automated tool to rapidly analyze and interpret voluminous spatial-temporal data, especially for phenotyping and precision agriculture applications. The proposed method used a high-accuracy Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS) from a planter, georeferenced multispectral UAV imagery, and other agronomic data. The tool generated plot boundaries, converted planter-logged RTK-GNSS points into polygons representing each planted row, and extracted spectral signatures from each plot within high-resolution imagery with minimal human intervention and interactive visualization. It then integrated these data with agronomic information and performed advanced statistical analyses with rapid visualization and user interaction, producing instant maps and graphs.
The resulting tool automatically generated maps, multi-polygon shapefiles, and CSV files of research plot boundaries with extracted features for use in external software and further downstream analysis. Additionally, exploratory, descriptive, and predictive analyses and spatial-temporal visualization were performed on phenotype data and remotely sensed data. In the predictive analysis, machine learning (ML) algorithms such as Support Vector Regression (SVR), Random Forest (RF), XGBoost, Adaptive Boosting (AdaBoost), and K-Nearest Neighbors (KNN) were applied to capture nonlinear relationships between phenomic predictors, spectral signatures, and yield. Texture features (GLCM), maturity, and canopy coverage were also included as predictor variables for yield. The models were evaluated using performance metrics such as R2, RMSE, MAE, and NRMSE. The results highlighted R6 and R7 as critical growth stages, with SVR emerging as the best-performing model, achieving an R2 of 0.76. Feature sets combining vegetation indices (VI) and GLCM features improved prediction accuracy, emphasizing their complementary roles in capturing spectral and structural variations. With this study, a user-friendly, open-source, efficient, adaptable, and reproducible automated solution was developed. The tool minimized time and user involvement and enabled rapid predictive analysis of phenotype data with visualizations and a user-friendly graphical interface.
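Of the regression algorithms listed above, KNN is the simplest to sketch: predict a plot's yield as the mean yield of its nearest neighbours in feature space. A minimal numpy version with synthetic placeholder features (a vegetation index and a GLCM texture value), not the study's data:

```python
import numpy as np

# Tiny k-nearest-neighbours regression in the spirit of the tool's
# predictive module: predict plot yield from spectral/texture features.
def knn_predict(X_train, y_train, x, k=3):
    dist = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to each plot
    nearest = np.argsort(dist)[:k]               # indices of k closest plots
    return y_train[nearest].mean()               # average their yields

X = np.array([[0.62, 0.10], [0.70, 0.12], [0.55, 0.09],
              [0.81, 0.15], [0.49, 0.07]])       # [VI, GLCM feature] per plot
y = np.array([3.1, 3.6, 2.8, 4.2, 2.5])          # yield (t/ha), synthetic
pred = knn_predict(X, y, np.array([0.68, 0.11]), k=3)
```

SVR, RF, and boosting models follow the same fit-then-predict pattern over the identical feature matrix, which is why the tool can swap estimators behind one interface.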
UAVs and UGVs for Phenotyping and Precision Agriculture
Improving semantic segmentation through task adaptation for UAV hyperspectral agricultural imagery
Mazharul Hossain, Aaron Robinson, Lan Wang, et al.
Accurate crop mapping—identifying both the location and types of crops—is crucial for effective agricultural planning and informed decision-making. Advances in remote sensing, notably hyperspectral imagery from unmanned aerial vehicles (UAVs), greatly enhance the efficiency and accuracy of crop mapping, reducing the reliance on traditional, labor-intensive field surveys. However, applying deep classifiers directly to hyperspectral data can lead to overfitting. Conversely, deep semantic segmentation models may struggle due to limited annotated hyperspectral imagery. To address this problem, we propose enhancing a U-Net-style model—originally trained on RGB imagery—by incorporating task adaptation, a custom loss function, and a spectral attention mechanism to better optimize it for hyperspectral data and improve crop mapping performance. Our proposed segmentation network achieved 76.6% accuracy and a 74.9% Dice score on a UAV-acquired hyperspectral agricultural dataset, which is competitive and well-rounded compared to other state-of-the-art methods while offering significantly lower computational complexity.
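The spectral attention idea mentioned here can be illustrated without a deep-learning framework: score each band, normalize the scores with a softmax, and reweight the hyperspectral cube so informative bands dominate. A conceptual numpy sketch with assumed scores, not the paper's learned layer:

```python
import numpy as np

# Conceptual spectral attention: softmax-normalized per-band weights
# rescale a hyperspectral cube before segmentation.
def spectral_attention(cube, band_scores):
    """cube: (H, W, B) hyperspectral patch; band_scores: (B,) raw scores."""
    weights = np.exp(band_scores - band_scores.max())
    weights /= weights.sum()                 # softmax over the band axis
    return cube * weights[None, None, :]     # broadcast weights across pixels

rng = np.random.default_rng(1)
cube = rng.random((4, 4, 8))                 # toy 8-band patch
scores = np.linspace(-1.0, 1.0, 8)           # assumed band scores
attended = spectral_attention(cube, scores)
```

In the actual network the band scores would be produced by a small learnable sub-module and trained end to end with the segmentation loss.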
UAV-based sensing systems for agricultural optimization: focus on phenotyping and crop monitoring
Khaled Obaideen, Thomas French, Waleed Hilal, et al.
This paper presents a bibliometric analysis of UAV-based sensing for agricultural optimization, with a focus on phenotyping and crop monitoring from 2012 to 2024. Drawing on 2675 publications from 838 sources and exhibiting an annual growth rate of 41.5%, the field demonstrates rapidly expanding scholarly attention and technological innovation. Using VOSviewer and Biblioshiny, the study explores key concepts such as precision agriculture, remote sensing, advanced imaging (multispectral and hyperspectral), and machine learning algorithms. Results reveal four major thematic clusters: algorithmic and data-processing methods for phenotyping, application-oriented agriculture and sustainability concerns, UAV technology infrastructure with AI-based analytics, and spectral imaging systems for vegetation assessment. Cross-cluster linkages underscore the synergy between hardware developments, data-driven analytics, and agronomic applications. High citation rates suggest that this body of research has significant influence, shaping new insights into disease detection, yield prediction, and resource management. The findings highlight major trends, including the rise of deep learning, sensor fusion, and robotics, as well as ongoing challenges related to data standardization, validation protocols, and economic accessibility. By synthesizing these patterns, the paper offers a comprehensive overview of how UAV-based sensing is transforming large-scale phenotyping and crop monitoring, while pointing to strategic directions for future research and technological advancement.
Chimaera: a tethered UAV enhancement to proximal sensing carts and UGVs
Joseph B. Perry, Thomas P. Watson, Eddie Jacobs
Small Uncrewed Aerial Vehicles (sUAVs) are a commonly used tool for agricultural remote sensing due to their capability of carrying a variety of sensors, including hyperspectral and LIDAR. Tethered Uncrewed Aerial Vehicles (tUAVs) offer theoretically infinite endurance while maintaining most of the flexibility of a UAV. Proximal sensing carts (PSCs) and Unmanned Ground Vehicles (UGVs) offer large payloads and increased endurance compared to UAVs. Combining UAVs and PSCs provides a flexible sensor mounting platform that is less dependent on the terrain, is flexible in height, and offers dynamic flexibility in positioning the sensors, all while increasing endurance. Presented here is a bolt-on solution for a marsupial tethered UAV designed to operate either in tandem with or independently from its mother vehicle. It is designed to work either as part of a Manned-Unmanned Team (MUM-T) with a human-powered cart or autonomously with a UGV. The tradeoffs of the system, such as safety as part of a MUM-T, increased weight of the mother vehicle, cost, and complexity, are analyzed.
Irradiance source comparison for FLD-based solar-induced fluorescence (SIF) retrieval using hyperspectral imagery
Angelin R. Favorito, Thomas P. Watson, Eddie L. Jacobs
Solar-Induced Fluorescence (SIF) is a type of fluorescence produced by plants as part of the photosynthetic process. Absorbed photosynthetically active radiation (PAR) not consumed during photosynthesis is re-released as fluorescence at longer wavelengths, with a peak at around 740 nm. SIF is being pursued as a useful indicator of photosynthetic activity and plant physiology but is difficult to observe with traditional remote sensing methods due to high levels of background solar radiation. Existing methods of SIF retrieval typically involve two channels of data acquisition, such as separate spectrometers or a combination of a spectrometer and a hyperspectral camera, which measure radiance and irradiance separately. The objective of this study is to review a method for SIF retrieval that uses a single push-broom hyperspectral camera to gather both upwelling and downwelling data. Irradiance is calculated from a reference panel present within the region of interest. The resulting SIF output is compared to SIF from the same data set paired with spectrometer-retrieved irradiance data. Calculations are conducted using the improved Fraunhofer Line-Depth (iFLD) method at the 760 nm telluric O2 absorption band of the solar spectrum. Two sets of data were reviewed: data from a ground-level setup providing close-range imagery, and aerial crop data retrieved from a UAV-mounted hyperspectral camera.
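The plain FLD method underlying the iFLD retrieval is compact enough to show directly: assuming reflectance and fluorescence are constant across the absorption feature, measurements inside and outside the O2-A band give two equations in two unknowns. A sketch with synthetic numbers (the iFLD variant used in the paper adds correction factors on top of this):

```python
# Standard Fraunhofer Line-Depth (FLD) retrieval at the ~760 nm O2-A band.
# E = irradiance, L = radiance, inside ("in") and outside ("out") the
# absorption feature; all inputs in the same radiometric units.
def fld_sif(e_in, e_out, l_in, l_out):
    """Solve L = r*E + F at two wavelengths for the fluorescence term F."""
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Synthetic example: reflectance 0.45, true SIF = 1.2 (arbitrary units)
e_in, e_out = 0.3, 1.0
true_r, true_f = 0.45, 1.2
l_in = true_r * e_in + true_f     # radiance inside the band
l_out = true_r * e_out + true_f   # radiance outside the band
sif = fld_sif(e_in, e_out, l_in, l_out)
```

The single-camera approach in the paper obtains E from the in-scene reference panel and L from vegetation pixels, both within the same push-broom frame.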
A survey on 3D reconstruction techniques in plant phenotyping: from classical methods to neural radiance fields (NeRF), 3D Gaussian splatting (3DGS), and beyond
Jiajia Li, Xinda Qi, Seyed H. Nabaei, et al.
Plant phenotyping is essential for advancing precision agriculture and crop improvement, and 3D reconstruction technologies offer powerful tools for capturing detailed plant morphology. This paper investigates the latest advancements in 3D reconstruction for plant phenotyping, focusing on traditional point cloud methods, Neural Radiance Fields (NeRF), and 3D Gaussian Splatting (3DGS). While point clouds are widely used for their simplicity, they face challenges with data density and noise. NeRF provides photorealistic reconstructions from sparse views but is computationally intensive. The novel 3DGS approach offers efficient, scalable representations using Gaussian primitives. This paper evaluates these methods, highlighting their strengths, limitations, and potential in automated phenotyping.
UGV-based Sensing for Phenotyping and Precision Agriculture I
Automated synthetic maize field for machine learning model development
Michael A. Mardikes, John T. Evans, Nathan C. Sprague
Maize, the most widely produced crop in the United States, has been optimized over centuries for increasing yield at harvest. Researchers today are exploring a potential relationship between planted-seed orientation and leaf orientation. Uniform leaf orientation in a maize field could maximize the field's potential sunlight capture. Previous work has developed tools for leaf orientation estimation in a limited capacity. A high-throughput, passive observation tool could further research on the potential impact of seed orientation. As a maize plant's ear tends to follow the leaf orientation, there is potential to scale out estimation capabilities by tracking maize ears. A deployable camera system with a machine learning (ML) model for object detection and ear angle estimation was proposed. Generating training data for maize is limited by the plant's life cycle and the time required to repeatably hand-measure plants that continuously change. To mitigate these limitations, a synthetic, scalable maize field was developed, procedurally constructed from photogrammetry-scanned maize models with injected noise and a dynamic background environment. The synthetic maize field enabled year-round research and development of maize in a digital twin environment to produce synthetic data. It was used to identify the capabilities of passively observing and estimating ear orientations with a camera placed onboard an unmanned ground vehicle. The camera's placement was determined in simulation to maximize passive observability of the ears. An automated synthetic data generation pipeline was developed to rapidly generate and label synthetic data, with corresponding ground-truth data for every maize plant, for training ML models. The tool was able to generate the required data for developing and evaluating various ML model types, which identified the best-performing model for the real-world application.
Corn stalk diameter estimation using deep learning
Nathan C. Sprague, John T. Evans, Michael A. Mardikes
Accurately evaluating key characteristics of corn plants, such as stalk diameter, is essential for maintaining crop health and optimizing yield. Manual measurement of stalk diameter is labor-intensive and prone to human error. To overcome these limitations, this paper introduces a computer vision-based system that detects and estimates corn stalk diameters using images captured by a stereo camera. The system is designed to perform measurements throughout the corn lifecycle, including the preharvest phase. To improve accuracy during later growth stages when stalks are partially occluded by leaves, a novel segmentation method is used to isolate stalks and their edges for precise diameter estimation. In mid-summer testing, the model achieved a mean absolute error (MAE) of 1.5 mm (r2 = 0.901) relative to manual measurements. In fall, just before harvest, the MAE increased to 3.94 mm (r2 = 0.615), primarily due to nonuniform stalk shapes and challenges in obtaining consistent ground-truth measurements.
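Once segmentation isolates the stalk edges, converting the pixel width between edges into a metric diameter is a pinhole-camera calculation using the stereo depth. A back-of-envelope sketch with illustrative values, not the paper's camera calibration:

```python
# Pinhole-camera conversion behind stereo stalk-diameter estimation:
# metric width = pixel width * depth / focal length (in pixels).
def stalk_diameter_mm(edge_px, depth_mm, focal_px):
    """Diameter from the pixel distance between detected stalk edges,
    the stereo depth to the stalk, and focal length in pixel units."""
    return edge_px * depth_mm / focal_px

# Illustrative: 18 px wide stalk, 0.8 m away, 600 px focal length -> 24 mm
d = stalk_diameter_mm(edge_px=18.0, depth_mm=800.0, focal_px=600.0)
```

This also shows why occlusion matters: if leaves clip the detected edge span, `edge_px` shrinks and the diameter is underestimated, motivating the segmentation step described above.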
Testing DRIP-GPS: a simulation study on real-time precision irrigation with GNSS-R
Sriman Bidhan Baray, Md Mehedi Farhad, George Vellidis, et al.
Irrigation is a core component of crop production, and variable rate irrigation (VRI) can conserve up to 15% of water, improve soil health, and boost yield. Current pivot-based dynamic VRI systems utilize a network of in-field soil moisture (SM) sensors to create irrigation prescription maps, which allow differential water application across the field in different irrigation management zones (IMZs). These maps are generated every one to two days, making the SM prescription map static in temporal resolution, and the in-situ SM sensors, offering only point data, limit the spatial resolution. Depending on the map generation time and irrigation period, these limitations can lead to areas being over-watered or under-watered across the field. On the other hand, Global Navigation Satellite System (GNSS) Reflectometry (GNSS-R) has shown great promise in measuring SM using remote sensing. It uses reflected L-band GNSS signals, which vary depending on the moisture level in the top 5 cm of soil. However, the spatial resolution of spaceborne GNSS-R observations is very coarse (on the order of kilometers), limiting their application in precision agriculture (PA). To enable an efficient, high-spatiotemporal-resolution dynamic VRI system, we present a simulator to evaluate the feasibility of the Dynamic Real-time Irrigation Planning using GNSS-R Pivot System (DRIP-GPS). The DRIP-GPS system deploys GNSS-R receivers on the pivot arms to estimate instantaneous surface SM. We simulate the GNSS-R sensors on pivot arms for real-time surface SM estimation, calculating sensor positions, orientations, and the number of sensors needed for optimal coverage. The simulator evaluates the system’s spatial and temporal resolution, factoring in pivot speed, orientation, and satellite positions to assess its potential for high-precision irrigation in real-world conditions. Preliminary results from a field deployment are also presented, showing promising results.
PhenAI-bot: high-throughput 3D crop phenotyping of soybean (Glycine max) in greenhouse settings
Ivan Perez Olivera, Swarnabha Roy, Pappu K. Yadav, et al.
This study introduces PhenAI-bot, a newly developed robotic platform for high-throughput 3D phenotyping of soybean crops in greenhouse settings. The system utilizes a consumer-grade RGB-D camera (Intel RealSense D456) mounted on a mobile robot, addressing limitations of manual measurements that are time-consuming, error-prone, and often require destructive sampling. PhenAI-bot captures color and depth images of soybean canopies, aligning adjacent frames to reconstruct top-view perspective point clouds for comprehensive plant structure representation. Key phenotypic traits such as plant height, leaf area index, and canopy volume are extracted from these 3D point clouds. Results show that our approach is comparable to manual techniques but offers significant advantages in efficiency and precision. By integrating affordable RGB-D technology with robotics, PhenAI-bot presents a cost-effective, scalable solution for plant phenotyping in controlled environments. This research contributes to developing autonomous sensing systems for agriculture, meeting the growing demand for efficient crop monitoring tools.
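Trait extraction of the kind described here boils down to simple geometry on the reconstructed point cloud: a robust height percentile for plant height and voxel occupancy for canopy volume. A simplified numpy sketch on a synthetic cloud (assumed conventions, not PhenAI-bot's actual pipeline):

```python
import numpy as np

# Minimal trait extraction from a top-view point cloud, assuming z is
# height above the pot/ground plane in metres. Synthetic canopy points.
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, 0], [0.4, 0.4, 0.6], size=(5000, 3))

# Plant height: an upper percentile of z is more robust than the raw max
height = np.percentile(cloud[:, 2], 99)

# Canopy volume: crude voxel-occupancy estimate with 2 cm voxels
voxel = 0.02
occupied = np.unique((cloud // voxel).astype(int), axis=0)
canopy_volume = occupied.shape[0] * voxel ** 3   # m^3 of occupied voxels
```

Leaf area index estimation requires additional steps (surface reconstruction or projected-area ratios), which is where the aligned multi-frame reconstruction pays off.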
UGV-based Sensing for Phenotyping and Precision Agriculture II
Enhanced AI-driven sensing and analytics platform for precision orchard management
Yiannis Ampatzidis, Shiyu Liu, Hengyue Guan, et al.
This study introduces an enhanced AI-driven sensing and analytics platform, Agrosense, for precision orchard management, addressing tasks like tree crop counting, canopy density classification, tree height estimation, and fruit counting. The upgraded system features a robust hardware architecture with RGB-D cameras for efficient tree imaging and optimized data storage. GPS and IMU data are processed by microcontrollers, while YOLOv8 enables precise detection of tree trunks, canopies, and gaps, achieving 98% tree counting accuracy and 99% canopy density classification accuracy. A novel fruit counting algorithm combines image processing with deep learning for accurate quantification. An intuitive platform stores tree images and metrics, aiding growers in monitoring productivity. Integrated with aerial sensing systems processed through AI-powered software, Agroview, the platform bridges ground and aerial data for actionable insights, including prescription maps. Extensively validated in citrus orchards, it is adaptable to other tree crops, providing a versatile solution for smart agriculture.
Investigating feature types for automated multiclass citrus peel disease detection
Quentin Frederick, Thomas Burks, Adam Watson, et al.
Huanglongbing (HLB; citrus greening) is an invasive disease endemic to Florida citrus groves, causing yield loss, smaller fruit, blemishes, premature fruit drop, and/or eventual tree death. Characterized by blotchy mottling of leaves and reduced fruit quality, HLB is caused by Candidatus Liberibacter asiaticus and affects all citrus cultivars. Currently, HLB can only be prevented by excluding the insect vector with a mesh screen. Greasy spot is a fungal infection affecting leaves and fruit, also causing premature fruit drop and lowering fruit quality. Culling fruit affected by these diseases in the field reduces transportation and processing costs.

This study investigated the potential of machine vision for classifying fruit bearing symptoms of these diseases. Specifically, it tested the need to assess the size of a fruit to phenotype it as HLB-symptomatic. Grapefruits exhibiting symptoms of HLB, greasy spot, and mechanical damage, as well as a control class, were collected and imaged with RGB cameras. By employing different preprocessing methods before model training, it was determined that the size and shape of the fruit were significant contributors to features for distinguishing HLB fruit from asymptomatic fruit, but that color and texture information permit better discrimination of greasy spot and wind-scarred fruit from asymptomatic fruit and from each other. This finding can help inform the development of at- or postharvest citrus fruit inspection systems, permitting more effective management of HLB.
Initial prototyping of a low-cost unoccupied ground vehicle platform for crop problem risk and severity mapping in agricultural fields
Chijioke Leonard Nkwocha, Abhilash Kumar Chandel
Adoption of precision agriculture has become essential for addressing the growing global food demand while minimizing environmental impacts. Unoccupied Ground Vehicles (UGVs) offer a promising solution for automating crop monitoring for problem detection, but their high cost limits accessibility for small-scale farmers. This study aims to develop a low-cost UGV platform for real-time plant problem risk and severity mapping, focusing on affordability, accessibility, and durability. The UGV prototype was constructed using an off-the-shelf remote-controlled toy car, navigation sensors, an ArduPilot Pixhawk flight controller, and an M8N GPS module. Performance evaluations of operating speeds, vibration displacements, and battery consumption were conducted. The UGV achieved a maximum speed of 3.3 m/s, with speed increments plateauing beyond 60% throttle input. Vibration tests revealed significant z-axis displacements (up to 570 μm) on undulating terrains, which has the potential to impact data quality. Battery consumption increased with speed, highlighting the need for separate power sources for high-power components. These results demonstrate the UGV's operational limits and provide insights for future optimization. The significance of this study lies in its potential to offer small-scale farmers an affordable tool for early crop problem detection, enabling timely interventions and reducing the need for excessive agrochemical protectant use. Future work will focus on integrating an RGB camera, a thermal camera, miniature weather sensors, an edge-computing device, and RTK-GPS for enhanced data acquisition, real-time data processing, and navigation in agricultural fields. Extensive field testing will also be conducted to evaluate the UGV's performance under real-field conditions, with a focus on quantifying the effect of UGV vibration on the quality of data captured and on power management to ensure reliable operation in diverse agricultural environments.
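For sinusoidal vibration, accelerometer readings convert to displacement amplitude analytically, which is one way micrometre-scale z-axis displacements like those reported above are derived. A sketch with assumed acceleration amplitude and frequency, chosen only to illustrate the scale, not measurements from the study:

```python
import numpy as np

# For a sinusoid x(t) = X*sin(2*pi*f*t), acceleration amplitude is
# A = X*(2*pi*f)^2, so displacement amplitude is X = A / (2*pi*f)^2.
def displacement_um(accel_ms2, freq_hz):
    """Displacement amplitude in micrometres from acceleration amplitude
    (m/s^2) and vibration frequency (Hz), assuming sinusoidal motion."""
    return accel_ms2 / (2 * np.pi * freq_hz) ** 2 * 1e6

# Assumed values: 2 m/s^2 at 10 Hz gives roughly 500 um
d = displacement_um(accel_ms2=2.0, freq_hz=10.0)
```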
Developing a 3D vision system with AI to detect harvestable spears towards selective asparagus harvesting
This conference presentation was prepared for SPIE Defense + Commercial Sensing 2025.
UGV-based Sensing for Phenotyping and Precision Agriculture III
Harvest-Bot: precision harvesting of pepper (Capsicum annuum L.) varieties in a greenhouse
Swarnabha Roy, Amee Parmar, Rishik Aggarwal, et al.
Efficient and precise harvesting of peppers requires advanced identification and handling techniques, which can be optimized using robotics and imaging technologies. This study presents the development of Harvest-Bot, an autonomous pepper-harvesting system utilizing the Yahboom ROSMASTER X3 robot. The robot is equipped with a robotic arm and a camera mounted on the end effector for enhanced orientation and precision in pepper plucking. Additionally, an Orbbec Astra Pro RGB-D camera, integrated with YOLOv8, is used for detecting pepper varieties and counting the number of peppers per plant. To ensure accurate manipulation of the robotic arm, Peter Corke’s Robotics Toolbox in Python was employed for forward and inverse kinematics calculations based on Denavit-Hartenberg (DH) parameters. This allows precise control of the robotic arm, facilitating effective harvesting by targeting the exact location of the peppers in 3D space. The dual-camera setup enhances both the orientation and variety detection capabilities, allowing the robot to make informed decisions during the harvesting process. Preliminary results show that the YOLOv8 model achieved 70% accuracy in detecting pepper varieties and their orientations, while the Astra Pro camera provided reliable 3D positional data for precise robotic movements. The integration of these technologies significantly improved the efficiency and precision of the harvesting process, reducing labor costs and increasing productivity in greenhouse environments. Future work will focus on refining the detection algorithms for improved accuracy, as well as expanding the system’s applicability to other crops, positioning Harvest-Bot as a versatile tool for agricultural automation.
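The DH-based forward kinematics mentioned here chains one homogeneous transform per joint, which is what the Robotics Toolbox computes internally. A self-contained numpy sketch using a toy two-link planar arm (illustrative link lengths, not the ROSMASTER X3 arm's real parameters):

```python
import numpy as np

# Standard Denavit-Hartenberg transform for one joint, and forward
# kinematics as the product of the per-joint transforms.
def dh_matrix(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    T = np.eye(4)
    for row in dh_rows:           # rows of (theta, d, a, alpha)
        T = T @ dh_matrix(*row)
    return T

# Toy 2-link planar arm: links 0.3 m and 0.2 m, both joints at 45 degrees
q = np.deg2rad(45.0)
T = forward_kinematics([(q, 0.0, 0.3, 0.0), (q, 0.0, 0.2, 0.0)])
end_effector_xy = T[:2, 3]        # matches 0.3*cos(45)+0.2*cos(90), etc.
```

Inverse kinematics then searches for the joint angles whose forward solution reaches the pepper's 3D position reported by the RGB-D camera.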
PyBullet for kinematic and dynamic simulations of an agricultural robot for corn stem disease detection
Inayat Rasool, Iván P. Olivera, Amee Parmar, et al.
Increasing demand for precision agriculture has encouraged the development of advanced robotic platforms capable of automating crop health monitoring. Traditional methods of detecting diseases, such as hollowed corn stems caused by infections, are manual, labor-intensive, and error-prone. To address this gap, we present a simulation of a four-wheeled agricultural robot, based on the Farm-NG Amiga platform, equipped with a 6DOF robotic manipulator designed to grasp corn stems and measure the applied force to determine if the stems are hollowed due to disease. By analyzing force feedback, the robot differentiates between healthy and diseased stems, offering a non-destructive method for early crop health assessment. Through kinematic and dynamic simulations in the PyBullet environment, we ensure precise positioning and effective torque control during grasping tasks. Future work will utilize the Orbit framework, powered by NVIDIA Isaac Sim, to enhance simulations with real-time haptic feedback and actuator dynamics while evaluating path planning and obstacle avoidance in 3D cornfield environments. After successful simulations, we plan to build the system and conduct field evaluations.
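The hollow-stem test described above amounts to thresholding a force-feedback response: a hollowed stem deforms more per unit of gripping force than a solid one. A toy sketch of that idea, with an entirely hypothetical stiffness threshold and simulated sensor readings (not the paper's calibrated values):

```python
import numpy as np

# Hypothetical stiffness threshold (N/mm) separating hollow from solid stems.
STIFFNESS_THRESHOLD = 2.0

def stem_is_hollow(forces_n, displacements_mm, threshold=STIFFNESS_THRESHOLD):
    """Fit a linear stiffness (slope of force vs. gripper displacement);
    a hollowed stem yields more per unit force, i.e. has lower stiffness."""
    slope = np.polyfit(displacements_mm, forces_n, 1)[0]
    return slope < threshold

# Simulated readings: a solid stem resists strongly, a hollow one yields.
solid = stem_is_hollow(np.array([0.0, 3.1, 6.0, 9.2]),
                       np.array([0.0, 1.0, 2.0, 3.0]))
hollow = stem_is_hollow(np.array([0.0, 0.6, 1.1, 1.8]),
                        np.array([0.0, 1.0, 2.0, 3.0]))
print(solid, hollow)
```

In the PyBullet simulation this slope would come from the simulated gripper's contact forces rather than hand-written arrays, but the decision rule is the same.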
AI driven computer vision for detection and pose estimation of chile peppers (Capsicum annuum L.) for a robotic harvester
Amee Parmar, Inayat Rasool, Pappu Yadav
Chili pepper (Capsicum annuum L.) is a widely consumed vegetable crop worldwide. However, its production in the U.S. has been decreasing due to reduced acreage, lower yields, and labor shortages. To enhance U.S. production of chili peppers, an AI-equipped robotic harvester (Rosmaster X3 Plus) can play a critical role. The objective of this study was to use a YOLOv8 AI model to detect three varieties of chili peppers along with their orientations, which the Rosmaster X3 Plus can use for localization, manipulation, and harvesting. The findings suggest that the trained YOLOv8 model can be deployed on the Rosmaster X3 Plus for detection, localization, manipulation, and harvesting of the three chili pepper varieties in greenhouse settings. Future work will involve measuring the length and width of each pepper on the plant, along with greenhouse trials of the Rosmaster X3 Plus to assess its dexterity, detachment, and harvesting efficiencies.
PhenAI-Bot: precision 3D crop phenotyping of pepper (Capsicum annuum L.) varieties in a greenhouse
Swarnabha Roy, Amee Parmar, Rishik Aggarwal, et al.
Manual crop phenotyping is labor-intensive and time-consuming. The lack of affordable, open-access 3D phenotyping tools has hindered the study of dynamic 3D growth in crops across all growth stages. Plant height and leaf growth rates at different stages are critical indicators of plant health and overall yield. This study introduces an autonomous robot, PhenAI-Bot, designed to assess five phenotypic traits: plant height, canopy major and minor diameters, canopy area, and leaf count of four pepper varieties (Black Hungarian, Hungarian Hot Wax, Poblano, and Thai Hot; Capsicum annuum L.) grown in a greenhouse. The PhenAI-Bot uses an Intel RealSense D435i RGB-D camera mounted on an aluminum extrusion to capture top-down RGB, depth, and 3D point-cloud images of plant canopies. A secondary RGB camera scans QR codes on plant pots to autonomously retrieve variety and plant data. Six plants per variety, totaling 24 plants, were analyzed in this study. The collected data were compared with ground truth measurements, with the squared Pearson correlation coefficient (R2) ranging from 0.015 for Hungarian Hot Wax to 0.91 for Poblano, and mean squared error (MSE) ranging from 0.004 for Hungarian Hot Wax to 0.097 for Thai Hot. The results show that PhenAI-Bot can accurately measure phenotypic traits at a moving speed of up to 0.2 m/s under standard daytime conditions. Future work will focus on incorporating AI algorithms to improve measurement accuracy and on expanding the robot's application to other crops, including wheat, soybean, corn, and sunflower.
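Traits such as plant height and canopy area are commonly derived from a top-down point cloud like the one the D435i produces. A minimal NumPy sketch of that derivation, assuming a known flat ground plane and hypothetical grid parameters (this is an illustration of the general technique, not the PhenAI-Bot pipeline):

```python
import numpy as np

def canopy_traits(points, ground_z=0.0, cell=0.01):
    """Estimate plant height and canopy area from a top-down point cloud.
    points: (N, 3) array of x, y, z in metres; cell: occupancy-grid size."""
    z = points[:, 2] - ground_z
    height = np.percentile(z, 99)  # robust stand-in for maximum height
    # Canopy area: count occupied cells in an x-y occupancy grid.
    xy = np.floor(points[:, :2] / cell).astype(int)
    occupied = len({tuple(c) for c in xy})
    area = occupied * cell ** 2
    return height, area

# Hypothetical canopy: ~10 cm tall, covering a 0.1 m x 0.1 m patch.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 0.1, 5000),
                       rng.uniform(0, 0.1, 5000),
                       rng.uniform(0, 0.1, 5000)])
h, a = canopy_traits(pts)
print(h, a)
```

The 99th-percentile height and the occupancy-grid area are deliberately simple; real pipelines typically segment the plant from soil and pot first and may fit an ellipse to the canopy footprint to obtain the major and minor diameters.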
Applications of UAV-based Sensing for Phenotyping and Precision Agriculture
Comparative analysis of feature selection techniques to identify a set of optimal features for crop yield estimation using UAS-based multisensor data
Mohammad Abdus Shahid Rafi, Volkan Senyurek, John E. Ball, et al.
Accurate crop yield estimation is essential for decision-making and planning in modern global agriculture, including in the USA, and is becoming increasingly critical for sustainable productivity to tackle climate change and food security challenges. Yield estimation provides farmers with insights into potential yields, enabling optimized resource allocation (water, fertilizer, pesticides, etc.), refined agricultural management strategies, and enhanced overall profitability. This research focused on feature selection for yield estimation of two of the top six crops in Mississippi and the US: corn and cotton. The study was conducted over a 2.31-hectare field encompassing both crop fields at the R. R. Foil Plant Science Research Center in Mississippi, US. The experiment took place over four consecutive years (2020–2023) during the crop growth period, from late April to mid-October each year. Weekly data were collected utilizing multispectral cameras and LiDAR sensors mounted on unmanned aircraft systems (UAS). Soil moisture (SM) maps and soil temperature data were obtained from volumetric soil moisture probes, while all weather and environmental parameters were sourced from a nearby field weather station. The dataset for each plot consisted of over 30 features across five major categories, with ground truth yield data collected for 235 plots (30 cotton and 30 corn plots) over the 4-year period. The investigation of feature selection techniques included Pearson's correlation coefficient filtering, recursive feature elimination wrapping, and recursive groupwise wrapping, with the aim of identifying the most relevant features for yield estimation. For performance evaluation, crop yield prediction was conducted using a Long Short-Term Memory (LSTM) network, with root mean square error (RMSE) used as the performance metric. Among the methods, recursive groupwise wrapping produced comparatively lower RMSEs, indicating superior performance over selected weeks. Ultimately, the feature selection process enhances crop yield estimation accuracy compared to the usage of all available features.
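Of the three techniques compared, Pearson's correlation filtering is the simplest: rank every feature by the absolute correlation of its values with the yield, and keep the strongest. A self-contained sketch on synthetic data (not the study's features or thresholds):

```python
import numpy as np

def pearson_filter(X, y, top_k=5):
    """Rank features by |Pearson r| with the target and keep the top_k.
    X: (samples, features), y: (samples,). Returns selected column indices."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.argsort(-np.abs(r))[:top_k]

# Synthetic example: feature 0 drives yield, feature 2 is anti-correlated,
# the remaining four columns are pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=200)
selected = pearson_filter(X, y, top_k=2)
print(sorted(selected.tolist()))
```

The wrapper methods in the abstract (recursive feature elimination and recursive groupwise wrapping) go further: they repeatedly retrain the predictor with candidate feature subsets and score each subset by its prediction error, which is costlier but captures feature interactions that a per-feature correlation filter misses.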
Leveraging stacked generalization for peanut maturity mapping using aerial multispectral imagery and growing degree days
Sathish Raymond Emmanuel Sahayaraj, Abhilash K. Chandel, Maria Balota, et al.
Timely estimation of peanut (Arachis hypogaea L.) maturity is critical for precision harvest to achieve maximum yield and quality. Traditional methods are labor-intensive and subjective: the peanut pod's outer layer is removed manually or with pressurized water, and the color of the inner layer is then inspected for maturity grading. This study introduces a novel automated, non-contact digital approach for mapping peanut maturity using small unoccupied aerial system-based multispectral imagery and weather data. Multi-temporal imagery was collected at 1 cm/pixel resolution over three seasons (2022-2024) for Bailey-II, a commercial cultivar planted in experimental trials and grower farms. Five canopy reflectance and fifteen vegetation index (VI) features were extracted for each flight campaign. Peanut samples were manually dug prior to each flight campaign, then pressure-washed and graded to record ground-truth Peanut Maturity Index (PMI) values. Aerial-imagery-derived reflectance and VI features were fused with weather-derived accumulated growing degree days (AGDD) up to each flight campaign in a stacked ensemble learning framework for PMI prediction. The framework consisted of three base learners that first predicted PMI separately from reflectance, VIs, and AGDD, respectively, and a meta-learner that integrated the base learners' predictions into a final PMI estimate. Among the base learner models, support vector machine for spectral bands (test R2 = 0.51, RMSE = 0.14), neural network for VIs (test R2 = 0.31, RMSE = 0.17), and random forest for AGDD (test R2 = 0.71, RMSE = 0.11) were the best performers. Finally, k-nearest neighbors was the best-performing meta-learner, yielding an R2 of 0.75 and a root mean square error (RMSE) of 0.10 (<21%) when tested on a random independent dataset. The method yielded an R2 of 0.89 and RMSE of 18% when evaluated for mapping over a grower field. Overall, the novel method described in this study rapidly maps peanut maturity at field scale with minimal human intervention, thereby improving decision making for precision harvest and maximum productivity.
Machine learning models to detect strawberry plant health from UAVs for real-time applications
Jahin Mahbub, Subodh Bhandari, Amar Raheja, et al.
The intersection of artificial intelligence (AI), machine learning (ML), and precision agriculture has unlocked unprecedented opportunities for sustainable and efficient farming. Leveraging these technologies allows for analyzing extensive agricultural data to monitor plant health, predict yields, and implement data-driven interventions. This paper presents a real-time, machine learning-based framework developed for assessing the health of strawberry plants using RGB imagery collected via UAVs. Advanced object detection models, including YOLOv11 and Faster R-CNN with an SSD MobileNet V3 backbone, were trained on labeled RGB images and will be deployed on an NVIDIA Jetson Nano for on-site, instantaneous evaluation. The framework detects individual plants, classifies their health based on visual patterns, and enables edge-based inference without the need for external computational infrastructure. Results revealed that Faster R-CNN achieved a superior F1 score of 81.59%, while YOLOv11-Large (YOLOv11l) reached 56.22%, demonstrating the practical applicability and effectiveness of real-time RGB-based monitoring in precision agriculture.
Faba bean crop plant identification using aerial multispectral imagery and convolutional neural network-based deep learning models
Pius Jjagwe, Abhilash K. Chandel, Maria Balota, et al.
Faba bean (Vicia faba L.) is an important legume crop, offering benefits as a food source for humans and animals as well as a cover crop. For these reasons, it is envisioned to be a potential winter crop for the Mid-Atlantic region of the USA. To accelerate faba bean breeding programs in the Mid-Atlantic, where traditional phenotyping methods are highly labor-intensive and spatially subjective, it is vital to develop high-throughput phenotyping methods for accurate and accelerated characterization of faba bean crops. This study is aimed at developing an automated faba bean plant identification technique to aid in counting plant stands, critical for biomass and yield estimations. High-resolution multispectral images were acquired using a small unoccupied aircraft system (SUAS) flown over one location (96 faba bean breeding trials) from October through December 2024. Four convolutional neural network-based deep learning (DL) models (YOLOv10x, YOLO11x, YOLO12x, and YOLO12n) were formulated, trained, and tuned to process the acquired images and identify faba bean plants. The performance of the four models was evaluated at two epoch settings (50 and 100) and two train-validation data split configurations (70:30 and 80:20). YOLOv10x performed best at 50 epochs and the 80:20 split, while YOLO12x performed best at 100 epochs and the 70:30 split. Across all models, performance improved when moving from the 70:30 to the 80:20 split. Based on mAP50 at 100 epochs and the 80:20 split, YOLO12n performed best, followed by YOLOv10x, YOLO11x, and YOLO12x. YOLO12n showed the best and most consistent accuracy for faba bean stand count estimation, showing potential for automated phenotyping of the crop.
Poster Session
Branched broomrape detection in tomato farms using satellite imagery and time-series analysis
Mohammadreza Narimani, Alireza Pourreza, Ali Moghimi, et al.
Branched broomrape (Phelipanche ramosa (L.) Pomel) is a chlorophyll-deficient parasitic plant that seriously threatens tomato production by extracting essential nutrients from its host, potentially reducing yields by up to 80%. Its primarily subterranean lifecycle and ability to produce over 200,000 seeds per plant—with seeds remaining viable in the soil for up to 20 years—make early detection essential for effective management. This study presents an end-to-end pipeline that utilizes Sentinel-2 satellite imagery and time-series analysis to identify broomrape-infested tomato fields in California. Regions of interest were defined using farmer-reported infestations, and satellite images with less than 10% cloud cover were selected. Twelve spectral bands were downloaded alongside sun-sensor geometry metadata, and 20 spectral vegetation indices (e.g., Normalized Difference Vegetation Index, Normalized Difference Moisture Index) were calculated. Additionally, five plant traits (e.g., Leaf Area Index, Canopy Chlorophyll Content) were derived using a neural network model calibrated with both ground truth and synthetic data. Canopy Chlorophyll Content trends were used to delineate transplanting-to-harvest periods, with phenological stages aligned using growing degree days derived from local weather data. Vegetation pixels were identified by masking out non-vegetative areas and then used to train a Long Short-Term Memory (LSTM) network on 18,874 pixels across 48 GDD time points. The model achieved 88% training and 87% test accuracy, with precision, recall, and F1 scores of 0.86, 0.92, and 0.89, respectively. Permutation feature importance analysis identified Normalized Difference Moisture Index, Canopy Chlorophyll Content, Fraction of Absorbed Photosynthetically Active Radiation, and Chlorophyll Red-Edge as the most informative features, consistent with the physiological effects of broomrape infestation on water and chlorophyll content. Our results demonstrate the potential of satellite-driven time-series modeling for detecting parasitic stress in tomato farms.
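The vegetation indices named in the abstract are simple normalized band ratios. A minimal sketch of the two that ranked among the most informative inputs — NDVI and NDMI — with invented per-pixel reflectance values for illustration (for Sentinel-2, NIR is band B8, red is B4, and SWIR is B11):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndmi(nir, swir):
    """Normalized Difference Moisture Index (sensitive to canopy water content)."""
    return (nir - swir) / (nir + swir)

# Hypothetical per-pixel reflectances: a healthy canopy vs. a stressed canopy.
nir  = np.array([0.45, 0.40])
red  = np.array([0.05, 0.08])
swir = np.array([0.15, 0.30])
print(ndvi(nir, red))   # high for vegetation
print(ndmi(nir, swir))  # drops as canopy water content falls
```

Computed per pixel and per acquisition date, such indices form the time series over growing-degree-day time points that the LSTM consumes.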
Leaf spectral reflectance prediction using multihead attention neural networks
Parastoo Farajpoor, Alireza Pourreza, Mohammadreza Narimani, et al.
Accurate modeling of leaf spectral reflectance from physiological and biochemical traits is essential for advancing remote sensing applications in plant science and precision agriculture. Widely used radiative transfer models, such as PROSPECT-PRO, rely on generalized trait–reflectance relationships developed from a wide range of species, which may not fully capture the spectral behavior of specific crops like grapevines. In this study, we developed a trait-to-spectra prediction model using a multi-head attention neural network trained on a grapevine-specific dataset that includes 16 leaf traits measured across multiple varieties, growth stages, and years. The model was evaluated using stratified 5-fold cross-validation and achieved an average coefficient of determination (R2) of 0.84 and normalized root mean squared error (NRMSE) of 1.52%, demonstrating high accuracy and generalizability. When compared to PROSPECT-PRO in forward mode, the neural network exhibited lower mean absolute error (MAE), especially in the near-infrared (NIR) and shortwave-infrared (SWIR) regions. These results emphasize the importance of species-specific modeling approaches and show that integrating biochemical and structural traits into data-driven architectures can significantly improve spectral prediction. The proposed model provides a robust framework for generating accurate leaf-level reflectance data, with potential applications in canopy trait retrieval, vineyard monitoring, and remote sensing-driven crop management.
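The core of a multi-head attention network is scaled dot-product attention computed in parallel over several heads. The NumPy sketch below illustrates that mechanism in isolation — the dimensions, weights, and token layout are invented for the example and are not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """Minimal multi-head self-attention over a sequence of trait tokens.
    x: (seq, d_model); all weight matrices: (d_model, d_model)."""
    seq, d_model = x.shape
    d_head = d_model // n_heads

    # Project and split into heads: (n_heads, seq, d_head).
    def split(W):
        return (x @ W).reshape(seq, n_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(Wq), split(Wk), split(Wv)
    # Scaled dot-product attention, independently per head.
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
    # Merge heads back to (seq, d_model) and apply the output projection.
    out = (scores @ v).transpose(1, 0, 2).reshape(seq, d_model)
    return out @ Wo

# Hypothetical setup: 16 trait tokens embedded into an 8-dim model, 2 heads.
rng = np.random.default_rng(0)
d_model, n_heads = 8, 2
x = rng.normal(size=(16, d_model))
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4))
out = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads)
print(out.shape)
```

In a trait-to-spectra model of this kind, attention lets each trait token weigh its interactions with every other trait before the network maps the pooled representation to a reflectance spectrum; a deep-learning framework would of course learn the weight matrices rather than draw them at random.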