Abstract
High-throughput plant phenotyping (HTP) has emerged as a crucial element of modern plant science and crop breeding, enabling the swift, large-scale, and non-invasive evaluation of plant characteristics. The integration of machine learning (ML), particularly deep learning approaches, has transformed HTP by automating feature extraction, improving predictive accuracy, and enabling thorough analysis under diverse environmental conditions. This review compiles the latest advancements in ML-based phenotyping, covering traditional algorithms, convolutional neural networks, transformer models, and innovative 3D reconstruction techniques. It surveys advanced phenotyping technologies, encompassing controlled-environment systems, field robotics, and UAV-driven imaging, along with novel instruments such as ChronoRoot 2.0 and PhenoAssistant. Applications discussed include yield forecasting, stress identification, trait quantification, and weed discrimination. Additionally, the review examines significant issues such as dataset constraints, model interpretability, and scalability, and suggests future directions involving multimodal integration, open data standards, explainable AI, and affordable phenotyping methods. This synthesis is designed to help researchers leverage ML in phenotyping pipelines, thereby promoting precision agriculture and accelerating breeding initiatives.
Introduction
The world faces the dual challenges of a rapidly growing population—projected to reach 10 billion by 2050—and climate change, which increases abiotic stresses like drought and heat that threaten crop production. Meeting the consequent need for a 35-56% increase in food production requires developing resilient crop varieties capable of thriving under such stresses.
Traditional plant breeding methods, often conducted under optimal conditions, struggle to identify complex stress-tolerance traits governed by multiple genes and genotype-by-environment interactions. Moreover, phenotyping—the measurement of plant traits—has long been a major bottleneck: its labor-intensive and slow nature cannot keep pace with advances in genotyping technologies. Even with high-throughput plant phenotyping (HTPP) tools, the vast and complex data generated create challenges in analysis.
HTPP, employing diverse sensing platforms (ground-based systems, UAVs, satellites) and sensors (RGB, multispectral, thermal, LiDAR, fluorescence), enables rapid, non-invasive, and large-scale plant trait measurement. The effective use of HTPP depends heavily on machine learning (ML) and deep learning to analyze the massive, multi-dimensional datasets, extracting meaningful features for classification, prediction, and decision-making in plant breeding.
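As a concrete example of turning multispectral sensor data into a trait measurement, the sketch below computes NDVI from red and near-infrared bands and delineates canopy pixels with Otsu's threshold. This is a minimal NumPy illustration: the "image" is synthetic and the reflectance values are illustrative assumptions, not data from any of the reviewed platforms.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def otsu_threshold(values, bins=256):
    # Otsu's method: pick the threshold maximizing between-class variance.
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # cumulative weight of the low class
    w1 = 1.0 - w0                     # weight of the high class
    cum_mean = np.cumsum(p * mids)
    m0 = cum_mean / np.clip(w0, 1e-12, None)              # low-class mean
    m1 = (cum_mean[-1] - cum_mean) / np.clip(w1, 1e-12, None)  # high-class mean
    between = w0 * w1 * (m0 - m1) ** 2
    return mids[np.argmax(between[:-1])]  # last bin is degenerate (w1 = 0)

# Synthetic two-band "image": a vegetation patch (high NIR reflectance)
# on a soil-like background where red and NIR are similar.
rng = np.random.default_rng(1)
red = rng.uniform(0.3, 0.4, size=(64, 64))
nir = rng.uniform(0.3, 0.4, size=(64, 64))
nir[16:48, 16:48] = rng.uniform(0.7, 0.9, size=(32, 32))

index = ndvi(nir, red)
t = otsu_threshold(index.ravel())
canopy = index > t
print(f"threshold = {t:.3f}, canopy fraction = {canopy.mean():.3f}")
```

From the canopy mask, simple traits such as projected canopy area or mean NDVI per plot follow directly; the same two-step pattern (spectral index, then thresholding) underlies many of the NDVI-based canopy delineation pipelines discussed later.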
Machine learning paradigms used include supervised learning (for classification and regression), unsupervised learning (for clustering and pattern discovery), semi-supervised learning, and reinforcement learning (less common, with potential in robotics). Modern HTPP increasingly integrates multi-omics data (genomics, transcriptomics, proteomics, metabolomics) with environmental and phenotypic data to model complex genotype-by-environment interactions at a systems biology level.
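The contrast between the supervised and unsupervised paradigms can be made concrete on phenotypic feature vectors. The sketch below is a minimal NumPy illustration on synthetic data: the two "traits" (canopy area and a vegetation index), the class structure, and all numeric values are assumptions for demonstration, not results from the reviewed studies. Supervised learning uses stress labels to fit a nearest-centroid classifier; unsupervised learning (k-means) recovers similar groups from the features alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic phenotypic features: [canopy area, vegetation index]
# for well-watered (control) vs. drought-stressed plants.
control = rng.normal(loc=[8.0, 0.8], scale=0.5, size=(50, 2))
stressed = rng.normal(loc=[5.0, 0.5], scale=0.5, size=(50, 2))
X = np.vstack([control, stressed])
y = np.array([0] * 50 + [1] * 50)  # 0 = control, 1 = stressed

# --- Supervised: nearest-centroid classification using the labels ---
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(samples):
    # Assign each sample to the class whose centroid is nearest.
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

train_acc = (classify(X) == y).mean()

# --- Unsupervised: k-means clustering, labels never consulted ---
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
    centers = np.array([X[assign == k].mean(axis=0) if np.any(assign == k)
                        else centers[k] for k in (0, 1)])

print(f"supervised training accuracy: {train_acc:.2f}")
print(f"cluster sizes: {np.bincount(assign)}")
```

On well-separated data like this, both paradigms recover the stress grouping; in practice the choice hinges on whether labeled examples (e.g., scored stress phenotypes) are available at scale.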
Classical ML algorithms such as Support Vector Machines (SVMs), Random Forests (RFs), and clustering methods remain widely used in HTPP for tasks like stress phenotyping, disease detection, and plant segmentation, while deep learning methods automate feature extraction and improve accuracy.
The literature emphasizes that for global adoption, especially in less-developed countries, HTPP tools must be cost-effective and user-friendly. Collaboration across plant breeders, climate scientists, and crop modelers is essential to address climate challenges and optimize breeding strategies. Advances in big data and crop growth modeling facilitate precise selection of cultivars suited to specific environments, while standardization in data handling and sharing is critical for public breeding networks.
Case studies, such as barley drought tolerance mapping using HTPP, illustrate the identification of quantitative trait loci (QTLs) associated with improved stress responses, highlighting the practical breeding benefits of integrating phenotyping, genotyping, and ML analyses.
Conclusion
Across the reviewed studies, it is evident that modern computer vision and deep learning methods are revolutionizing agricultural monitoring, yield estimation, and quality assessment. Applications such as fruit detection and counting, wheat ear density estimation, citrus tree canopy mapping, and hyperspectral pesticide residue detection consistently demonstrate that AI-driven approaches can outperform traditional manual and statistical methods in both accuracy and efficiency. Techniques ranging from object detection (e.g., Faster R-CNN) and semantic segmentation to specialized architectures (e.g., TasselNet, AlexNet) have proven effective under diverse conditions, with accuracies often exceeding 90% and, in some cases, achieving near-perfect detection.
Furthermore, UAV-based multispectral and RGB imaging platforms provide scalable, high-throughput data acquisition, enabling large-scale phenotyping and orchard management with minimal human labor. While certain challenges remain—such as robustness under varying environmental conditions, spectral similarity between classes, and the need for extensive labeled datasets—the integration of optimized preprocessing methods (e.g., Otsu segmentation, NDVI-based canopy delineation) and model-specific improvements can mitigate these limitations.
Collectively, these findings reinforce the role of AI-powered image analysis as a transformative tool for precision agriculture. By enabling accurate, rapid, and non-destructive assessment of crop traits, these technologies not only enhance decision-making for breeders and growers but also contribute to sustainability by optimizing resource use and reducing waste. Future research should focus on improving model generalizability, automating pipeline integration, and extending applicability across crops, geographies, and environmental scenarios.
References
[1] J. L. Araus, S. C. Kefauver, M. Zaman-Allah, M. S. Olsen, and J. E. Cairns, “Translating High-Throughput Phenotyping into Genetic Gain,” Trends Plant Sci., 2018. Accessed: Aug. 11, 2025. [Online]. Available: https://www.cell.com/trends/plant-science/abstract/S1360-1385(18)30020-7
[2] N. Honsdorf, T. J. March, B. Berger, M. Tester, and K. Pillen, “High-Throughput Phenotyping to Detect Drought Tolerance QTL in Wild Barley Introgression Lines,” PLoS ONE, 2014, doi: 10.1371/journal.pone.0097047.
[3] W. E. Clarke et al., “A high-density SNP genotyping array for Brassica napus and its ancestral diploid species based on optimised selection of single-locus markers in the allotetraploid genome,” Theor. Appl. Genet., vol. 129, no. 10, pp. 1887–1899, Jun. 2016, doi: 10.1007/s00122-016-2746-7.
[4] H. O. Awika et al., “Developing Growth-Associated Molecular Markers Via High-Throughput Phenotyping in Spinach,” Plant Genome, vol. 12, no. 3, p. 190027, 2019, doi: 10.3835/plantgenome2019.03.0027.
[5] H. S. Baweja, T. Parhar, O. Mirbod, and S. Nuske, “StalkNet: A Deep Learning Pipeline for High-Throughput Measurement of Plant Stalk Count and Stalk Width,” 2017. doi: 10.1007/978-3-319-67361-5_18.
[6] S. Das Choudhury, A. Samal, and T. Awada, “Leveraging Image Analysis for High-Throughput Plant Phenotyping,” Front. Plant Sci., 2019, doi: 10.3389/fpls.2019.00508.
[7] Y. Jiang and C. Li, “Convolutional Neural Networks for Image-Based High-Throughput Plant Phenotyping: A Review,” Plant Phenomics, vol. 2020, p. 4152816, Jan. 2020, doi: 10.34133/2020/4152816.
[8] N. Häni, P. Roy, and V. Isler, “A comparative study of fruit detection and counting methods for yield mapping in apple orchards,” J. Field Robot., vol. 37, no. 2, pp. 263–282, 2020, doi: 10.1002/rob.21902.
[9] S. Madec et al., “Ear density estimation from high resolution RGB imagery using deep learning technique,” Agric. For. Meteorol., vol. 264, pp. 225–234, Jan. 2019, doi: 10.1016/j.agrformet.2018.10.013.
[10] “UAV-Based High Throughput Phenotyping in Citrus Utilizing Multispectral Imaging and Artificial Intelligence,” Remote Sens., vol. 11, no. 4, p. 410, 2019. Accessed: Aug. 11, 2025. [Online]. Available: https://www.mdpi.com/2072-4292/11/4/410
[11] B. Jiang et al., “Fusion of machine vision technology and AlexNet-CNNs deep learning network for the detection of postharvest apple pesticide residues,” Artif. Intell. Agric., vol. 1, pp. 1–8, Mar. 2019, doi: 10.1016/j.aiia.2019.02.001.