Agriculture plays a key role in ensuring food security and economic growth. However, farmers often struggle with choosing the right crops, predicting yield, managing resources, and spotting plant diseases. Traditional farming depends on manual decision-making and limited data analysis. This can result in lower productivity and wasted resources. To solve these problems, this paper proposes an artificial intelligence (AI)-based advisory system for fruits and vegetables. This system combines crop recommendations, yield predictions, disease detection, and resource management. The proposed system uses agricultural factors like soil nutrients (NPK values, pH), temperature, rainfall, humidity, and past crop data. A fuzzified recurrent neural network (FRNN) is used for crop recommendations since it manages uncertainty and time patterns in agricultural data effectively. The model examines input conditions and suggests the best crops for farming. Additionally, regression methods are used for yield prediction to estimate expected production. To assist farmers during the growing phase, the system includes an image-based module for disease detection. Plant images are prepared using techniques like resizing, noise reduction, and contrast improvement for better quality. A convolutional neural network (CNN) then accurately identifies plant diseases and provides treatment suggestions, including fertilizers and preventive actions.
Moreover, the system recommends the best way to allocate resources like water, fertilizers, and land to enhance efficiency and sustainability. A user-friendly web interface created with Flask allows farmers to easily engage with the system and access recommendations. The proposed framework supports data-driven decision-making at every stage of the crop lifecycle. This helps increase productivity, reduce losses, and foster sustainable farming practices.
Introduction
This paper presents an AI-based smart agriculture system designed to improve crop selection, resource management, and plant disease detection using machine learning and deep learning techniques.
Agriculture faces challenges from variable environmental factors such as soil nutrients, temperature, rainfall, and humidity. Traditional farming methods are not sufficient for handling such complex and changing data, often leading to poor decisions and reduced productivity. To address this, the study proposes a data-driven system using AI.
The core model is a Fuzzified Recurrent Neural Network (FRNN), which combines fuzzy logic with a recurrent neural network to handle uncertainty and time-based patterns in agricultural data. Preprocessing techniques such as Min-Max normalization standardize the input data, while ReLU activation, Softmax output, and mini-batch SGD improve learning performance and accuracy. The system provides Top-3 crop recommendations based on the prevailing environmental conditions.
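The paper does not spell out the FRNN's exact architecture, so the following is only a minimal NumPy sketch of the ideas named above: Min-Max scaling, triangular fuzzification of each scaled feature, a ReLU recurrent cell over a sequence of readings, and a Softmax layer whose three highest probabilities form the Top-3 recommendation. The crop names, feature ordering, membership functions, and untrained random weights are illustrative assumptions, and training with mini-batch SGD is omitted.

```python
import numpy as np

def min_max_scale(x, lo, hi):
    """Min-Max normalization of raw readings to [0, 1]."""
    return (x - lo) / (hi - lo)

def fuzzify(x):
    """Triangular membership degrees for 'low', 'medium', 'high'
    over a [0, 1]-scaled feature (an assumed fuzzification scheme)."""
    low = np.clip(1.0 - 2.0 * x, 0.0, 1.0)
    med = np.clip(1.0 - 2.0 * np.abs(x - 0.5), 0.0, 1.0)
    high = np.clip(2.0 * x - 1.0, 0.0, 1.0)
    return np.stack([low, med, high], axis=-1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TinyFRNN:
    """Forward pass only: fuzzified inputs -> ReLU recurrent cell -> Softmax."""
    def __init__(self, n_features, n_crops, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        d = n_features * 3                       # 3 membership degrees per feature
        self.Wx = rng.normal(0, 0.1, (d, hidden))
        self.Wh = rng.normal(0, 0.1, (hidden, hidden))
        self.Wo = rng.normal(0, 0.1, (hidden, n_crops))

    def predict_proba(self, seq):
        """seq: (timesteps, n_features) array of [0, 1]-scaled readings."""
        h = np.zeros(self.Wh.shape[0])
        for x_t in seq:
            z = fuzzify(x_t).ravel() @ self.Wx + h @ self.Wh
            h = np.maximum(z, 0.0)               # ReLU activation
        return softmax(h @ self.Wo)

# Hypothetical crop labels and four weeks of fake sensor readings
# (columns: N, P, K, pH, temperature, rainfall, humidity)
crops = ["rice", "maize", "cotton", "banana", "chickpea"]
rng = np.random.default_rng(1)
raw = rng.uniform(20, 80, (4, 7))
season = min_max_scale(raw, raw.min(axis=0), raw.max(axis=0))

model = TinyFRNN(n_features=7, n_crops=len(crops))
probs = model.predict_proba(season)
top3 = [crops[i] for i in np.argsort(probs)[::-1][:3]]
print(top3)
```

With untrained weights the ranking is arbitrary; the point is only the data flow from fuzzified readings to a Top-3 list.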
In addition to crop prediction, the system includes a disease detection module using image processing. Plant leaf images are enhanced and analyzed using a Convolutional Neural Network (CNN) to identify diseases based on patterns like color and texture.
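As a concrete illustration of the preprocessing steps named above (resizing, noise reduction, contrast improvement), the sketch below applies nearest-neighbour resizing, a 3×3 mean filter, and a linear contrast stretch to a synthetic grayscale array. The 128×128 target size and the specific filter choices are assumptions for illustration, not the paper's stated pipeline, and the CNN classifier itself is omitted.

```python
import numpy as np

def resize_nearest(img, size=(128, 128)):
    """Nearest-neighbour resize to a fixed CNN input shape."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows][:, cols]

def denoise_mean3(img):
    """3x3 mean filter as a simple noise-reduction step."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def stretch_contrast(img):
    """Linear contrast stretch to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

# Synthetic grayscale "leaf" image standing in for a real photo
rng = np.random.default_rng(0)
leaf = rng.uniform(0.3, 0.6, (200, 300))
x = stretch_contrast(denoise_mean3(resize_nearest(leaf)))
print(x.shape)
```

In practice each step would be handled by an image library, but the sequence — normalize geometry, suppress noise, expand dynamic range — is what prepares the leaf image for the CNN.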
The system is implemented as a full-stack web application using React.js for the frontend and Node.js for the backend, making it scalable and user-friendly.
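The abstract describes the interface as Flask-based while this section names React.js and Node.js; assuming a Python service is in the loop, a minimal prediction endpoint might look like the hypothetical sketch below. The route name, field names, and the stub recommender are all invented for illustration.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def recommend_crops(features):
    """Placeholder for the trained recommender; in the real system this
    would invoke the FRNN model. The fixed output is illustrative only."""
    return ["rice", "maize", "banana"]

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.get_json()
    required = ["N", "P", "K", "ph", "temperature", "rainfall", "humidity"]
    missing = [f for f in required if f not in data]
    if missing:
        return jsonify({"error": f"missing fields: {missing}"}), 400
    return jsonify({"top3": recommend_crops([data[f] for f in required])})

if __name__ == "__main__":
    app.run(debug=True)
```

A React.js frontend would simply POST the form values as JSON to this route and render the returned Top-3 list.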
The literature review highlights previous work using machine learning, fuzzy logic, and deep learning in agriculture but identifies limitations such as lack of uncertainty handling, limited integration of multiple tasks, and high computational complexity. These gaps motivate the proposed FRNN-based hybrid system.
Overall, the system integrates crop recommendation, resource optimization, and disease detection into a single intelligent framework, improving agricultural productivity and decision-making.
Conclusion
In this study, an intelligent agricultural advisory system based on FRNN and CNN models was developed and thoroughly evaluated. By integrating both environmental data and plant leaf image analysis, the proposed system achieves superior performance compared to standalone FRNN and CNN models. Experimental results demonstrate high overall accuracy (93.1%), precision (91.8%), recall (94.0%), F1-score (92.9%), and AUC (0.96), confirming the effectiveness of combining multi-modal inputs for agricultural decision-making. Confusion matrix analysis and visual evaluation further validate the model’s ability to accurately recommend suitable crops and detect plant diseases while minimizing false predictions, highlighting its practical applicability in real-world farming. The study also shows that combining environmental parameters with image-based features significantly improves prediction performance, especially in complex or uncertain agricultural conditions where a single data source may not be sufficient. The system demonstrates strong generalization capability on unseen data and provides reliable outputs that can assist farmers in making informed decisions, ultimately improving crop yield and reducing losses.
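For reproducibility, the metric set reported above can be computed with scikit-learn as shown below. The labels and scores here are toy values chosen for illustration and are unrelated to the paper's evaluation data.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Toy binary ground truth and model scores standing in for a real test set
y_true  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_score = [0.9, 0.2, 0.8, 0.7, 0.4, 0.6, 0.3, 0.55, 0.85, 0.65]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]   # threshold at 0.5

acc  = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred)
rec  = recall_score(y_true, y_pred)
f1   = f1_score(y_true, y_pred)
auc  = roc_auc_score(y_true, y_score)   # AUC uses scores, not hard labels
print(acc, prec, rec, f1, auc)
```

For the multi-class crop-recommendation case, precision, recall, and F1 would additionally need an averaging mode (e.g. `average="macro"`), which the paper does not specify.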
Looking forward, the proposed system can be extended to support additional agricultural functionalities such as fertilizer recommendation, irrigation planning, and crop growth monitoring. Incorporating real-time data from IoT devices and satellite imagery can further enhance prediction accuracy and enable dynamic decision support. The integration of explainable AI techniques can also improve transparency by providing clear insights into how recommendations are generated, increasing user trust.
Furthermore, deploying the system through mobile applications or cloud-based platforms can make it easily accessible to farmers, including those in remote and resource-limited areas. This work establishes a robust and scalable framework for smart agriculture, demonstrating how multi-modal AI models can transform traditional farming practices. Overall, the proposed system highlights the potential of intelligent technologies to support precision agriculture, improve productivity, and promote sustainable farming for the future.