Abstract
Accurate crop identification is vital in agriculture for planning, loan processing, and state oversight. Crop details are often verified through manual field inspection, which is slow and prone to error. This study presents a crop classification system based on satellite imagery and deep learning. A labelled dataset of satellite images collected from Google Earth was assembled, comprising one hundred images of rice, wheat, sugarcane, coconut, and barren land. In MATLAB, the dataset was trained using transfer learning with pre-trained models including ResNet-18, ResNet-50, ResNet-101, MobileNetV2, SqueezeNet, GoogLeNet, and InceptionV3. Model performance was assessed through classification accuracy and confusion-matrix analysis. A simple land database was also constructed to link survey numbers to the satellite image and the predicted crop type. The findings demonstrate that deep learning can classify crops effectively from satellite imagery, and the system could be improved by enlarging the dataset and incorporating data from multiple seasons.
Introduction
This study proposes a satellite-image-based crop classification and verification system to improve agricultural monitoring and land-use validation. Accurate crop identification is essential for government agencies, banks, and insurance companies, yet traditional field inspections are labor-intensive, time-consuming, and impractical for large-scale agricultural regions. The proposed system leverages deep learning and remote sensing to automate crop detection and link predictions with official land survey records.
The system utilizes satellite images collected from Google Earth and classifies five crop categories—rice, wheat, sugarcane, coconut, and barren land—using transfer learning with pre-trained convolutional neural networks (CNNs). Multiple architectures were evaluated, including ResNet-18, ResNet-50, ResNet-101, MobileNetV2, InceptionV3, GoogLeNet, and SqueezeNet. After comparative experimentation using optimizers such as Adam, SGDM, and RMSprop, ResNet-101 with the Adam optimizer (learning rate = 0.001, 25 epochs) achieved the highest classification accuracy and balanced precision–recall performance.
The proposed architecture consists of four main modules:
User input (survey number and satellite image upload),
Crop classification using a fine-tuned CNN model,
Database verification linking survey numbers to registered crop details, and
Result generation (“Pass” if predicted and registered crops match, “Fake” otherwise).
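The four modules above reduce to a simple decision rule once the CNN has produced a prediction. A minimal sketch of the verification step follows; the function and field names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the verification module: compare the CNN's
# predicted crop against the crop registered for that survey number.
def verify_crop(survey_no, predicted_crop, land_db):
    """Return 'Pass' if prediction matches the registered crop,
    'Fake' otherwise, or an error string for unknown survey numbers."""
    record = land_db.get(survey_no)
    if record is None:
        return "Unknown survey number"
    registered = record["crop"]
    return "Pass" if predicted_crop.lower() == registered.lower() else "Fake"

# Toy land database keyed by survey number (illustrative values).
land_db = {
    "SN-101": {"crop": "rice", "coordinates": (10.93, 76.95)},
    "SN-102": {"crop": "coconut", "coordinates": (10.91, 76.99)},
}

print(verify_crop("SN-101", "rice", land_db))   # → Pass
print(verify_crop("SN-102", "wheat", land_db))  # → Fake
```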
A structured land database was created to store survey number, coordinates, crop type, and image references. A MATLAB-based GUI enables user interaction for uploading images and retrieving classification results.
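One plausible realization of this land database is a single relational table; the paper does not specify a storage engine, so the SQLite schema below (table and column names included) is an assumption for illustration.

```python
# Assumed SQLite realization of the land database: one row per survey
# number, storing coordinates, registered crop type, and an image reference.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE land_records (
        survey_no TEXT PRIMARY KEY,
        latitude  REAL,
        longitude REAL,
        crop_type TEXT,
        image_ref TEXT
    )
""")
conn.execute(
    "INSERT INTO land_records VALUES (?, ?, ?, ?, ?)",
    ("SN-101", 10.93, 76.95, "sugarcane", "images/sn101.png"),
)
conn.commit()

# Lookup used by the verification step: survey number → registered crop.
row = conn.execute(
    "SELECT crop_type FROM land_records WHERE survey_no = ?", ("SN-101",)
).fetchone()
print(row[0])  # → sugarcane
```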
Performance evaluation employed standard metrics including Accuracy, Precision, Recall, and F1-score derived from confusion matrices. Results demonstrated high overall accuracy, with minor misclassification occurring between spectrally similar crops. Barren land was identified with particularly high reliability.
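These metrics all derive from the confusion matrix in the standard way; the sketch below shows the computation on a small illustrative 3×3 matrix (the numbers are made up, not the paper's results).

```python
# Deriving Accuracy and per-class Precision/Recall/F1 from a confusion
# matrix, where cm[i][j] = samples of true class i predicted as class j.
def per_class_metrics(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(n)) / total
    metrics = []
    for i in range(n):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n)) - tp   # column minus diagonal
        fn = sum(cm[i]) - tp                        # row minus diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics.append((precision, recall, f1))
    return accuracy, metrics

cm = [[18, 2, 0],   # e.g. rice: 2 confused with a spectrally similar crop
      [1, 19, 0],   # wheat
      [0, 0, 20]]   # barren land: perfectly separated in this toy example
acc, per_class = per_class_metrics(cm)
print(round(acc, 3))  # → 0.95
```

Note how the toy matrix mirrors the reported behavior: the off-diagonal errors sit between the two similar crops, while barren land is classified cleanly.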
For real-world deployment, the framework can integrate with digital land records and satellite data sources such as Landsat and Sentinel-2, applying preprocessing steps like cloud filtering, band stacking, normalization, and vegetation-index computation. The system enables automated, scalable crop verification without physical field visits, supporting agricultural governance, financial verification, and insurance claim validation.
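Of the preprocessing steps listed, vegetation-index computation is the most self-contained; a common choice is NDVI = (NIR − Red) / (NIR + Red), computed per pixel from the red and near-infrared bands (for Sentinel-2, bands B4 and B8). The reflectance values below are illustrative, not from the dataset.

```python
# Per-pixel NDVI from red and near-infrared reflectance bands.
# A small epsilon guards against division by zero on dark pixels.
def ndvi(nir, red, eps=1e-9):
    return [(n - r) / (n + r + eps) for n, r in zip(nir, red)]

nir_band = [0.45, 0.50, 0.10]  # dense vegetation: high NIR reflectance
red_band = [0.05, 0.08, 0.09]  # and low red reflectance

values = ndvi(nir_band, red_band)
print([round(v, 2) for v in values])  # → [0.8, 0.72, 0.05]
```

High NDVI values (first two pixels) indicate healthy vegetation, while values near zero (third pixel) suggest bare soil, which is one reason barren land separates so cleanly from crops.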
Overall, the study demonstrates the feasibility of combining deep learning with structured land databases to create an intelligent, survey-linked crop verification mechanism that enhances efficiency, transparency, and scalability in agricultural monitoring.
Conclusion
The results indicate that satellite imagery, combined with AI-based classification, is effective for detecting crop patterns and flagging potential crop mismatches through the survey-number approach. Combining image processing with machine-learning models yields a more organized and automated approach to agricultural land verification. The evaluation results confirm the feasibility of the proposed system, with strong accuracy, precision, recall, and F1-score values supporting its use in the verification process. In future work, the system can be improved by incorporating multi-temporal satellite image time series to capture seasonal crop variation. Higher-resolution imagery and deep learning models such as attention-based networks could further improve accuracy. Integrating real-time satellite API services and government land-record databases could make the system fully automated. Moreover, extending the model to crop-health assessment, yield estimation, and anomaly detection would make it more valuable for agricultural management and financial verification.