Authors: Sahil Bhujbal, Pradnya Mandale, Vaishnavi Aher, Rushikesh Wable
DOI Link: https://doi.org/10.22214/ijraset.2023.47611
With the continuous integration of computer technology into agricultural production, personnel costs are reduced while production efficiency and quality improve. Crop disease control is an important part of agricultural production, and using computer vision technology to quickly and accurately identify crop diseases is an important means of ensuring a good harvest of agricultural products and promoting agricultural modernization. This paper proposes a deep-learning-based recognition method for soybean brown spot. The method is divided into image pre-treatment and disease identification. Building on traditional threshold segmentation, the pre-processing stage first uses the HSI colour space to filter out the information of the normal area of the leaf, applies OTSU thresholding to segment the original image in the Lab colour space, and then merges the segmented images to obtain the final spot segmentation image. Compared with the renderings of several other commonly used segmentation methods, this method better separates the lesions from the leaves. For disease identification, in order to adapt to the large-sample working conditions of farmland operations, a convolutional neural network (CNN) with consecutive convolutional layers was constructed with the help of Caffe to extract higher-level image features. For the activation function, this paper selects the Maxout unit for its stronger fitting ability; to reduce the number of network parameters and prevent overfitting, a sparse Maxout unit is used, which effectively improves the performance of the Maxout convolutional neural network. The experimental results show that the algorithm outperforms one based on an ordinary convolutional neural network in identifying large-sample crop diseases.
Disease is one of the main factors affecting soybean yields. The occurrence of leaf diseases readily causes changes in leaf colour or even leaf shedding, and large-scale leaf shedding reduces the resistance of plants to disease, which in turn leads to yield reduction or quality degradation. Traditional identification of crop diseases relies mainly on experience accumulated by farmers over successive generations of agricultural production, which places extremely high professional-knowledge demands on agricultural producers. However, many agricultural personnel do not have comprehensive knowledge of disease control, and when judging crop diseases they often rely on observation alone, which inevitably produces errors and hinders the timely treatment of crops. With the continuous improvement of computer hardware and the rapid growth of computing speed, pattern recognition and artificial intelligence have developed rapidly; image processing and machine learning technology have matured and begun to enter production and daily life, providing great convenience for people's daily labour [2–4]. In recent years, there have been an increasing number of studies on crop disease identification. Mishra et al. developed a maize leaf disease identification and classification system using a deep CNN model accelerated with Intel's Movidius Neural Compute Stick and deployed on a Raspberry Pi 3. The model achieved an accuracy of 88.46% in identifying maize leaf diseases. Kumar et al. adopted K-means segmentation and multiclass support vector machine (SVM)-based classification to identify and classify different plant leaf diseases; compared with other existing methods, the detection accuracy is improved. Mazzia et al.
proposed an LC&CC deep learning model that combines recurrent neural networks (RNNs) with convolutional neural networks (CNNs), reducing manual feature modelling. Çetin et al. adopted six different machine learning algorithms (decision tree, DT; random forest, RF; support vector machine, SVM; multiple linear regression, MLR; naive Bayes, NB; and multilayer perceptron, MLP) to evaluate and classify six different sunflower oilseed varieties. In order to control agricultural pests and diseases more effectively, minimize the use of pesticides, and achieve better crop management and production, Chodey et al. proposed an improved dragonfly algorithm based on neural network algorithms and K-means clustering. James and Sujatha proposed a hybrid neural clustering (HNC) classifier to classify ten apple fruit diseases in order to reduce the economic and production losses caused by fruit diseases.
Bankar et al. proposed a plant disease identification method based on colour, edge detection, and histogram matching to address the effects of crop diseases on plants. Golhani et al. successfully detected virus-infected oil palm seedlings at the nursery stage by means of spectral screening. Kurmi and Gangwar fused information extracted from available resources and optimized it to enhance recognition results. Sakai et al. used deep neural networks (DNNs) to extract and learn targets to achieve target category recognition; applying deep learning to vegetable object recognition, they identified eight kinds of vegetables and fruits with good recognition accuracy. Di and Qu proposed a method for detecting apple leaf diseases based on Tiny-YOLO, with mAP and mIoU of 99.86% and 83.54%, respectively. Li et al. proposed an improved Faster R-CNN bitter melon leaf disease detection method incorporating a feature pyramid network (FPN), whose average accuracy increased by 7.54% compared with the original. In the study by Zhang et al., the recognition accuracy of a tobacco disease identification model based on transfer learning with the InceptionV3 network was 90.80%. Fan et al. improved the regional convolutional neural network Faster R-CNN algorithm and optimized the training model using the stochastic gradient descent algorithm to achieve intelligent diagnosis of maize diseases with complex backgrounds and similar spot characteristics in the field environment. Building on previous research, this paper proposes a soybean disease identification method based on deep learning. This method adopts a new spot-cutting method and identifies lesions using consecutive convolutional and sparse Maxout convolutional neural networks. The model was constructed and verified with soybean brown spot as the research object.
This study has a positive effect on the rapid diagnosis of soybean disease, efficient management of soybean fields, and reduction of management costs.
A. Soybean Disease Image Acquisition
The collection of high-quality soybean disease pictures and the pretreatment of the disease areas under study are prerequisites for soybean disease classification. If there are few valid pictures in the sample database, or the spot segmentation is poor, the later classification and identification work will obviously suffer. Therefore, when collecting database pictures, this article uses a high-definition digital camera to ensure the quality of the pictures taken, avoiding debris such as dirt and insects, strong light exposure, and people or other objects in the background during shooting. Many images of soybean plants with brown spot were collected. The high-definition camera is used to photograph soybean disease in the early and middle stages, and the collected pictures are then batch-processed to a uniform size using professional tools.
B. Soybean Disease Image Background Removal
Since the follow-up study was conducted only on the leaves and the collected soybean disease pictures all had complex backgrounds, the first task was to remove the interference caused by the background and separate the soybean leaves from the original picture. The crop disease pictures collected by shooting have complex backgrounds, and the GrabCut algorithm can be used to remove the background information. Its principle is an improvement on the GraphCut algorithm, namely iterative GraphCut. The algorithm uses the texture (colour) information and boundary (contrast) information in the image, and a small amount of user interaction can obtain a good segmentation result. This method is used to automatically identify the background area of the entire image and then discard it; that is, the RGB pixel values of the background area are set to (0, 0, 0), yielding the original image and its background-removed counterpart.
C. Spot Cutting Method Based on HSI and Lab Color Space
Since the colour difference between the diseased spot and the surrounding normal area of the leaf is clearly expressed, a method of cutting the diseased spot based on the Hue-Saturation-Intensity (HSI) and Lab colour spaces is proposed to take advantage of this characteristic. The HSI colour model reflects the way the human visual system perceives colour, separates colour information from grayscale information, and is not sensitive to changes in light sources, so segmenting the diseased areas of the image in the HSI colour space is more effective. In the HSI colour model, hue reflects the human eye's perception of colour attributes such as red, green, and yellow; saturation indicates the purity of a colour; and intensity indicates how bright or dark a colour is. The HSI colour model is represented as a double hexagonal pyramid model.
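The standard geometric RGB-to-HSI conversion behind this step can be sketched in a few lines of NumPy. The pixel values below are illustrative assumptions (a healthy-green pixel versus a brown-spot pixel), chosen only to show that hue cleanly separates the two.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image in [0, 1] to HSI (H in radians, S and I in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10
    i = (r + g + b) / 3.0                                    # intensity
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)           # hue
    return np.stack([h, s, i], axis=-1)

# A healthy-green pixel vs. a brown-spot pixel (illustrative values)
pixels = np.array([[[0.2, 0.7, 0.2], [0.45, 0.30, 0.15]]])
hsi = rgb_to_hsi(pixels)
```

Filtering the normal leaf area then amounts to masking pixels whose hue lies in the green band (around 2π/3 radians), leaving the brown lesion pixels behind.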
D. Pretreatment Results and Analysis
In order to verify the segmentation effect of the proposed method, a comparative experiment of different algorithms is carried out.
The experiment applies the OTSU algorithm, the ultra-green feature algorithm, the genetic algorithm, the Lab grayscale map + OTSU algorithm, and the method proposed in this paper to disease spot image segmentation, and shows the segmentation effect of the five algorithms. As can be seen from Figure 4, when the original image is directly segmented by OTSU, the result retains many healthy green leaf areas, so the segmentation effect is not ideal. If the combined genetic algorithm and OTSU segmentation method is used, as can be seen from Figure 4(c), the final effect is similar to direct OTSU segmentation, with little improvement, because both methods are based on threshold segmentation and ignore the important property of colour. Although the ultra-green feature method targets the physiological characteristics of green plants, the results are still not ideal, and some green areas remain. OTSU in the Lab colour space performs relatively well compared with the previous methods, but it tends to connect adjacent lesions, and the final result is not satisfactory. The proposed method combines the advantages of the Lab and HSI colour spaces and takes into account the colour characteristics of the leaves; by combining the images segmented by the two methods, the contours of the segmented lesions are clear, little green background remains, and the segmentation results are satisfactory.
Although the results obtained in this paper are the best, the time taken is slightly longer than that of the other methods because of the larger number of processing steps. The average running time of the segmentation methods described in this study is shown in Table 1. The genetic algorithm is too slow to apply to the segmentation of disease lesions in this article's database. Although the method proposed in this paper sacrifices some running time, the comparison in Figure 4 shows that the final effect is greatly improved.
For current computer hardware, a millisecond-level time increase is completely acceptable and meets practical application requirements.
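For reference, the OTSU thresholding step that several of the compared methods rely on can be written directly from its definition (maximizing between-class variance). This is a plain NumPy sketch with synthetic bimodal data standing in for a real grayscale channel; it is not the paper's implementation.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "grayscale channel": dark lesions (~60) on a bright leaf (~200)
rng = np.random.default_rng(0)
gray = np.concatenate([
    rng.normal(60, 10, 500), rng.normal(200, 10, 1500),
]).clip(0, 255).astype(np.uint8)
t = otsu_threshold(gray)
lesion_mask = gray < t   # pixels below the threshold are treated as lesion
```

As the comparison above suggests, a pure threshold like this ignores colour, which is exactly why the paper augments it with the HSI and Lab colour-space filtering.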
III. DATA AUGMENTATION
Data augmentation is a technique for enlarging training datasets without gathering new images. It alters the original images in some way, using processing techniques such as rotations, flips, zooming, and adding noise, among others. Large training datasets are important in deep learning because they improve the accuracy of the trained model; augmentation also helps avoid overfitting. The downsides of data augmentation include increased training time, transformation computation costs, and higher memory expenses. The dataset is divided into two parts: 80% is used for training and 20% for testing.
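The transformations listed above can be sketched with NumPy alone. This is an illustrative fragment, not the paper's pipeline; the image and the 100-sample index set are synthetic placeholders.

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants: flips, a rotation, and noise."""
    noisy = np.clip(
        image.astype(np.int16) + rng.normal(0, 8, image.shape).astype(np.int16),
        0, 255,
    ).astype(np.uint8)
    return [
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree rotation
        noisy,              # additive Gaussian noise
    ]

rng = np.random.default_rng(42)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
augmented = augment(img, rng)

# 80/20 train/test split over a hypothetical set of 100 sample indices
idx = rng.permutation(100)
train_idx, test_idx = idx[:80], idx[80:]
```

Shuffling before the 80/20 split keeps the training and testing sets representative of the whole dataset.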
IV. RELATED WORK
Many plant diseases have distinct visual symptoms, which can be used to identify and classify them correctly. This article presents a soybean disease classification algorithm that leverages these distinct appearances and advances in computer vision made possible by deep learning.
The deep learning model uses colour images to learn attributes exhibiting distinct patterns that can be distinguished with the help of a convolutional neural network model. The performance of the proposed model is evaluated on the PlantVillage dataset.
The simulation results show that the proposed model performs far better than existing well-known methods in the domain, with a mean classification accuracy and area under the characteristic curve of 95.35% and 94.7%, respectively.
V. WORKING OF GLCM
A. Feature Extraction
The features of the image in the selected cluster are extracted. The cluster images are normally grayscale images, on which the GLCM (grey level co-occurrence matrix) technique is used to analyse texture features. Further analysis is achieved through co-occurrence: pairs of pixels are tallied into a matrix, making it a well-suited tool for texture analysis. The extracted features, computed from the grey co-occurrence matrix, are contrast, correlation, energy, and homogeneity. Contrast differentiates an element from its neighbourhood in the image by intensity variation. In the SGDM, energy is the sum of squared elements, whereas homogeneity measures the closeness of the distribution of elements. Correlation returns a measure of how correlated a pixel is with its neighbour over the whole image.
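A minimal NumPy sketch of the GLCM computation and the four features follows. It uses a single horizontal offset and a coarse grey-level quantization, both common but illustrative choices, and a uniform synthetic patch in place of a real cluster image.

```python
import numpy as np

def glcm_features(gray, levels=8):
    """Compute a horizontal-offset GLCM and the four texture features."""
    # Quantize to a small number of grey levels, as is common for GLCM
    q = (gray.astype(np.float64) / 256 * levels).astype(int)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # Count co-occurrences of pixel pairs at offset (0, 1): right neighbour
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                       # normalize to probabilities
    idx = np.arange(levels)
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    contrast = (p * (ii - jj) ** 2).sum()       # intensity variation
    energy = (p ** 2).sum()                     # sum of squared elements
    homogeneity = (p / (1.0 + np.abs(ii - jj))).sum()
    mu_i, mu_j = (p * ii).sum(), (p * jj).sum()
    sd_i = np.sqrt((p * (ii - mu_i) ** 2).sum())
    sd_j = np.sqrt((p * (jj - mu_j) ** 2).sum())
    correlation = ((p * (ii - mu_i) * (jj - mu_j)).sum()
                   / max(sd_i * sd_j, 1e-12))   # guard zero-variance case
    return contrast, correlation, energy, homogeneity

flat = np.full((16, 16), 100, dtype=np.uint8)   # perfectly uniform texture
contrast, corr, energy, homog = glcm_features(flat)
```

On a perfectly uniform patch all co-occurrences fall in one cell, so contrast is 0 while energy and homogeneity are 1, matching the definitions above.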
In this system the Support Vector Machine (SVM) classification technique is used. SVM uses decision planes that define the decision boundaries and can be applied to both classification and regression: classification chooses an output between classes, while regression predicts real values. Some texture classification problems make use of the SVM classifier. SVM handles nonlinear data by mapping it into a high-dimensional space where it becomes linearly separable, and it selects the hyperplane that maximizes the margin between the different classes. Different kernel methods can be used. A linear classifier is used to examine the hyperplane, and the samples closest to the plane are chosen as support vectors. Multiclass classification uses either the one-against-one or one-against-all approach.
Segmentation is performed, and a region of the image is selected based on the ROI (Region of Interest).
B. Classification of Disease Image
By analysing the ROI region, the classifier detects defects in the leaf image. The computed accuracy is displayed, and the kernel function can also be changed.
The training samples are used in the SVM classifier. A standard SVM solves two-class problems; binary problems can be extended to multiclass SVM with K classes, where K > 2, using one of two approaches: one-against-one or one-against-all. After the training phase, the extracted features were used to classify the database with SVM classification.
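The multiclass SVM workflow above can be sketched with scikit-learn. The three-class "texture feature" data here is entirely synthetic (hypothetical contrast/energy pairs clustered around three centres), chosen only to show the K > 2 case.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 3-class dataset: each class clusters around a point in feature space
rng = np.random.default_rng(0)
centers = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
X = np.vstack([c + rng.normal(0, 0.03, (30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)   # K = 3 classes, K > 2

# SVC trains one-against-one binary classifiers internally;
# decision_function_shape='ovr' exposes one-against-all style scores.
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
pred = clf.predict(centers)    # classify the three cluster centres
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` is how the different kernel methods mentioned above are tried in practice.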
VI. CHALLENGES IN MODEL TRAINING
In this paper, brown spot among soybean diseases is taken as the research object, and the soybean disease is identified by image processing and pattern recognition. Common image segmentation algorithms such as OTSU, the ultra-green feature method, the genetic algorithm, and threshold segmentation on the Lab grayscale map are used to segment the lesions, and the results are refined by median filtering and erosion-dilation. The lesions obtained by OTSU segmentation in the Lab colour space and those obtained by filtering the green region in the HSI colour space are combined as the final segmented spot map. Comparison with the renderings of other segmentation methods verifies the effectiveness of the proposed method. This paper introduces deep learning to deal with the problem of disease classification for large samples of soybeans. Through the structural design of consecutive convolutional layers and the sparse Maxout activation function layer, the entire convolutional neural network not only has strong feature extraction and nonlinear expression capabilities but also avoids becoming too bloated, and the number of parameters is limited to a certain extent.
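The Maxout unit at the heart of the network (Goodfellow et al., 2013, cited below) simply takes the elementwise maximum over k affine transforms of its input. A minimal NumPy sketch, with arbitrary illustrative shapes rather than the paper's Caffe configuration:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: elementwise max over k affine pieces.

    x: (batch, d_in); W: (k, d_in, d_out); b: (k, d_out)
    Returns (batch, d_out).
    """
    z = np.einsum("bi,kio->bko", x, W) + b   # k parallel affine transforms
    return z.max(axis=1)                     # max over the k pieces

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3))
W = rng.normal(size=(2, 3, 5))   # k = 2 linear pieces
b = rng.normal(size=(2, 5))
out = maxout(x, W, b)            # shape (4, 5)
```

With k = 2 and one piece fixed to zero, Maxout reduces to ReLU, which is why it is a strictly more expressive activation; sparsifying the weight groups is what keeps the parameter count manageable, as the conclusion notes.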
REFERENCES
[1] E. Miao, G. Zhou, and S. Zhao, "Research on soybean disease identification method based on deep learning," published 22 August 2022.
[2] W. Zhang, "Identification based on Inception V3," Chinese Journal of Tobacco, vol. 27, no. 5, pp. 61–70, 2021.
[3] M. D. Chodey and C. Noorullah Shariff, "Neural network-based pest detection with K-means segmentation: impact of improved dragonfly algorithm," Journal of Information and Knowledge Management, vol. 20, no. 3, Article ID 2150040, 2021.
[4] G. M. James and S. Sujatha, "Categorising apple fruit diseases employing hybrid neural clustering classifier," Materials Today: Proceedings, no. 8, Article ID 202012139, 2021.
[5] N. Çetin, K. Karaman, E. Beyzi, C. Saglam, and B. Demirel, "Comparative evaluation of some quality characteristics of sunflower oilseeds (Helianthus annuus L.) through machine learning classifiers," Food Analytical Methods, vol. 14, no. 8, pp. 1666–1681, 2021.
[6] Y. Kurmi and S. Gangwar, "A leaf image localization based algorithm for different crops disease classification," Information Processing in Agriculture, 2021.
[7] S. Mishra, R. Sachan, and D. Rajpal, "Deep convolutional neural network based detection system for real-time corn plant disease recognition," Procedia Computer Science, vol. 167, pp. 2003–2010, 2020.
[8] D. A. Kumar, P. S. Chakravarthi, and K. S. Babu, "Multiclass support vector machine based plant leaf diseases identification from colour, texture and shape features," in Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), pp. 1220–1226, IEEE, Tirunelveli, India, August 2020.
[9] J. H. Li, L. J. Lin, and K. Tian, "Improved Faster R-CNN for leaf disease detection of bitter gourd in the field," Chinese Journal of Agricultural Engineering, vol. 36, no. 12, pp. 179–185, 2020.
[10] X. P. Fan, J. P. Zhou, and Y. Xu, "Identification of maize leaf diseases based on improved regional convolution neural network," Journal of South China Agricultural University, vol. 41, no. 6, pp. 82–91, 2020.
[11] D. Oppenheim, G. Shani, O. Erlich, and L. Tsror, "Using deep learning for image-based potato tuber disease detection," Phytopathology, vol. 109, no. 6, pp. 1083–1087, 2019.
[12] V. Mazzia, A. Khaliq, and M. Chiaberge, "Improvement in land cover and crop classification based on temporal features learning from Sentinel-2 data using recurrent-convolutional neural network (R-CNN)," Applied Sciences, vol. 10, no. 1, p. 238, 2019.
[13] K. Golhani, S. K. Balasundram, G. Vadamalai, and B. Pradhan, "Selection of a spectral index for detection of orange spotting disease in oil palm (Elaeis guineensis Jacq.) using red edge and neural network techniques," Journal of the Indian Society of Remote Sensing, vol. 47, no. 4, pp. 639–646, 2019.
[14] T. Zha, X. B. Zhong, and Q. Z. Zhou, "Development status of China's soybean industry and strategies of revitalizing," Soybean Science, vol. 37, no. 3, pp. 458–463, 2018.
[15] R. D. L. Pires, W. E. S. Gonçalves, J. F. Orue et al., "Local descriptors for soybean disease recognition," Computers and Electronics in Agriculture, vol. 125, pp. 48–55, 2016.
[16] Y. Sakai, T. Oda, M. Ikeda, and L. Barolli, "A vegetable category recognition system using deep neural network," in Proceedings of the International Conference on Innovative Mobile & Internet Services in Ubiquitous Computing, pp. 189–192, IEEE, Fukuoka, Japan, July 2016.
[17] L. Q. Niu, X. Z. Chen, and S. N. Zhang, "Model construction and performance analysis for deep consecutive convolutional neural network," Journal of Shenyang University of Technology, vol. 38, no. 6, pp. 662–666, 2016.
[18] V. Chelladurai, K. Karuppiah, D. S. Jayas, P. Fields, and N. White, "Detection of Callosobruchus maculatus (F.) infestation in soybean using soft X-ray and NIR hyperspectral imaging techniques," Journal of Stored Products Research, vol. 57, pp. 43–48, 2014.
[19] S. Bankar, A. Dube, P. Kadam, and S. Deokule, "Plant disease detection techniques using canny edge detection & colour histogram in image processing," International Journal of Computer Science & Information Technology, vol. 5, no. 2, pp. 1165–1168, 2014.
[20] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, "Maxout networks," Computer Science, pp. 1319–1327, 2013.
Copyright © 2023 Sahil Bhujbal, Pradnya Mandale, Vaishnavi Aher, Rushikesh Wable. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET47611
Publish Date : 2022-11-22
ISSN : 2321-9653
Publisher Name : IJRASET