International Journal of Control and Automation, Vol. 13, No. 3, (2020), pp. 293-305

Performance Analysis of Fine-Tuned Convolutional Neural Network Models for Plant Disease Classification

Nilay Ganatra, Atul Patel
Faculty of Computer Science and Applications, Charotar University of Science and Technology (CHARUSAT), Changa
nilayganatra.mca@charusat.ac.in, atulpatel.mca@charusat.ac.in

Abstract

Early identification and detection of plant leaf disease is an essential requirement for sustainable agriculture and optimum yield production. In the field of Artificial Intelligence, Deep Learning has emerged as an effective computing paradigm and shows great potential for solving many computer vision problems. The deep convolutional neural network (CNN) is one of the deep learning architectures that delivers strong results for image recognition and object detection applications. In this research, benchmark deep CNN models are applied to plant leaf disease identification and classification. We have applied and evaluated the performance of VGG 16, Inception V4, ResNet 50 and ResNet 101. The dataset used in the study contains 38 classes and 87,000 images. We have applied transfer learning for training the models and fine-tuned the pre-trained models used. After evaluating the performance, it has been found that ResNet 50 and ResNet 101 exhibit test accuracies of 99.70% and 99.73% respectively, whereas Inception V4 achieved 98.36% and VGG 16 reached 81.63%. Thus, ResNet 50 and ResNet 101 show promising results for plant leaf disease identification and classification.

Keywords: Convolutional Neural Network, Plant Leaf Disease Classification, Transfer Learning

1. Introduction

The normal condition and growth of a plant is interrupted by various plant diseases. Plant diseases are one of the major reasons behind reduced production, which in turn leads to economic losses. For sustainable agriculture and optimum yield production, detection of diseases in plants is an essential requirement, as it can increase the yield by more than 60% of total productivity. The Food and Agriculture Organization (FAO) estimated that 20% to 40% of global food production is affected by pests and diseases, which creates a major hazard to food security [1]. Use of pesticides may protect plants from disease or infection and thus retain yields. However, the usage of pesticides is environmentally harmful and negatively affects biodiversity, including air, water, birds, insects, soil and aquatic life. It also creates risks for human health, with acute and chronic effects. To limit the usage of unsafe substances such as pesticides, knowledge of a field's phytosanitary conditions plays an important role: it helps the farmer carry out the right practice in the affected area at the required time. However, measuring the health of a field requires expertise and is a time-consuming process, and it is not feasible to check the condition of plants many times in a season on farms covering a large geographical area. There are various ways to identify plant pathologies, but most diseases produce symptoms on the visible scale, which can be examined by trained experts. A phytopathologist with good analytical skills can identify the characteristics of disease symptoms [2]. However, this can become difficult even for the phytopathologist when the symptoms shown by disease-affected plants vary. A computerized system that can identify a disease-affected plant from the basic visual symptoms of its appearance makes the disease identification task easier and allows diseases to be identified more accurately.
Recent technical advancements and the availability of low-cost devices for image acquisition have made it possible to gather large numbers of images for image-based diagnosis [3]. However, a digital image contains condensed information that is difficult for a computing device to process directly, and further steps such as pre-processing and segmentation are required to obtain features like color and shape [4, 5]. Advances in the fields of computer vision and artificial intelligence provide a platform through which plant diseases can be identified more precisely, and an opportunity to integrate such technology into precision agriculture. Deep learning, a subfield of artificial intelligence, enables a computer to learn the most appropriate set of features for a problem domain autonomously, without human intervention.

In 1989, a new class of neural network, the Convolutional Neural Network (CNN), arose in response to the challenging nature of machine vision tasks [9]. The CNN has emerged as one of the best learning algorithms for image-based understanding and has shown strong results in various vision tasks such as segmentation, detection and classification [10, 11]. The first CNN, known as LeNet, was introduced in the 1980s [12], while the study and application of neural networks to various problems started in the 1940s [13]. Analysis of plant diseases has been performed using CNNs since their inception [14]. The availability of high-performance computing hardware and advances in learning methods made it feasible to train large-scale deep CNNs in the 2010s. The introduction of AlexNet [15] is considered a major breakthrough in the history of deep learning; it established benchmark classification accuracy over typical machine learning classification approaches in the 2012 ImageNet Large Scale Visual Recognition Challenge (LSVRC) [16]. With the availability of high-performance computing devices, CNN architectures became deeper: VGG-16 contains 16 layers and VGG-19 [17] has 19 layers, while GoogleNet [18] is a 22-layer deep architecture. The winner of LSVRC 2015, ResNet [19], is a 152-layer deep network whose classification accuracy outperformed human-level performance.

However, the depth of CNN architectures, which provides higher accuracy, causes a significant problem of interpretability and makes it difficult to understand the functioning of each hidden layer. It is difficult to judge the contribution of a particular layer to inference and to the CNN's overall approach to disease identification, which makes it difficult to validate the model without knowing its internal data processing. Initially, deep learning was perceived as a black box [20], which limited the usage of CNNs for real-time applications. Recent growth in the field of deep learning has revealed the content of this black box. Researchers have worked on understanding CNNs by exposing the calculation process in a human-understandable way, referred to as visualization. Zeiler and Fergus introduced the multilayer Deconvolutional Network (DeconvNet), well known as ZfNet [21], which provided a way to visualize CNN behavior quantitatively by visualizing the activations at various inner layers. Other studies produced images that maximize the activations at each layer to visualize the features that contribute to decisions [22, 23]. The portion of the input image that is important for classification can be obtained using deconvolution [21, 24], guided backpropagation or class activation mapping [25]. These approaches have been established using CNNs trained on ImageNet, which comprises images of numerous objects from 1000 categories.
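As a concrete illustration of the gradient-based visualization idea cited above (the saliency maps of [22]), the following minimal sketch computes a vanilla-gradient saliency map with TensorFlow/Keras. The model object, input size and preprocessing are assumptions for illustration only and are not taken from the paper.

```python
import tensorflow as tf

def vanilla_gradient_saliency(model, image, class_index):
    """Gradient of the class score w.r.t. the input pixels ([22]-style saliency).

    image: a preprocessed float32 tensor of shape (H, W, 3); the model and its
    preprocessing are hypothetical placeholders used only for this sketch.
    """
    x = tf.expand_dims(tf.convert_to_tensor(image, dtype=tf.float32), axis=0)
    with tf.GradientTape() as tape:
        tape.watch(x)                        # track gradients w.r.t. the input image
        predictions = model(x, training=False)
        class_score = predictions[0, class_index]
    grads = tape.gradient(class_score, x)    # shape (1, H, W, 3)
    # Collapse the channel axis: per-pixel saliency is the maximum absolute gradient.
    return tf.reduce_max(tf.abs(grads[0]), axis=-1)   # shape (H, W)
```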
The plant disease dataset differs from other image datasets in the size and variation of the features that must be extracted for classification. It is difficult to obtain accurate classification results for the disease classification problem with manually crafted features and traditional machine learning algorithms. CNNs eliminate the need for hand-crafted features, which makes plant disease classification models more robust than traditional machine learning models. Moreover, it is possible to visualize the disease features extracted at each layer, which makes the model more interpretable and helps agriculture experts understand the stages of disease classification. This paper aims to evaluate different CNN models for application to plant leaf disease classification. Section 2 presents the background of CNNs, Section 3 reviews related work, Section 4 explains the experimental materials and methods used in the study, and Section 5 discusses the results and outcomes.

2. Background of Convolutional Neural Network (CNN)

In recent times, the CNN has been the most widely used Artificial Intelligence technique for computer vision applications. A CNN has the capability of self-learning from grid-like data and has recently surpassed many machine learning approaches in performance. Because a CNN can generate features and obtain insight into the data, typical machine learning systems also use CNN capabilities for feature generation and classification. The basic components of any CNN architecture are the convolution layer, pooling layer, activation function, batch normalization, dropout and fully connected layer. The typical CNN architecture is a stack of alternating convolution and pooling layers followed by a fully connected layer.

A convolutional layer contains multiple convolutional kernels, where each neuron acts as a kernel. A symmetric kernel turns the convolution operation into a correlation operation [26]. The kernel divides the image into small chunks, known as receptive fields; these small blocks are helpful in extracting feature motifs. The kernel convolves over the image by multiplying a set of weights with the specific elements of the receptive field. Feature maps result from the convolution operation, and the same feature can occur at various places in the image. Once a feature has been detected, its exact location becomes less important as long as its approximate position relative to other features is preserved [27]. Pooling, also known as down-sampling, is an internal operation that combines the associated data inside the region of a receptive field and outputs the dominant response within that region. Reducing the feature-map size to a constant dimension not only controls the complexity of the network but also increases generalization by reducing overfitting [28]. Various pooling formulations are used in CNNs, such as average, max, overlapping and L2 pooling. The activation function is a node that can be placed at the end of the network or between layers. It decides whether a neuron fires or not; it is therefore the decision function that helps in learning complex patterns. It applies a nonlinear transformation to the input signal, and the transformed output is sent to the next layer of the network. The activation functions most commonly used in the literature, such as ReLU, leaky ReLU, sigmoid, maxout and SWISH, impart non-linear combinations of different features [29]. Another component of the CNN architecture is batch normalization, which is used to standardize the inputs to layers in a deep network; it makes neural network training faster and can also improve the performance of the model. The term dropout in a CNN refers to dropping neurons, both visible and hidden, in a network [30]. It adds regularization to the network to improve generalization by randomly dropping units or connections with some probability. In CNNs, overfitting sometimes arises because of co-adaptation of multiple connections that learn a non-linear relation; random dropping produces several thinned network architectures, and at the end a network with small weights is selected. The final layer of the CNN architecture is the fully connected layer, in which the neurons are fully connected to all activations from the previous layer. In contrast to convolution and pooling, it is considered a global operation [31]. In some networks, however, a global pooling layer substitutes for the fully connected layer. To optimize the performance of a CNN along with its mapping functions, batch normalization and dropout regulatory units are incorporated into the CNN. The placement of the components of a CNN model plays a key role in designing new architectures with improved performance. The following figure presents the classic architecture of a CNN.

Figure. A classic Convolutional Neural Network (CNN) architecture
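To make the building blocks described above concrete, the following minimal Keras sketch stacks convolution, batch normalization, pooling, dropout and a fully connected softmax layer. The filter counts, input size and layer arrangement are illustrative assumptions and are not the architectures evaluated later in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_classic_cnn(input_shape=(224, 224, 3), num_classes=38):
    """A minimal 'classic' CNN stack, for illustration only (hyperparameters assumed)."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),  # convolution + ReLU activation
        layers.BatchNormalization(),                               # batch normalization
        layers.MaxPooling2D(2),                                    # pooling / down-sampling
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dropout(0.5),                                       # dropout regularization
        layers.Dense(num_classes, activation="softmax"),           # fully connected classifier
    ])
    return model
```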
3. Related Work

Various researchers have used CNN-based models to solve complex tasks. Sibiya M. et al. [32] used a CNN for disease classification in maize plants; they used histogram techniques to show the impact of the model and achieved an overall model accuracy of 92.85%. Zhang K. et al. [33] applied the CNN architectures AlexNet, ResNet and GoogleNet to identify tomato leaf diseases; ResNet outperformed the other networks with the highest accuracy of 92.28%. In the paper presented by Amara J. et al. [34], the LeNet architecture was used to detect banana leaf diseases; the authors used classification accuracy and F1-score to evaluate the model on grayscale and color images. Konstantinos P. Ferentinos [35] compared the leaf disease classification accuracy of the AlexNet, GoogleNet and VGG CNN architectures; VGG outperformed all the other networks, with plant-disease performance reaching 99.53%. Türkoglu M. et al. [36] classified different plant diseases using different classifiers. They considered KNN, SVM and ELM combined with features obtained from state-of-the-art deep learning models, including ResNet-50, ResNet-101, Inception-ResNet-v2 and Inception-v3 among others; ResNet-50 with SVM provided the best results as evaluated using different performance metrics. Amanda Ramcharan et al. [37] used Inception-V3 for the detection of cassava disease and achieved an average accuracy of around 95% over six disease classes.
Fujita E. et al. [38] used two different variations of a CNN and achieved 82.3% accuracy in the classification of cucumber plant diseases. Tomato disease classification using a CNN was performed by Yamamoto K. et al. [39] using high-resolution, super-resolution and low-resolution images to evaluate the accuracy of super-resolution against other methods; the results indicated that the super-resolution method outperformed conventional methods by a large margin in terms of accuracy. Durmus H. et al. [40] presented classification of tomato plant diseases using the pre-trained networks AlexNet and SqueezeNet V1.1; AlexNet performed best, with an accuracy of 95.65% in disease classification. Edna Chebet Too et al. [41] presented a comparison of the deep learning networks VGG 16, Inception-V4, ResNet-50, ResNet-101, ResNet-152 and DenseNet-121 for leaf disease classification; their results showed that DenseNet requires fewer parameters compared to the other models and achieved an accuracy of 99.75%. Rangarajan A.K. et al. [42] performed classification of tomato leaf diseases using the AlexNet and VGG-16 deep learning architectures; AlexNet provided the best accuracy of 96.38%, with a mini-batch size of 32 and a learning rate of 40. Brahimi M. et al. [43] presented saliency maps for visualizing plant disease symptoms; the accuracy achieved by the proposed architecture was 99.76%. Sladojevic S. et al. [44] identified 13 different types of disease with the help of the CaffeNet CNN model and obtained 96.30% classification accuracy, which was better than typical machine learning algorithms such as SVM. Sharada P. Mohanty et al. [45] compared the performance of two CNN architectures, AlexNet and GoogleNet, on the PlantVillage dataset of leaf diseases. The performance measures considered were precision, F1-score, recall and accuracy. They implemented three scenarios, i.e. color, grayscale and segmented images, for measuring CNN performance, and found that GoogleNet outperformed AlexNet.

Based on the review conducted, it has been found that deep neural networks have been applied successfully for end-to-end learning in many domains. They provide a mapping from an image of a diseased leaf (input) to a crop-disease pair (output). The major challenge associated with creating a deep neural network lies in the structure of the network, where it is essential to correctly map nodes and edge weights from the input to the output. Deep neural network training is performed by fine-tuning the network parameters through a process that improves the mapping between the input and output layers during training; this challenging process has improved dramatically thanks to various conceptual and engineering breakthroughs in recent times. To develop an accurate deep neural network-based plant disease diagnosis model, a large and verified dataset of images of healthy and diseased plants is needed. Such large datasets were not available until very recently, and even datasets with few images were not freely available. In 2015, however, the PlantVillage project began providing thousands of images of diseased and healthy crop plants openly and freely, and many datasets have since been introduced using PlantVillage as a base.

4. Experimental Materials and Methods

In this paper, various state-of-the-art CNN models are experimented with and evaluated by fine-tuning them for plant disease identification and classification. The models evaluated include Inception V4, VGG-16, ResNet 50 and ResNet 101.
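For orientation, the sketch below shows how three of these backbones are exposed by keras.applications (the paper states that Keras was used, but the exact loading code is not given, so this is an assumption). Inception V4 is not bundled with keras.applications; a third-party implementation, or the related InceptionResNetV2, would have to stand in for it.

```python
from tensorflow import keras

# Pre-trained backbones with ImageNet weights and the original 1000-way
# classification head removed, so a new 38-class head can be attached later.
backbones = {
    "VGG 16":     keras.applications.VGG16(weights="imagenet", include_top=False),
    "ResNet 50":  keras.applications.ResNet50(weights="imagenet", include_top=False),
    "ResNet 101": keras.applications.ResNet101(weights="imagenet", include_top=False),
    # "Inception V4" has no keras.applications constructor; an external
    # implementation would be needed to reproduce that part of the study.
}

for name, model in backbones.items():
    print(name, "parameters:", model.count_params())
```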
4.1 Dataset Description

For the experiment, we have considered 87,000 images of diseased and healthy plant leaves, divided into 38 labeled classes. Each class label represents a crop-disease pair, and we have attempted to create a model using transfer learning that predicts the crop-disease pair from only an image of the leaf. The dataset used in this paper was created using offline augmentation from the original PlantVillage dataset, which contains approximately 55,000 images. The dataset is divided into training and validation sets in an 80/20 ratio. Moreover, for testing, a fresh and unseen set of 33 images is used to evaluate model predictions. We have resized the images according to the requirements for fine-tuning each model, and both prediction and optimization are performed on these downscaled images. The following figure presents the plant leaf disease classes and the number of images available for each disease in the dataset.

Figure-1. Dataset description
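A minimal data-loading sketch consistent with this description is given below. The directory names, image size and batching are assumptions (the paper does not specify them); it assumes the augmented images are stored in one subdirectory per crop-disease class.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed; each backbone may require a different input size
BATCH_SIZE = 32

# Hypothetical directory layout: one subfolder per crop-disease class (38 in total).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "plant_leaf_dataset/train",
    label_mode="categorical",   # one-hot labels for categorical cross-entropy
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    shuffle=True,
)
valid_ds = tf.keras.utils.image_dataset_from_directory(
    "plant_leaf_dataset/valid",
    label_mode="categorical",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    shuffle=False,
)

num_classes = len(train_ds.class_names)   # expected to be 38
```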
4.2 CNN Models for Image Classification

In this study, we have considered the benchmark CNN models VGG Net, ResNet and Inception V4.

VGG Net. The application of CNNs to computer vision and image recognition tasks fostered the development of architectural design. Simonyan and Zisserman [17] introduced a simple and effective CNN architecture for the ILSVRC-2014 challenge, securing second place with a 7.5% top-5 error rate on the validation set. The architecture follows a modular layer pattern. Compared to its predecessor architectures AlexNet and ZfNet, it is as deep as 19 layers, matching depth to the representational capacity of the network. VGG replaced the 11x11 and 5x5 filters with stacks of 3x3 filter layers, as filter size affects the overall performance of the network; the small filter size brings the advantage of low computational complexity because it reduces the number of parameters. To regulate the complexity of the network, 1x1 convolutions are placed between the convolutional layers, which help in learning linear combinations of the resulting feature maps. Max-pooling is placed after the convolutional layers for network tuning, and padding is performed to preserve the spatial resolution. In this study, we have replaced the original softmax layer of the VGG Net to fine-tune the VGG16 network; multiclass classification is essential for leaf disease classification, as the dataset contains 38 class labels. The pre-trained VGG16 with ImageNet weights is used in the experiment.

ResNet. He et al. [46] devised ResNet in 2015 for the COCO 2015 and ILSVRC 2015 classification challenges; ResNet won the challenge with an error rate of 3.57%. The architecture was introduced to address the inability of stacked nonlinear layers to learn identity mappings and the resulting degradation problem. ResNet transformed CNN architecture with the concept of residual learning and provided an efficient methodology for training deep neural networks. The ResNet architecture is 20 times deeper than AlexNet and 8 times deeper than VGG Net, yet the proposed 152-layer deep CNN has lower computational complexity than those networks [19]. A key reason ResNet can be so deep is its use of global average pooling rather than fully connected layers. In our experiment, a ResNet model with 50 layers is loaded with pre-trained ImageNet weights, and the final softmax layer is replaced with a customized layer for plant disease identification over 38 classes.

Inception V4. Szegedy et al. [47] introduced the inception concept based on the GoogleNet architecture; subsequent releases of the GoogleNet architecture have been named Inception-vN, where N refers to the version number. Inception-V3 reduced the computational cost without disturbing generalization: asymmetric and small filters (1x7 and 1x5) were used in place of large filters (5x5 and 7x7), and 1x1 convolutions were used as blocks before the large filters. The traditional convolution operation, based on cross-channel correlation, is thereby changed by placing 1x1 convolutions before the larger filters. The convolutions in the network vary in size among 1x1, 3x3 and 5x5. Szegedy et al. [47] used 1x1 convolution operations to map the input data into three or four discrete spaces that are smaller than the original input space, and then used 3x3 or 5x5 convolutions to map all correlations in these smaller 3D spaces. As with the other models, the softmax layer of this model was replaced with a customized layer for disease classification.

4.3 Fine Tuning with Transfer Learning

CNN models require training with huge datasets. As CNN models are based on the artificial neural network architecture, the backpropagation algorithm is normally used as the learning algorithm; its goal is to minimize a cost function that measures the total error generated by the model. A CNN follows the supervised learning method, in which a large labelled dataset is used during training. There are two ways to train a CNN model: training the model from scratch, or training the model with transfer learning. Significant effort and time are required to train a model from scratch, so transfer learning is the usual approach for training CNN models. Transfer learning is the process of applying the knowledge obtained while solving one type of problem to a different but related problem, and it requires pre-trained models [48]. A pre-trained model is a network that has already been trained on a large dataset (e.g. ImageNet, COCO, PASCAL) using high-performance computing resources such as GPUs, taking hours to complete the learning. The weights and learning of the pre-trained model are transferred into the CNN model, and fine-tuning is performed by freezing some layers. Fine-tuning is the process of giving a new dataset to the pre-trained model, where only the final layers of the pre-trained model are changed; it is much faster and more accurate than training the whole model from scratch [49]. In this experiment, pre-trained models trained on the ImageNet dataset have been used; this dataset contains approximately 1.2 million images in 1000 class categories. The plant leaf dataset used in this research contains 87,000 images categorized into 38 classes. As this is considered a very small dataset for deep learning problems, we have used the ImageNet weights. Fine-tuning has been performed by removing the final layer and introducing a new fully-connected softmax layer for the VGG16, InceptionV4, ResNet50 and ResNet101 models.
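A hedged sketch of this head-replacement scheme in Keras is shown below, using ResNet50 as the example backbone. The choice of which layers to freeze and the backbone-specific preprocessing are assumptions rather than details reported in the paper, and the same pattern for Inception V4 would need an external implementation.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 38

def build_finetune_model(img_size=(224, 224)):
    # Backbone pre-trained on ImageNet, with its original 1000-way classifier removed.
    base = keras.applications.ResNet50(
        weights="imagenet",
        include_top=False,
        input_shape=img_size + (3,),
    )
    base.trainable = False   # freeze the pre-trained layers (assumed freezing policy)

    inputs = keras.Input(shape=img_size + (3,))
    x = keras.applications.resnet.preprocess_input(inputs)  # backbone-specific preprocessing
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    # New fully connected softmax layer for the 38 crop-disease classes.
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return keras.Model(inputs, outputs)
```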
4.4 Experiment Setup

We have evaluated the performance of the models on the Google Colab GPU. To carry out the experiment, we have used standard software libraries, including Python, Keras, TensorFlow, Scikit-learn, Matplotlib and NumPy. Python is a widely used scripting language for data science, machine learning and deep learning tasks; it is a high-level, open-source language with good community support, and a variety of libraries and frameworks are available in Python for implementing deep learning algorithms. We have used the Python libraries NumPy, Matplotlib and Scikit-learn, and the frameworks Keras and TensorFlow, for conducting the experiment. Keras is a high-level API that uses TensorFlow or Theano as a backend; it is very simple, comes with very good documentation and learning resources, and provides a convenient way to build CNN models. TensorFlow is an open-source library that offers both high-level and low-level APIs. It contains a vast set of functions that support all types of numeric calculations used in machine learning problems; it is a full-fledged framework that works on multidimensional arrays called tensors and offers multiple levels of abstraction while developing and training a model. For data handling, plotting and visualization, Python provides NumPy and Matplotlib. NumPy stands for numeric Python and offers different types for storing values. Matplotlib offers a wide set of functions for visualizing data in different forms, with export and embedding to different file formats as well as interactive environments. Scikit-learn (sklearn) is an open-source library for performing machine learning tasks; it uses NumPy extensively for linear algebra and array operations and provides support for various algorithms such as support vector machines, random forests and k-nearest neighbors.

5. Results and Discussion

5.1 Training

To evaluate the models, the accuracy metric, categorical cross-entropy loss, mean squared error and mean absolute error are considered during the experiment. Standardized hyperparameters, listed in Table 1, are used for all the networks: epochs, batch size, momentum and initial learning rate.

Table 1. Training parameter values for the CNN models

Parameter               Value
Epochs                  30
Batch size              32
Momentum                0.9
Initial learning rate   1e-4
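A hedged training sketch using the Table 1 values is given below. The paper does not name the optimizer, so SGD with momentum is assumed here because a momentum value is reported; the model and datasets reuse the hypothetical helpers from the earlier sketches, and the regression-style metrics mirror the metrics listed above.

```python
from tensorflow import keras

model = build_finetune_model()   # hypothetical helper from the Section 4.3 sketch

# Optimizer choice is an assumption: SGD with the momentum and initial
# learning rate from Table 1 (the paper does not state the optimizer used).
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),
    loss="categorical_crossentropy",
    metrics=[
        "accuracy",
        keras.metrics.MeanSquaredError(),
        keras.metrics.MeanAbsoluteError(),
    ],
)

history = model.fit(
    train_ds,                  # datasets from the Section 4.1 sketch
    validation_data=valid_ds,
    epochs=30,                 # batch size (32) is already set when the datasets are built
)

val_metrics = model.evaluate(valid_ds)   # overall loss and accuracy scores
```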
5.2 Training and Validation Losses

The aim of good training is to decrease the loss of a CNN model by optimizing its weights. Loss is the error made by a model in prediction; the loss function compares the target value with the value predicted by the model to calculate the total loss. Estimating the loss of the model allows the weights to be updated in the next evaluation and thus reduces the loss. Normally, mean squared error and cross-entropy are used to measure the loss. The general criterion for stopping the training process is decreasing loss together with increasing accuracy, and both can be measured during the training and validation processes. The accuracy and loss values are plotted graphically and used together to determine the performance of a CNN model. Here, line plots show the loss and accuracy over the epochs for both training and validation. Figures 3, 4, 5 and 6 present the training and validation accuracy and loss of the CNN models.

Figure-3. Inception V4 model training and validation accuracy and loss
Figure-4. VGG 16 model training and validation accuracy and loss
Figure-5. ResNet 50 model training and validation accuracy and loss
Figure-6. ResNet 101 model training and validation accuracy and loss
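Curves of this kind can be drawn directly from the History object returned by model.fit; the sketch below is illustrative and assumes the metric names used in the training sketch above.

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot training/validation accuracy and loss over the epochs (Figures 3-6 style)."""
    epochs = range(1, len(history.history["loss"]) + 1)
    fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(10, 4))

    ax_acc.plot(epochs, history.history["accuracy"], label="training accuracy")
    ax_acc.plot(epochs, history.history["val_accuracy"], label="validation accuracy")
    ax_acc.set_xlabel("epoch")
    ax_acc.set_ylabel("accuracy")
    ax_acc.legend()

    ax_loss.plot(epochs, history.history["loss"], label="training loss")
    ax_loss.plot(epochs, history.history["val_loss"], label="validation loss")
    ax_loss.set_xlabel("epoch")
    ax_loss.set_ylabel("loss")
    ax_loss.legend()

    fig.tight_layout()
    plt.show()

plot_history(history)   # 'history' comes from the training sketch above
```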
5.3 Discussion

To determine the performance of the models, the overall loss score and accuracy computed on the test dataset are used. The results of the experiments are presented in Table 2.

Table 2. Training, validation and testing accuracy of the fine-tuned models

Model          Params    Accuracy       Accuracy         Accuracy      Loss         Loss           Loss
                         (Training, %)  (Validation, %)  (Testing, %)  (Training)   (Validation)   (Test)
Inception V4   41.2 M    99.65          98.30            98.36         0.0160       0.0643         0.674
VGG 16         119.6 M   83.43          82.30            81.63         0.6089       0.6978         0.7021
ResNet 50      23.6 M    99.85          99.76            99.70         6.436e-04    0.0210         0.0317
ResNet 101     42.5 M    99.87          99.80            99.73         5.156e-04    0.0208         0.0300

In this paper, a performance comparison of various state-of-the-art deep convolutional neural networks for plant disease classification was carried out on the image dataset. We have fine-tuned the VGG 16, ResNet 50, ResNet 101 and Inception V4 models. Training and fine-tuning of the various deep learning models was performed in the experimentation, and the results are presented in Figures 3-6. Models such as ResNet 50 and ResNet 101 performed well with both fewer and larger numbers of iterations. For the VGG 16 model, there is a significant increase in accuracy and reduction in loss only after a certain number of iterations. Overall, ResNet 101 achieved the highest accuracy among the models while offering the lowest log loss, whereas VGG 16 offered low accuracy compared with the other deep convolutional architectures. Deep learning is the dominant branch of machine learning for computer vision problems, and recent developments in image processing and deep learning provide the opportunity to solve the problem of image-based plant disease classification more efficiently. As shown in this work, deeper networks provided more accurate results and were efficient in training. However, increasing the depth introduces other challenges such as internal covariate shift, accuracy degradation, vanishing gradients and overfitting, and deep networks are computationally more costly to train. Various solutions are available to deal with such problems, such as transfer learning, batch normalization, skip connections, dropout and optimization methods.

6. Conclusion

In this paper, image-based plant disease classification is performed by fine-tuning various convolutional neural networks. Architectures including VGG 16, Inception V4, ResNet 50 and ResNet 101 were evaluated and compared. As shown in the experiment, ResNet 50 and ResNet 101 provided more accurate classification within a certain number of epochs, without overfitting or performance deterioration. ResNet 50 and ResNet 101 exhibit test accuracies of 99.70% and 99.73% respectively, and ResNet 50 requires fewer parameters and less time to obtain the classification results. Therefore, ResNet 50 and ResNet 101 are good architectures for solving the problem of image-based plant disease detection and classification. Other deep learning architectures can also be fine-tuned and applied to plant disease classification using transfer learning.

References

[1] Food and Agriculture Organization of the United Nations, International Plant Protection Convention, 2017.
[2] M. B. Riley, M. R. Williamson, and O. Maloy, "Plant disease diagnosis. The Plant Health Instructor," 2002.
[3] J. S. West, C. Bravo, R. Oberti, D. Lemaire, D. Moshou, and H. A. McCartney, "The potential of optical canopy measurement for targeted control of field crop diseases," Annual Review of Phytopathology, vol. 41, pp. 593-614, 2003.
[4] A. Singh, B. Ganapathysubramanian, A. K. Singh, and S. Sarkar, "Machine learning for high-throughput stress phenotyping in plants," Trends in Plant Science, vol. 21, no. 2, pp. 110-124, 2016.
[5] A. Johannes, A. Picon, A. Alvarez-Gila et al., "Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case," Computers and Electronics in Agriculture, vol. 138, pp. 200-209, 2017.
[6] Y. LeCun et al., "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541-551, 1989.
[7] X. Liu, Z. Deng, and Y. Yang, "Recent progress in semantic image segmentation," Artificial Intelligence Review, vol. 52, no. 2, pp. 1089-1106, 2019.
[8] D. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, "Deep neural networks segment neuronal membranes in electron microscopy images," in Advances in Neural Information Processing Systems, 2012, pp. 2843-2851.
[9] Y. LeCun et al., "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541-551, 1989.
[10] X. Liu, Z. Deng, and Y. Yang, "Recent progress in semantic image segmentation," Artificial Intelligence Review, vol. 52, no. 2, pp. 1089-1106, 2019.
[11] D. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, "Deep neural networks segment neuronal membranes in electron microscopy images," in Advances in Neural Information Processing Systems, 2012, pp. 2843-2851.
[12] Y. LeCun, B. Boser, J. S. Denker et al., "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541-551, 1989.
[13] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115-133, 1943.
[14] M. Oide, S. Ninomiya, and N. Takahashi, "Perceptron neural network to evaluate soybean plant shape," in Proceedings of the 1995 International Conference on Neural Networks (ICNN), 1995, pp. 560-563.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017.
[16] O. Russakovsky, J. Deng, H. Su et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[17] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," https://arxiv.org/abs/1409.1556, 2014.
[18] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1-9.
[19] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
[20] G. B. Goh, C. Siegel, A. Vishnu, N. Hodas, and N. Baker, "How much chemistry does a deep neural network need to know to make accurate predictions?" in Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 2018, pp. 1340-1349.
[21] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in Proceedings of the 2014 European Conference on Computer Vision (ECCV), 2014.
[22] K. Simonyan, A. Vedaldi, and A. Zisserman, "Deep inside convolutional networks: visualising image classification models and saliency maps," https://arxiv.org/abs/1312.6034v2, 2013.
[23] J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson, "Understanding neural networks through deep visualization," in Proceedings of the 2015 ICML Workshop on Deep Learning, 2015.
[24] C. Gan, N. Wang, Y. Yang, D.-Y. Yeung, and A. G. Hauptmann, "DevNet: a deep event network for multimedia event detection and evidence recounting," in Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 2568-2577.
[25] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning deep features for discriminative localization," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2921-2929.
[26] I. Goodfellow, Y. Bengio, and A. Courville, "Deep learning," Nature Methods, vol. 13, no. 1, p. 35, 2017.
[27] J. Bouvrie, "Notes on convolutional neural networks," 2006.
[28] C.-Y. Lee, P. W. Gallagher, and Z. Tu, "Generalizing pooling functions in convolutional neural networks: mixed, gated, and tree," in Artificial Intelligence and Statistics, 2016, pp. 464-472.
[29] A. Khan, A. Sohail, U. Zahoora, and A. Saeed, "A survey of the recent architectures of deep convolutional neural networks," Artificial Intelligence Review, 2019, doi: 10.1007/s10462-020-09825-6.
[30] S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," 2015.
[31] M. Lin, Q. Chen, and S. Yan, "Network in network," pp. 1-10, 2013.
[32] M. Sibiya and M. Sumbwanyambe, "A computational procedure for the recognition and classification of maize leaf diseases out of healthy leaves using convolutional neural networks," AgriEngineering, vol. 1, pp. 119-131, 2019.
[33] K. Zhang, Q. Wu, A. Liu, and X. Meng, "Can deep learning identify tomato leaf disease?" Advances in Multimedia, vol. 2018, 2018.
[34] J. Amara, B. Bouaziz, and A. Algergawy, "A deep learning-based approach for banana leaf diseases classification," in Proceedings of the BTW (Workshops), Stuttgart, Germany, 6-10 March 2017, pp. 79-88.
[35] K. P. Ferentinos, "Deep learning models for plant disease detection and diagnosis," Computers and Electronics in Agriculture, vol. 145, pp. 311-318, 2018.
[36] M. Türkoğlu and D. Hanbay, "Plant disease and pest detection using deep learning-based features," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 27, pp. 1636-1651, 2019.
[37] A. Ramcharan, K. Baranowski, P. McCloskey, B. Ahmed, J. Legg, and D. P. Hughes, "Deep learning for image-based cassava disease detection," Frontiers in Plant Science, vol. 8, 1852, 2017.
[38] E. Fujita, Y. Kawasaki, H. Uga, S. Kagiwada, and H. Iyatomi, "Basic investigation on a robust and practical plant diagnostic system," in Proceedings of the 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA, USA, 18-20 December 2016, pp. 989-992.
[39] K. Yamamoto, T. Togami, and N. Yamaguchi, "Super-resolution of plant disease images for the acceleration of image-based phenotyping and vigor diagnosis in agriculture," Sensors, vol. 17, 2557, 2017.
[40] H. Durmuş, E. O. Güneş, and M. Kırcı, "Disease detection on the leaves of the tomato plants by using deep learning," in Proceedings of the 2017 6th International Conference on Agro-Geoinformatics, Fairfax, VA, USA, 7-10 August 2017, pp. 1-5.
[41] E. C. Too, L. Yujian, S. Njuki, and L. Yingchun, "A comparative study of fine-tuning deep learning models for plant disease identification," Computers and Electronics in Agriculture, vol. 161, pp. 272-279, 2019.
[42] A. K. Rangarajan, R. Purushothaman, and A. Ramesh, "Tomato crop disease classification using pre-trained deep learning algorithm," Procedia Computer Science, vol. 133, pp. 1040-1047, 2018.
[43] M. Brahimi, M. Arsenovic, S. Laraba, S. Sladojevic, K. Boukhalfa, and A. Moussaoui, "Deep learning for plant diseases: detection and saliency map visualisation," in Human and Machine Learning, Springer, Berlin, Germany, 2018, pp. 93-117.
[44] S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk, and D. Stefanovic, "Deep neural networks based recognition of plant diseases by leaf image classification," Computational Intelligence and Neuroscience, vol. 2016, 2016.
[45] S. P. Mohanty, D. P. Hughes, and M. Salathé, "Using deep learning for image-based plant disease detection," Frontiers in Plant Science, vol. 7, 1419, 2016.
[46] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 770-778.
[47] C. Szegedy, S. Ioffe, and V. Vanhoucke, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," arXiv preprint arXiv:1602.07261v2, 2016.
[48] S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, pp. 1345-1359, 2010.
[49] S. P. Mohanty, D. P. Hughes, and M. Salathé, "Using deep learning for image-based plant disease detection," Frontiers in Plant Science, September 2016, pp. 1-7.