Novel YOLOv5 Model for Automatic Detection of Cowpea Leaves: Smart Agriculture

  • Vijaya Choudhary
  • Paramita Guha
  • Sunita Mishra
Keywords: Supervised learning, Precision Agriculture (PA), Cowpea leaves, classification, YOLO

Abstract

Implementing artificial intelligence, specifically deep learning algorithms, to enhance agricultural productivity is a promising direction, especially in a country like India where agriculture is a crucial sector. TensorFlow and Keras provide a solid foundation for this purpose, given their popularity and extensive documentation. Deep learning-based identification and classification of cowpea leaves can streamline various agricultural processes, such as plant health monitoring, pest detection, and yield estimation. This work applies YOLOv5, a CNN-based architecture, to the binary classification of cowpea leaves against other leaves such as mango. Transfer learning further optimizes the model by leveraging pre-trained weights from similar tasks, significantly reducing the computational resources and training time required. Robust data collection and preprocessing are essential, since the quality of the input data strongly influences the performance of deep learning models, and data augmentation techniques are incorporated to further improve the model's generalization capability. Continued research and development in this area can lead to significant advances in agricultural practice, ultimately benefiting farmers and contributing to food security.
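As an illustration of the transfer-learning setup described in the abstract, the sketch below loads a COCO-pretrained YOLOv5 checkpoint and notes how fine-tuning on a two-class leaf dataset would typically be launched with the ultralytics/yolov5 repository. The image path, dataset file name (cowpea.yaml), and hyperparameters are illustrative assumptions, not values taken from the paper.

import torch

# Minimal sketch (assumed setup): start from YOLOv5-small weights pretrained
# on COCO, the usual transfer-learning starting point.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run inference on a sample leaf image; the path is hypothetical.
results = model('data/images/cowpea_leaf_001.jpg')
results.print()  # prints detected boxes, confidences, and class labels

# Fine-tuning on a two-class dataset (cowpea vs. mango leaves) is normally
# launched through the repository's training script, pointing it at a dataset
# YAML that lists the classes; augmentation (flips, mosaic, HSV jitter) is
# controlled by the accompanying hyperparameter file, for example:
#   python train.py --img 640 --batch 16 --epochs 100 \
#       --data cowpea.yaml --weights yolov5s.pt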

Published
2024-04-30
How to Cite
Choudhary, V., Guha, P., & Mishra, S. (2024). Novel YOLOv5 Model for Automatic Detection of Cowpea Leaves: Smart Agriculture. Asian Journal For Convergence In Technology (AJCT) ISSN -2350-1146, 10(1), 49-53. https://doi.org/10.33130/AJCT.2024v10i01.010
