{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T15:20:20Z","timestamp":1777476020069,"version":"3.51.4"},"reference-count":40,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2019,7,19]],"date-time":"2019-07-19T00:00:00Z","timestamp":1563494400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Pests and diseases can cause severe damage to citrus fruits. Farmers used to rely on experienced experts to recognize them, which is a time-consuming and costly process. With the popularity of image sensors and the development of computer vision technology, using convolutional neural network (CNN) models to identify pests and diseases has become a recent trend in the field of agriculture. However, many researchers apply models pre-trained on ImageNet to different recognition tasks without considering the scale of their own datasets, resulting in a waste of computational resources. In this paper, a simple but effective CNN model was developed based on our image dataset. The proposed network was designed from the aspect of parameter efficiency. To achieve this goal, the complexity of cross-channel operation was increased and the frequency of feature reuse was adapted to network depth. Experimental results showed that Weakly DenseNet-16 achieved the highest classification accuracy with fewer parameters. 
Because this network is lightweight, it can be used in mobile devices.<\/jats:p>","DOI":"10.3390\/s19143195","type":"journal-article","created":{"date-parts":[[2019,7,22]],"date-time":"2019-07-22T02:55:37Z","timestamp":1563764137000},"page":"3195","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":78,"title":["Citrus Pests and Diseases Recognition Model Using Weakly Dense Connected Convolution Network"],"prefix":"10.3390","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8732-1111","authenticated-orcid":false,"given":"Shuli","family":"Xing","sequence":"first","affiliation":[{"name":"Center for Advanced Image and Information Technology, School of Electronics &amp; Information Engineering, Chon Buk National University, Jeonju, Chon Buk 54896, Korea"}]},{"given":"Marely","family":"Lee","sequence":"additional","affiliation":[{"name":"Center for Advanced Image and Information Technology, School of Electronics &amp; Information Engineering, Chon Buk National University, Jeonju, Chon Buk 54896, Korea"}]},{"given":"Keun-kwang","family":"Lee","sequence":"additional","affiliation":[{"name":"Department of Beauty Arts, Koguryeo College, Naju 520-930, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2019,7,19]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"436","DOI":"10.1038\/nature14539","article-title":"Deep learning","volume":"521","author":"LeCun","year":"2015","journal-title":"Nature"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"518","DOI":"10.1016\/j.compag.2019.01.034","article-title":"PD2SE-Net: Computer-assisted plant disease diagnosis and severity estimation network","volume":"157","author":"Liang","year":"2019","journal-title":"Comput. Electron. 
Agric."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"351","DOI":"10.1016\/j.compag.2017.08.005","article-title":"Pest identification via deep residual learning in complex background","volume":"141","author":"Cheng","year":"2017","journal-title":"Comput. Electron. Agric."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"319","DOI":"10.1016\/j.compag.2017.11.039","article-title":"Detection of stored-grain insects using deep learning","volume":"145","author":"Shen","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks","volume":"39","author":"Ren","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2015). Rethinking the inception architecture for computer vision. arXiv.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Jeong, J., Park, H., and Kwak, N. (2017). Enhancement of SSD by concatenating feature maps for object detection. arXiv.","DOI":"10.5244\/C.31.76"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"106","DOI":"10.1016\/j.biosystemseng.2019.01.003","article-title":"Detection of sick broilers by digital image processing and deep learning","volume":"179","author":"Zhuang","year":"2019","journal-title":"Biosyst. Eng."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Li, F.F. (2009, January 20\u201325). ImageNet: A large-scale hierarchical image database. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_10","unstructured":"Reyes, A.K., Caicedo, J.C., and Camargo, J.E. 
(2015, January 8\u201311). Fine\u2014Tuning Deep Convolutional Networks for Plant Recognition. Proceedings of the CLEF (Working Notes), Toulouse, France."},{"key":"ref_11","unstructured":"Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H.P. (2016). Pruning Filters for Efficient ConvNets. arXiv."},{"key":"ref_12","unstructured":"Molchanov, P., Tyree, S., Karras, T., Aila, T., and Kautz, J. (2016). Pruning Convolutional Neural Networks for Resource Efficient Inference. arXiv."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., Albanie, S., Sun, G., and Wu, E. (2017). Squeeze-and-Excitation Networks. arXiv.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.Y., and Kweon, I.S. (2018). CBAM: Convolutional Block Attention Module. arXiv.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"ref_15","first-page":"1929","article-title":"Dropout: A simple way to prevent neural networks from overfitting","volume":"15","author":"Srivastava","year":"2014","journal-title":"J. Mach. Learn. Res."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Huang, G., Sun, Y., Liu, Z., Sedra, D., and Weinberger, K. (2016). Deep Networks with Stochastic Depth. arXiv.","DOI":"10.1007\/978-3-319-46493-0_39"},{"key":"ref_17","unstructured":"Lin, M., Chen, Q., and Yan, S. (2013). Network in network. arXiv."},{"key":"ref_18","unstructured":"Srivastava, R.K., Greff, K., and Schmidhuber, J. (2015). Highway Networks. arXiv."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. arXiv.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., van der Maaten, L., and Weinberger, K.Q. (2016). Densely Connected Convolutional Networks. 
arXiv.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"9","DOI":"10.1016\/j.compag.2014.09.013","article-title":"Neural identification of selected apple pests","volume":"110","author":"Boniecki","year":"2015","journal-title":"Comput. Electron. Agric."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"102","DOI":"10.1016\/j.compag.2018.12.042","article-title":"SLIC_SVM based leaf diseases saliency map extraction of tea plant","volume":"157","author":"Sun","year":"2019","journal-title":"Comput. Electron. Agric."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"311","DOI":"10.1016\/j.compag.2018.01.009","article-title":"Deep learning models for plant disease detection and diagnosis","volume":"145","author":"Ferentinos","year":"2018","journal-title":"Comput. Electron. Agric."},{"key":"ref_24","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Zagoruyko, S., and Komodakis, N. (2016, January 19\u201322). Wide residual networks. Proceedings of the British Machine Vision Conference (BMVC), York, UK.","DOI":"10.5244\/C.30.87"},{"key":"ref_26","unstructured":"Lee, C.Y., Xie, S., Gallagher, P., Zhang, Z., and Tu, Z. (2015, January 9\u201312). Deeply-supervised nets. Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, San Diego, CA, USA."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 22\u201325). Xception: Deep Learning with Depthwise Separable Convolutions. Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_28","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). 
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L. (2018). MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zhou, X., Lin, M., and Sun, J. (2017). ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. arXiv.","DOI":"10.1109\/CVPR.2018.00716"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Ma, N., Zhang, X., Zheng, H., and Sun, J. (2018). ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. arXiv.","DOI":"10.1007\/978-3-030-01264-9_8"},{"key":"ref_32","unstructured":"Nair, V., and Hinton, G.E. (2010, January 21\u201324). Rectified linear units improve restricted boltzmann machines. Proceedings of the 27th International Conference on Machine Learning (ICML), Haifa, Israel."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Dollar, P., Tu, Z., and He, K. (2016). Aggregated Residual Transformations for Deep Neural Networks. arXiv.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"674","DOI":"10.1016\/j.patcog.2018.03.008","article-title":"Handling data irregularities in classification: Foundations, trends, and future challenges","volume":"81","author":"Das","year":"2018","journal-title":"Pattern Recognit."},{"key":"ref_35","unstructured":"Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. (2014, January 8\u201313). How transferable are features in deep neural networks?. Proceedings of the 28th International Conference on Neural Information Processing Systems (NeurIPS), Montreal, PQ, Canada."},{"key":"ref_36","unstructured":"Ioffe, S., and Szegedy, C. (2015). 
Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv."},{"key":"ref_37","unstructured":"Sutskever, I., Martens, J., Dahl, G.E., and Hinton, G.E. (2013, January 16\u201321). On the importance of initialization and momentum in deep learning. Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta, GA, USA."},{"key":"ref_38","unstructured":"Masters, D., and Luschi, C. (2018). Revisiting Small Batch Training for Deep Neural Networks. arXiv."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015, January 11\u201318). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Proceedings of the 2015 International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.123"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Zeiler, M., and Fergus, R. (2013). Visualizing and Understanding Convolutional Networks. arXiv.","DOI":"10.1007\/978-3-319-10590-1_53"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/14\/3195\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T13:07:43Z","timestamp":1760188063000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/14\/3195"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,7,19]]},"references-count":40,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2019,7]]}},"alternative-id":["s19143195"],"URL":"https:\/\/doi.org\/10.3390\/s19143195","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,7,19]]}}}