{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,15]],"date-time":"2026-01-15T10:38:04Z","timestamp":1768473484239,"version":"3.49.0"},"reference-count":33,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2023,8,22]],"date-time":"2023-08-22T00:00:00Z","timestamp":1692662400000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Expert Systems"],"published-print":{"date-parts":[[2025,1]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>With the development of smart technologies (e.g., Internet of Things (IoT), Artificial Intelligence (AI), and Big Data), people have start using them for various purposes. The real\u2010time IoT devices generate an enormous amount of video and imaging data, leading to the concept of complex data structure. And people face major challenges in mining and extraction of useful features and information from such data. How to efficiently analyse and process video data to obtain valuable information has become a key research topic. The traditional manual annotation methods are unable to meet the current demand for the growing number of videos. Therefore, a more convenient method for processing video data needs to be developed. The research objective of this paper is dance videos, and the goal is to realize automatic recognition of dance movements. In this paper, a Dual Convolutional Neural Network Algorithm (DCNNA) is proposed for the automatic recognition of different dance movements in live and remote videos. DCNNA can extract video information more comprehensively and efficiently. It can simultaneously extract the light flow features corresponding to the action changes and the information contained in each frame of the video. Therefore, the dance movements can be more accurately identified. In the experiments, the performance of DCNNA is evaluated based on dance videos and compared with Inception V3 and 3D\u2010CNN. All the experiments illustrate the superior performance of the proposed DCNN algorithm. 
From the experimental results, it is quite obvious that the F1 score of the proposed DCNNA is 11% and 6% higher than that of the Inception V3 and 3D\u2010CNN, respectively.<\/jats:p>","DOI":"10.1111\/exsy.13422","type":"journal-article","created":{"date-parts":[[2023,8,22]],"date-time":"2023-08-22T21:10:18Z","timestamp":1692738618000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["Pose recognition of dancing images using fuzzy deep learning technique in an <scp>IoT<\/scp> environment"],"prefix":"10.1111","volume":"42","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6119-8530","authenticated-orcid":false,"given":"Dongxia","family":"Zheng","sequence":"first","affiliation":[{"name":"College of Physical Education Hunan University of Science and Technology  Xiangtan China"}]},{"given":"Yi","family":"Yuan","sequence":"additional","affiliation":[{"name":"College of Physical Education Hunan University of Science and Technology  Xiangtan China"}]}],"member":"311","published-online":{"date-parts":[[2023,8,22]]},"reference":[{"key":"e_1_2_9_2_1","doi-asserted-by":"publisher","DOI":"10.1186\/s12864-019-6413-7"},{"key":"e_1_2_9_3_1","doi-asserted-by":"publisher","DOI":"10.7717\/peerj-cs.623"},{"key":"e_1_2_9_4_1","first-page":"3097","article-title":"Spatio\u2010temporal vector of locally max pooled features for action recognition in videos","author":"Cosmin Duta I.","year":"2021","journal-title":"Proceedings of the lEEE Conference on Computer Vision and Pattern Recognition"},{"key":"e_1_2_9_5_1","doi-asserted-by":"publisher","DOI":"10.47852\/bonviewJCCE19522514205514"},{"key":"e_1_2_9_6_1","unstructured":"Deng J. Dong W. Socher R. Li L.\u2010J. Li K. &Fei\u2010Fei L.(2021).ImageNet: A large\u2010scale hierarchical image database. Computer vision and pattern recognition (CVPR) 2011 IEEE Conference on. IEEE. pp. 248\u2013255."},{"issue":"2","key":"e_1_2_9_7_1","first-page":"133","article-title":"Evaluation of the convincing ability through presentation skills of pre\u2010service management wizards using AI via T2 linguistic fuzzy logic.","volume":"2","author":"Dey P.","year":"2022","journal-title":"Engineering"},{"key":"e_1_2_9_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2016.2599174"},{"key":"e_1_2_9_9_1","first-page":"3468","article-title":"Spatiotemporal residual networks for video action recognition","author":"Feichtenhofer C.","year":"2020","journal-title":"NIPS"},{"key":"e_1_2_9_10_1","first-page":"2","article-title":"Convolutional two\u2010stream network fusion for video action recognition","author":"Feichtenhofer C.","year":"2016","journal-title":"CVPR"},{"key":"e_1_2_9_11_1","unstructured":"Gordon\u2010Rodriguez E. Loaiza\u2010Ganem G. Pleiss G. &Cunningham J. P.(2020).Uses and abuses of the cross\u2010entropy loss: Case studies in modern deep learning pp. 1\u201310."},{"key":"e_1_2_9_12_1","first-page":"770","article-title":"Deep residual learning for image recognition","author":"He K.","year":"2020","journal-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition"},{"key":"e_1_2_9_13_1","first-page":"3304","article-title":"Aggregating local descriptors into a compact image representation. 
{"key":"e_1_2_9_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2012.59"},{"key":"e_1_2_9_15_1","doi-asserted-by":"publisher","DOI":"10.47852\/bonviewJCCE696205514"},{"key":"e_1_2_9_16_1","first-page":"1097","article-title":"Imagenet classification with deep convolutional neural networks","volume":"25","author":"Krizhevsky A.","year":"2012","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_9_17_1","unstructured":"Sharma S. Kiros R. &Salakhutdinov R.(2018).Action recognition using visual attention. In International Conference on Learning Representations (ICLR)."},{"key":"e_1_2_9_18_1","doi-asserted-by":"publisher","DOI":"10.1093\/oxfordhb\/9780199754281.013.38"},{"key":"e_1_2_9_19_1","volume-title":"Very deep convolutional networks for large\u2010scale image recognition","author":"Simonyan K.","year":"2020"},{"key":"e_1_2_9_20_1","first-page":"568","article-title":"Two\u2010stream convolutional networks for action recognition in videos","volume":"27","author":"Simonyan K.","year":"2021","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_9_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2021.3108792"},{"key":"e_1_2_9_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2022.3145999"},{"key":"e_1_2_9_23_1","first-page":"1","article-title":"Going deeper with convolutions","author":"Szegedy C.","year":"2021","journal-title":"CVPR"},{"issue":"7","key":"e_1_2_9_24_1","first-page":"8","article-title":"C3D: Generic features for video analysis","volume":"2","author":"Tran D.","year":"2014","journal-title":"CoRR"},{"key":"e_1_2_9_25_1","first-page":"6450","article-title":"A closer look at spatiotemporal convolutions for action recognition","author":"Tran D.","year":"2018","journal-title":"CVPR"},{"key":"e_1_2_9_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2017.2712608"},{"key":"e_1_2_9_27_1","first-page":"4041","article-title":"Differential recurrent neural networks for action recognition","author":"Veeriah V.","year":"2015","journal-title":"ICCV"},{"key":"e_1_2_9_28_1","doi-asserted-by":"crossref","unstructured":"Wang H. Klaser A. Schmid C. et al. (2019).Action recognition by dense trajectories. In Computer Vision and Pattern Recognition (CVPR) 2011 IEEE Conference on. IEEE pp. 3169\u20133176.","DOI":"10.1109\/CVPR.2011.5995407"},{"key":"e_1_2_9_29_1","first-page":"131","volume-title":"\"Taylor expansion\" principles of parallel scientific computing: A first guide to numerical concepts and programming methods","author":"Weinzierl T.","year":"2022"},{"key":"e_1_2_9_30_1","first-page":"791","article-title":"Multi\u2010stream multi\u2010class fusion of deep networks for video classification","author":"Wu Z.","year":"2019","journal-title":"ACM on multimedia conference"},{"key":"e_1_2_9_31_1","first-page":"20","article-title":"Temporal segment networks: Towards good practices for deep action recognition","volume":"9912","author":"Wang L.","year":"2019","journal-title":"ECCV"},{"key":"e_1_2_9_32_1","first-page":"60.1","article-title":"Exploiting image\u2010trained CNN architectures for unconstrained video classification","author":"Zha S.","year":"2021","journal-title":"BMVC"},{"key":"e_1_2_9_33_1","first-page":"2718","article-title":"Real\u2010time action recognition with enhanced motion vector CNNs","author":"Zhang B.","year":"2016","journal-title":"CVPR"},{"key":"e_1_2_9_34_1","first-page":"1991","article-title":"A key volume mining deep framework for action recognition","author":"Zhu W.","year":"2016","journal-title":"CVPR"}],"container-title":["Expert Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1111\/exsy.13422","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,19]],"date-time":"2025-08-19T07:01:00Z","timestamp":1755586860000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/exsy.13422"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,22]]},"references-count":33,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1]]}},"alternative-id":["10.1111\/exsy.13422"],"URL":"https:\/\/doi.org\/10.1111\/exsy.13422","archive":["Portico"],"relation":{},"ISSN":["0266-4720","1468-0394"],"issn-type":[{"value":"0266-4720","type":"print"},{"value":"1468-0394","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,22]]},"assertion":[{"value":"2023-04-03","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-08-01","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-08-22","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"e13422"}}