{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:50:53Z","timestamp":1760147453395,"version":"build-2065373602"},"reference-count":31,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2023,2,4]],"date-time":"2023-02-04T00:00:00Z","timestamp":1675468800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100005632","name":"Polish National Center for Research and Development","doi-asserted-by":"publisher","award":["TANGO-IV-A\/0038\/2019-00"],"award-info":[{"award-number":["TANGO-IV-A\/0038\/2019-00"]}],"id":[{"id":"10.13039\/501100005632","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In the five years between 2017 and 2022, IP video traffic tripled, according to Cisco. User-Generated Content (UGC) is mainly responsible for user-generated IP video traffic. The development of widely accessible knowledge and affordable equipment makes it possible to produce UGCs of quality that is practically indistinguishable from professional content, although at the beginning of UGC creation, this content was frequently characterized by amateur acquisition conditions and unprofessional processing. In this research, we focus only on UGC content, whose quality is obviously different from that of professional content. For the purpose of this paper, we refer to \u201cin the wild\u201d as a closely related idea to the general idea of UGC, which is its particular case. Studies on UGC recognition are scarce. According to research in the literature, there are currently no real operational algorithms that distinguish UGC content from other content. In this study, we demonstrate that the XGBoost machine learning algorithm (Extreme Gradient Boosting) can be used to develop a novel objective \u201cin the wild\u201d video content recognition model. The final model is trained and tested using video sequence databases with professional content and \u201cin the wild\u201d content. We have achieved a 0.916 accuracy value for our model. Due to the comparatively high accuracy of the model operation, a free version of its implementation is made accessible to the research community. 
It is provided via an easy-to-use Python package installable with Pip Installs Packages (pip).<\/jats:p>","DOI":"10.3390\/s23041769","type":"journal-article","created":{"date-parts":[[2023,2,6]],"date-time":"2023-02-06T02:06:43Z","timestamp":1675649203000},"page":"1769","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["\u201cIn the Wild\u201d Video Content as a Special Case of User Generated Content and a System for Its Recognition"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9123-1039","authenticated-orcid":false,"given":"Miko\u0142aj","family":"Leszczuk","sequence":"first","affiliation":[{"name":"AGH University of Science and Technology, 30-059 Krak\u00f3w, Poland"}]},{"given":"Marek","family":"Kobosko","sequence":"additional","affiliation":[{"name":"AGH University of Science and Technology, 30-059 Krak\u00f3w, Poland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5671-3726","authenticated-orcid":false,"given":"Jakub","family":"Nawa\u0142a","sequence":"additional","affiliation":[{"name":"Department of Electrical Electronic Engineering, University of Bristol, Bristol BS8 1QU, UK"}]},{"given":"Filip","family":"Korus","sequence":"additional","affiliation":[{"name":"AGH University of Science and Technology, 30-059 Krak\u00f3w, Poland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7633-8663","authenticated-orcid":false,"given":"Micha\u0142","family":"Grega","sequence":"additional","affiliation":[{"name":"AGH University of Science and Technology, 30-059 Krak\u00f3w, Poland"}]}],"member":"1968","published-online":{"date-parts":[[2023,2,4]]},"reference":[{"key":"ref_1","unstructured":"Cisco (2020). Cisco Annual Internet Report (2018\u20132023) White Paper, Cisco."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1525\/cmr.2015.57.4.43","article-title":"CGIP: Managing consumer-generated intellectual property","volume":"57","author":"Berthon","year":"2015","journal-title":"Calif. Manag. Rev."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"10","DOI":"10.1109\/MPRV.2008.85","article-title":"User-generated content","volume":"7","author":"Krumm","year":"2008","journal-title":"IEEE Pervasive Comput."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"113684","DOI":"10.1016\/j.dss.2021.113684","article-title":"Understanding the impacts of user-and marketer-generated content on free digital content consumption","volume":"154","author":"Zhao","year":"2022","journal-title":"Decis. Support Syst."},{"key":"ref_5","first-page":"2015","article-title":"Swiss TV Station Replaces Cameras with iPhones and Selfie Sticks","volume":"1","author":"Zhang","year":"2015","journal-title":"Downloaded Oct."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Leszczuk, M., Janowski, L., Nawa\u0142a, J., and Grega, M. (2022, January 28\u201330). User-Generated Content (UGC)\/In-The-Wild Video Content Recognition. Proceedings of the Asian Conference on Intelligent Information and Database Systems, Ho Chi Minh City, Vietnam.","DOI":"10.1007\/978-3-031-21967-2_29"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Karadimce, A., and Davcev, D.P. (2018). Towards Improved Model for User Satisfaction Assessment of Multimedia Cloud Services. J. Mob. Multimed., 157\u2013196.","DOI":"10.13052\/jmm1550-4646.1422"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Li, D., Jiang, T., and Jiang, M. (2019, January 21\u201325). 
Quality assessment of in-the-wild videos. Proceedings of the 27th ACM International Conference on Multimedia (MM \u201919), Nice, France.","DOI":"10.1145\/3343031.3351028"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Ying, Z., Mandal, M., Ghadiyaram, D., and Bovik, A. (2021, January 20\u201325). Patch-VQ: \u2018Patching Up\u2019 the Video Quality Problem. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01380"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Tu, Z., Chen, C.J., Wang, Y., Birkbeck, N., Adsumilli, B., and Bovik, A.C. (2021, January 19\u201322). Video Quality Assessment of User Generated Content: A Benchmark Study and a New Model. Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA.","DOI":"10.1109\/ICIP42928.2021.9506189"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Yi, F., Chen, M., Sun, W., Min, X., Tian, Y., and Zhai, G. (2021, January 19\u201322). Attention Based Network For No-Reference UGC Video Quality Assessment. Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA.","DOI":"10.1109\/ICIP42928.2021.9506420"},{"key":"ref_12","unstructured":"Marc Egger, A., and Schoder, D. (2015, January 26\u201329). Who Are We Listening To? Detecting User-Generated Content (Ugc) on the Web. Proceedings of the European Conference on Information Systems (ECIS 2015), M\u00fcnster, Germany."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Guo, J., Gurrin, C., and Lao, S. (2013, January 16\u201320). Who produced this video, amateur or professional?. Proceedings of the 3rd ACM Conference on INTERNATIONAL Conference on Multimedia Retrieval, Dallas, TX, USA.","DOI":"10.1145\/2461466.2461509"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Guo, J., and Gurrin, C. (2012, January 14). Short user-generated videos classification using accompanied audio categories. Proceedings of the 2012 ACM International Workshop on Audio and Multimedia Methods for Large-Scale Video Analysis, Lisboa, Portugal.","DOI":"10.1145\/2390214.2390220"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"10745","DOI":"10.1007\/s11042-014-2229-2","article-title":"Recent developments in visual quality monitoring by key performance indicators","volume":"75","author":"Leszczuk","year":"2016","journal-title":"Multimed. Tools Appl."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Nawa\u0142a, J., Leszczuk, M., Zajdel, M., and Baran, R. (2016). Software package for measurement of quality indicators working in no-reference model. Multimed. Tools Appl., 1\u20137.","DOI":"10.1007\/s11042-016-4195-3"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Romaniak, P., Janowski, L., Leszczuk, M., and Papir, Z. (2012, January 14\u201317). Perceptual quality assessment for H.264\/AVC compression. Proceedings of the 2012 IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, USA.","DOI":"10.1109\/CCNC.2012.6181021"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"787","DOI":"10.1007\/s11042-011-0946-3","article-title":"Framework for the integrated video quality assessment","volume":"61","author":"Mu","year":"2012","journal-title":"Multimed. Tools Appl."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Dziech, A., and Czy\u017cewski, A. (2011). 
Proceedings of the Multimedia Communications, Services and Security, Springer.","DOI":"10.1007\/978-3-642-21512-4"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Janowski, L., and Papir, Z. (2009, January 5\u20137). Modeling subjective tests of quality of experience with a Generalized Linear Model. Proceedings of the 2009 International Workshop on Quality of Multimedia Experience, Lippstadt, Germany.","DOI":"10.1109\/QOMEX.2009.5246979"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Chen, T., and Guestrin, C. (2016, January 13\u201317). Xgboost: A scalable tree boosting system. Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939785"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"18473","DOI":"10.1007\/s00521-022-07454-4","article-title":"Adam or Eve? Automatic users\u2019 gender classification via gestures analysis on touch devices","volume":"34","author":"Guarino","year":"2022","journal-title":"Neural Comput. Appl."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Xu, Z., Hu, J., and Deng, W. (2016, January 11\u201315). Recurrent convolutional neural network for video classification. Proceedings of the 2016 IEEE International Conference on Multimedia and Expo (ICME), Seattle, WA, USA.","DOI":"10.1109\/ICME.2016.7552971"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Seeland M, M.P. (2021). Multi-view classification with convolutional neural networks. PLoS ONE, 16.","DOI":"10.1371\/journal.pone.0245230"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"2000","DOI":"10.1109\/TMM.2018.2794265","article-title":"Summarization of user-generated sports video by using deep action recognition features","volume":"20","author":"Nakashima","year":"2018","journal-title":"IEEE Trans. Multimed."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Psallidas, T., Koromilas, P., Giannakopoulos, T., and Spyrou, E. (2021). Multimodal summarization of user-generated videos. Appl. Sci., 11.","DOI":"10.3390\/app11115260"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"3073","DOI":"10.1109\/TIP.2016.2562513","article-title":"CVD2014\u2014A Database for Evaluating No-Reference Video Quality Assessment Algorithms","volume":"25","author":"Nuutinen","year":"2016","journal-title":"IEEE Trans. Image Process."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"2061","DOI":"10.1109\/TCSVT.2017.2707479","article-title":"In-Capture Mobile Video Distortions: A Study of Subjective Behavior and Objective Algorithms","volume":"28","author":"Ghadiyaram","year":"2018","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Hosu, V., Hahn, F., Jenadeleh, M., Lin, H., Men, H., Szir\u00e1nyi, T., Li, S., and Saupe, D. (June, January 31). The Konstanz natural video database (KoNViD-1k). Proceedings of the 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany.","DOI":"10.1109\/QoMEX.2017.7965673"},{"key":"ref_30","unstructured":"Pinson, M.H., Boyd, K.S., Hooker, J., and Muntean, K. (February, January 30). How to choose video sequences for video quality assessment. 
Proceedings of the Seventh International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM-2013), Scottsdale, AZ, USA."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Badiola, A., Zorrilla, A.M., Garcia-Zapirain Soto, B., Grega, M., Leszczuk, M., and Sma\u00efli, K. (2020, January 8\u20139). Evaluation of Improved Components of AMIS Project for Speech Recognition, Machine Translation and Video\/Audio\/Text Summarization. Proceedings of the International Conference on Multimedia Communications, Services and Security, Krak\u00f3w, Poland.","DOI":"10.1007\/978-3-030-59000-0_24"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/4\/1769\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:24:23Z","timestamp":1760120663000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/4\/1769"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,4]]},"references-count":31,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2023,2]]}},"alternative-id":["s23041769"],"URL":"https:\/\/doi.org\/10.3390\/s23041769","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2023,2,4]]}}}
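The abstract in this record describes an XGBoost-based binary classifier that separates "in the wild" (UGC-like) video sequences from professional ones, released as a pip-installable Python package. The record itself does not include the package name, the feature definitions, or any API details, so the sketch below is only a minimal, hypothetical illustration of training such a classifier on precomputed per-video features; the placeholder data, feature dimensionality, and hyperparameters are assumptions, not the authors' implementation.

# Hypothetical sketch: XGBoost binary classifier for "in the wild" vs.
# professional video content. Features and labels here are synthetic
# placeholders; in practice each video would be summarized by a fixed-length
# vector of no-reference quality indicators before classification.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))      # placeholder: 10 features per video
y = rng.integers(0, 2, size=1000)    # placeholder: 1 = "in the wild", 0 = professional

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    eval_metric="logloss",
)
clf.fit(X_train, y_train)

# Report held-out accuracy, the same metric quoted in the abstract (0.916
# there); on synthetic random data this will of course hover around chance.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))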