{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,28]],"date-time":"2026-02-28T02:54:36Z","timestamp":1772247276443,"version":"3.50.1"},"reference-count":25,"publisher":"MDPI AG","issue":"16","license":[{"start":{"date-parts":[[2023,8,18]],"date-time":"2023-08-18T00:00:00Z","timestamp":1692316800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"German Federal Ministry of Research and Education (BMBF)","award":["CoHMed\/PersonaMed-A 13FH5I06IA"],"award-info":[{"award-number":["CoHMed\/PersonaMed-A 13FH5I06IA"]}]},{"name":"German Federal Ministry of Research and Education (BMBF)","award":["AIDE-ASD FKZ 57656657"],"award-info":[{"award-number":["AIDE-ASD FKZ 57656657"]}]},{"DOI":"10.13039\/501100001655","name":"Ministerium f\u00fcr Wissenschaft, Forschung und Kunst (MWK) of Baden-Wuerttemberg, Germany","doi-asserted-by":"publisher","award":["CoHMed\/PersonaMed-A 13FH5I06IA"],"award-info":[{"award-number":["CoHMed\/PersonaMed-A 13FH5I06IA"]}],"id":[{"id":"10.13039\/501100001655","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001655","name":"Ministerium f\u00fcr Wissenschaft, Forschung und Kunst (MWK) of Baden-Wuerttemberg, Germany","doi-asserted-by":"publisher","award":["AIDE-ASD FKZ 57656657"],"award-info":[{"award-number":["AIDE-ASD FKZ 57656657"]}],"id":[{"id":"10.13039\/501100001655","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Minimally invasive surgery, more specifically laparoscopic surgery, is an active research topic. The collaboration between surgeons and new technologies aims to improve operative procedures and to ensure patient safety. 
An integral part of operating room modernization is real-time communication between the surgeon and the data gathered by the numerous devices used during surgery. A fundamental aid for surgeons during laparoscopic surgery is the recognition of the different phases of an operation. Current research has shown a correlation between the surgical tools in use and the current phase of surgery. To this end, a robust surgical tool classifier is desired for optimal performance. In this paper, a deep learning framework embedded with a custom attention module, the P-CSEM, is proposed to refine spatial features for surgical tool classification in laparoscopic surgery videos. The approach uses convolutional neural networks (CNNs) integrated with P-CSEM attention modules at different levels of the architecture for improved feature refinement. The model was trained and tested on the popular, publicly available Cholec80 database. Results showed that the attention-integrated model achieved a mean average precision of 93.14%, and visualizations revealed the model's ability to attend more closely to tool-relevant features. 
The proposed approach demonstrates the benefits of integrating attention modules into surgical tool classification models for more robust and precise detection.<\/jats:p>","DOI":"10.3390\/s23167257","type":"journal-article","created":{"date-parts":[[2023,8,18]],"date-time":"2023-08-18T10:28:48Z","timestamp":1692354528000},"page":"7257","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["P-CSEM: An Attention Module for Improved Laparoscopic Surgical Tool Detection"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1492-1121","authenticated-orcid":false,"given":"Herag","family":"Arabian","sequence":"first","affiliation":[{"name":"Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7436-0338","authenticated-orcid":false,"given":"Tamer","family":"Abdulbaki Alshirbaji","sequence":"additional","affiliation":[{"name":"Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany"},{"name":"Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103 Leipzig, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0209-3389","authenticated-orcid":false,"given":"Nour Aldeen","family":"Jalal","sequence":"additional","affiliation":[{"name":"Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany"},{"name":"Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103 Leipzig, Germany"}]},{"given":"Sabine","family":"Krueger-Ziolek","sequence":"additional","affiliation":[{"name":"Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4709-3817","authenticated-orcid":false,"given":"Knut","family":"Moeller","sequence":"additional","affiliation":[{"name":"Institute of 
Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany"},{"name":"Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand"},{"name":"Department of Microsystems Engineering, University of Freiburg, 79110 Freiburg, Germany"}]}],"member":"1968","published-online":{"date-parts":[[2023,8,18]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"691","DOI":"10.1038\/s41551-017-0132-7","article-title":"Surgical data science for next-generation interventions","volume":"1","author":"Vedula","year":"2017","journal-title":"Nat. Biomed. Eng."},{"key":"ref_2","unstructured":"Bodenstedt, S., Allan, M., Agustinos, A., Du, X., Garcia-Peraza-Herrera, L., Kenngott, H., Kurmann, T., M\u00fcller-Stich, B., Ourselin, S., and Pakhomov, D. (2018). Comparative Evaluation of Instrument Segmentation and Tracking Methods in Minimally Invasive Surgery. arXiv."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"66","DOI":"10.1016\/j.media.2017.01.007","article-title":"The status of augmented reality in laparoscopic surgery as of 2016","volume":"37","author":"Bernhardt","year":"2017","journal-title":"Med. Image Anal."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"86","DOI":"10.1109\/TMI.2016.2593957","article-title":"EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos","volume":"36","author":"Twinanda","year":"2016","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Yang, Y.-J., Vadivelu, A.N., Pilgrim, C.H.C., Kulic, D., and Abdi, E. (2021, January 1\u20135). A Novel Perception Framework for Automatic Laparoscope Zoom Factor Control Using Tool Geometry. 
Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Jalisco, Mexico.","DOI":"10.1109\/EMBC46164.2021.9629987"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2017). Squeeze-and-excitation networks. arXiv.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_7","first-page":"3","article-title":"CBAM: Convolutional Block Attention Module","volume":"Volume 11211","author":"Ferrari","year":"2018","journal-title":"Computer Vision\u2014ECCV"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. arXiv.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., and Bernstein, M. (2015). ImageNet Large Scale Visual Recognition Challenge. arXiv.","DOI":"10.1007\/s11263-015-0816-y"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Alshirbaji, T.A., Jalal, N.A., Docherty, P.D., Neumuth, T., and M\u00f6ller, K. (2022). Robustness of Convolutional Neural Networks for Surgical Tool Classification in Laparoscopic Videos from Multiple Sources and of Multiple Types: A Systematic Evaluation. Electronics, 11.","DOI":"10.3390\/electronics11182849"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1186\/s40537-021-00509-8","article-title":"Towards more efficient CNN-based surgical tools classification using transfer learning","volume":"8","author":"Jaafari","year":"2021","journal-title":"J. Big Data"},{"key":"ref_12","unstructured":"Vardazaryan, A., Mutter, D., Marescaux, J., and Padoy, N. (2018). 
Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, Springer International Publishing."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Jalal, N.A., Alshirbaji, T.A., Docherty, P.D., Arabian, H., Neumuth, T., and Moeller, K. (IFAC-Pap, 2023). Surgical Tool Classification & Localisation Using Attention and Multi-feature Fusion Deep Learning Approach, IFAC-Pap, in press.","DOI":"10.1016\/j.ifacol.2023.10.473"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"102801","DOI":"10.1016\/j.bspc.2021.102801","article-title":"A deep learning spatial-temporal framework for detecting surgical tools in laparoscopic videos","volume":"68","author":"Alshirbaji","year":"2021","journal-title":"Biomed. Signal Process. Control"},{"key":"ref_15","unstructured":"Yang, Y., Zhao, Z., Shi, P., and Hu, S. (2021). Medical Image Understanding and Analysis, Springer International Publishing."},{"key":"ref_16","first-page":"406","article-title":"Attention-based spatial\u2013temporal neural network for accurate phase recognition in minimally invasive surgery: Feasibility and efficiency verification","volume":"9","author":"Shi","year":"2022","journal-title":"J. Comput. Des. Eng."},{"key":"ref_17","unstructured":"Czempiel, T., Paschali, M., Ostler, D., Kim, S.T., Busam, B., and Navab, N. (2021). 
Medical Image Computing and Computer Assisted Intervention\u2013MICCAI 2021, Springer International Publishing."},{"key":"ref_18","first-page":"186","article-title":"AGNet: Attention-Guided Network for Surgical Tool Presence Detection","volume":"Volume 10553","author":"Cardoso","year":"2017","journal-title":"Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"228853","DOI":"10.1109\/ACCESS.2020.3046258","article-title":"Real-Time Surgical Tool Detection in Minimally Invasive Surgery Based on Attention-Guided Convolutional Neural Network","volume":"8","author":"Shi","year":"2020","journal-title":"IEEE Access"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"676","DOI":"10.1515\/cdbme-2022-1172","article-title":"Attention Networks for Improving Surgical Tool Classification in Laparoscopic Videos","volume":"8","author":"Arabian","year":"2022","journal-title":"Curr. Dir. Biomed. Eng."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"548","DOI":"10.1515\/cdbme-2022-1140","article-title":"Analysing attention convolutional neural network for surgical tool localisation: A feasibility study","volume":"8","author":"Jalal","year":"2022","journal-title":"Curr. Dir. Biomed. Eng."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009, January 20\u201325). ImageNet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Ban, Y., Rosman, G., Ward, T., Hashimoto, D., Kondo, T., Iwaki, H., Meireles, O., and Rus, D. (June, January 30). Aggregating Long-Term Context for Learning Laparoscopic and Robot-Assisted Surgical Workflows. 
Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561770"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2016). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. arXiv.","DOI":"10.1109\/ICCV.2017.74"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"102770","DOI":"10.1016\/j.media.2023.102770","article-title":"Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark","volume":"86","author":"Wagner","year":"2023","journal-title":"Med. Image Anal."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/16\/7257\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:36:59Z","timestamp":1760128619000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/16\/7257"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,18]]},"references-count":25,"journal-issue":{"issue":"16","published-online":{"date-parts":[[2023,8]]}},"alternative-id":["s23167257"],"URL":"https:\/\/doi.org\/10.3390\/s23167257","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,18]]}}}