{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,31]],"date-time":"2026-01-31T08:23:48Z","timestamp":1769847828612,"version":"3.49.0"},"reference-count":57,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2024,7,20]],"date-time":"2024-07-20T00:00:00Z","timestamp":1721433600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"China Academy of Electronics and Information Technology"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>3D object detection is a challenging and promising task for autonomous driving and robotics, benefiting significantly from multi-sensor fusion, such as LiDAR and cameras. Conventional methods for sensor fusion rely on a projection matrix to align the features from LiDAR and cameras. However, these methods often suffer from inadequate flexibility and robustness, leading to lower alignment accuracy under complex environmental conditions. Addressing these challenges, in this paper, we propose a novel Bidirectional Attention Fusion module, named BAFusion, which effectively fuses the information from LiDAR and cameras using cross-attention. Unlike the conventional methods, our BAFusion module can adaptively learn the cross-modal attention weights, making the approach more flexible and robust. Moreover, drawing inspiration from advanced attention optimization techniques in 2D vision, we developed the Cross Focused Linear Attention Fusion Layer (CFLAF Layer) and integrated it into our BAFusion pipeline. This layer optimizes the computational complexity of attention mechanisms and facilitates advanced interactions between image and point cloud data, showcasing a novel approach to addressing the challenges of cross-modal attention calculations. 
We evaluated our method on the KITTI dataset using various baseline networks, such as PointPillars, SECOND, and Part-A2, and demonstrated consistent improvements in 3D object detection performance over these baselines, especially for smaller objects like cyclists and pedestrians. Our approach achieves competitive results on the KITTI benchmark.<\/jats:p>","DOI":"10.3390\/s24144718","type":"journal-article","created":{"date-parts":[[2024,7,22]],"date-time":"2024-07-22T14:45:53Z","timestamp":1721659553000},"page":"4718","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["BAFusion: Bidirectional Attention Fusion for 3D Object Detection Based on LiDAR and Camera"],"prefix":"10.3390","volume":"24","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-4878-0573","authenticated-orcid":false,"given":"Min","family":"Liu","sequence":"first","affiliation":[{"name":"Institute of Advanced Technology, University of Science and Technology of China, Hefei 230088, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6932-4052","authenticated-orcid":false,"given":"Yuanjun","family":"Jia","sequence":"additional","affiliation":[{"name":"China Academy of Electronics and Information Technology, Beijing 100041, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4689-1542","authenticated-orcid":false,"given":"Youhao","family":"Lyu","sequence":"additional","affiliation":[{"name":"Institute of Advanced Technology, University of Science and Technology of China, Hefei 230088, China"}]},{"given":"Qi","family":"Dong","sequence":"additional","affiliation":[{"name":"China Academy of Electronics and Information Technology, Beijing 100041, China"}]},{"given":"Yanyu","family":"Yang","sequence":"additional","affiliation":[{"name":"China Academy of Electronics and Information Technology, Beijing 100041, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2024,7,20]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"108796","DOI":"10.1016\/j.patcog.2022.108796","article-title":"3D object detection for autonomous driving: A survey","volume":"130","author":"Qian","year":"2022","journal-title":"Pattern Recognit."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Wang, L., Li, R., Shi, H., Sun, J., Zhao, L., Seah, H.S., Quah, C.K., and Tandianus, B. (2019). Multi-channel convolutional neural network based 3D object detection for indoor robot environmental perception. Sensors, 19.","DOI":"10.3390\/s19040893"},{"key":"ref_3","unstructured":"Huang, K., Shi, B., Li, X., Li, X., Huang, S., and Li, Y. (2022). Multi-modal sensor fusion for auto driving perception: A survey. arXiv."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"2122","DOI":"10.1007\/s11263-023-01784-z","article-title":"Multi-modal 3D object detection in autonomous driving: A survey","volume":"131","author":"Wang","year":"2023","journal-title":"Int. J. Comput. Vis."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"722","DOI":"10.1109\/TITS.2020.3023541","article-title":"Deep learning for image and point cloud fusion in autonomous driving: A review","volume":"23","author":"Cui","year":"2021","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"De Silva, V., Roche, J., and Kondoz, A. (2018). Robust fusion of LiDAR and wide-angle camera data for autonomous mobile robots. Sensors, 18.","DOI":"10.3390\/s18082730"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Vora, S., Lang, A.H., Helou, B., and Beijbom, O. (2020, January 14\u201319). Pointpainting: Sequential fusion for 3D object detection. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual.","DOI":"10.1109\/CVPR42600.2020.00466"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Wang, C., Ma, C., Zhu, M., and Yang, X. (2021, January 19\u201325). Pointaugmenting: Cross-modal augmentation for 3D object detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual.","DOI":"10.1109\/CVPR46437.2021.01162"},{"key":"ref_9","unstructured":"Yin, T., Zhou, X., and Kr\u00e4henb\u00fchl, P. (2021, January 6\u201314). Multimodal virtual point 3D detection. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Virtual."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Xu, D., Anguelov, D., and Jain, A. (2018, January 18\u201323). Pointfusion: Deep sensor fusion for 3D bounding box estimation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00033"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Yoo, J.H., Kim, Y., Kim, J., and Choi, J.W. (2020, January 23\u201328). 3D-cvf: Generating joint camera and lidar features using cross-view spatial feature fusion for 3D object detection. Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.","DOI":"10.1007\/978-3-030-58583-9_43"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Huang, T., Liu, Z., Chen, X., and Bai, X. (2020, January 23\u201328). EPNet: Enhancing Point Features with Image Semantics for 3D Object Detection. Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.","DOI":"10.1007\/978-3-030-58555-6_3"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Chen, X., Ma, H., Wan, J., Li, B., and Xia, T. (2017, January 21\u201326). Multi-view 3D object detection network for autonomous driving. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.691"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Ku, J., Mozifian, M., Lee, J., Harakeh, A., and Waslander, S.L. (2018, January 1\u20135). Joint 3D proposal generation and object detection from view aggregation. Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.","DOI":"10.1109\/IROS.2018.8594049"},{"key":"ref_15","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Bai, X., Hu, Z., Zhu, X., Huang, Q., Chen, Y., Fu, H., and Tai, C.L. (2022, January 18\u201324). TransFusion: Robust lidar-camera fusion for 3D object detection with transformers. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00116"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Chen, Z., Li, Z., Zhang, S., Fang, L., Jiang, Q., and Zhao, F. (2022, January 23\u201327). Deformable Feature Aggregation for Dynamic Multi-modal 3D Object Detection. Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-20074-8_36"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Yan, J., Liu, Y., Sun, J., Jia, F., Li, S., Wang, T., and Zhang, X. (2023, January 2\u20136). Cross modal transformer: Towards fast and robust 3D object detection. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Paris, France.","DOI":"10.1109\/ICCV51070.2023.01675"},{"key":"ref_19","unstructured":"Yang, Z., Chen, J., Miao, Z., Li, W., Zhu, X., and Zhang, L. (December, January 28). DeepInteraction: 3D object detection via modality interaction. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), New Orleans, LA, USA."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Pang, S., Morris, D., and Radha, H. (2020\u201324, January 24). CLOCs: Camera-LiDAR object candidates fusion for 3D object detection. Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Virtual.","DOI":"10.1109\/IROS45743.2020.9341791"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Pang, S., Morris, D., and Radha, H. (2022, January 4\u20138). Fast-CLOCs: Fast camera-LiDAR object candidates fusion for 3D object detection. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA.","DOI":"10.1109\/WACV51458.2022.00380"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Han, D., Pan, X., Han, Y., Song, S., and Huang, G. (2023, January 2\u20136). Flatten transformer: Vision transformer using focused linear attention. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Paris, France.","DOI":"10.1109\/ICCV51070.2023.00548"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Geiger, A., Lenz, P., and Urtasun, R. (2012, January 16\u201321). Are we ready for autonomous driving? The KITTI vision benchmark suite. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA.","DOI":"10.1109\/CVPR.2012.6248074"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Lang, A.H., Vora, S., Caesar, H., Zhou, L., Yang, J., and Beijbom, O. (2019, January 16\u201320). 
PointPillars: Fast encoders for object detection from point clouds. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01298"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Yan, Y., Mao, Y., and Li, B. (2018). SECOND: Sparsely embedded convolutional detection. Sensors, 18.","DOI":"10.3390\/s18103337"},{"key":"ref_26","first-page":"2647","article-title":"From points to parts: 3D object detection from point cloud with part-aware and part-aggregation network","volume":"43","author":"Shi","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Chen, X., Kundu, K., Zhang, Z., Ma, H., Fidler, S., and Urtasun, R. (2016, January 27\u201330). Monocular 3D object detection for autonomous driving. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR.2016.236"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Li, P., Chen, X., and Shen, S. (2019, January 16\u201320). Stereo r-cnn based 3D object detection for autonomous driving. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00783"},{"key":"ref_29","unstructured":"Huang, J., Huang, G., Zhu, Z., Yun, Y., and Du, D. (2021). BEVDet: High-performance Multi-camera 3D Object Detection in Bird-Eye-View. arXiv."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Philion, J., and Fidler, S. (2020, January 23\u201328). Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3D. Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.","DOI":"10.1007\/978-3-030-58568-6_12"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Li, Z., Wang, W., Li, H., Xie, E., Sima, C., Lu, T., Qiao, Y., and Dai, J. 
(2022, January 23\u201327). Bevformer: Learning bird\u2019s-eye-view representation from multi-camera images via spatiotemporal transformers. Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-20077-9_1"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Liu, Y., Yan, J., Jia, F., Li, S., Gao, A., Wang, T., and Zhang, X. (2023, January 2\u20136). PETRv2: A unified framework for 3D perception from multi-camera images. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Paris, France.","DOI":"10.1109\/ICCV51070.2023.00302"},{"key":"ref_33","unstructured":"Qi, C.R., Su, H., Mo, K., and Guibas, L.J. (2017, January 21\u201326). PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA."},{"key":"ref_34","unstructured":"Qi, C.R., Yi, L., Su, H., and Guibas, L.J. (2017, January 4\u20139). PointNet++: Deep hierarchical feature learning on point sets in a metric space. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Shi, S., Wang, X., and Li, H. (2019, January 15\u201320). PointRCNN: 3D object proposal generation and detection from point cloud. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00086"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Zhou, Y., and Tuzel, O. (2018, January 18\u201323). VoxelNet: End-to-end learning for point cloud based 3D object detection. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00472"},{"key":"ref_37","unstructured":"Shi, S., Guo, C., Jiang, L., Wang, Z., Shi, J., Wang, X., and Li, H. 
(2022, January 14\u201319). PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"18879","DOI":"10.1109\/JSEN.2023.3293515","article-title":"3ONet: 3-D Detector for Occluded Object Under Obstructed Conditions","volume":"23","author":"Hoang","year":"2023","journal-title":"IEEE Sens. J."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Qi, C.R., Liu, W., Wu, C., Su, H., and Guibas, L.J. (2018, January 18\u201323). Frustum PointNets for 3D object detection from rgb-d data. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00102"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Sindagi, V.A., Zhou, Y., and Tuzel, O. (2019, January 20\u201324). MVX-Net: Multimodal voxelnet for 3D object detection. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8794195"},{"key":"ref_41","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 11\u201317). Swin Transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Virtual.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. (2020, January 23\u201328). End-to-End object detection with transformers. 
Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK.","DOI":"10.1007\/978-3-030-58452-8_13"},{"key":"ref_44","unstructured":"Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., and Luo, P. (2021, January 6\u201314). SegFormer: Simple and efficient design for semantic segmentation with transformers. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Virtual."},{"key":"ref_45","unstructured":"Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F. (2020, January 13\u201318). Transformers are RNNs: Fast autoregressive transformers with linear attention. Proceedings of the International Conference on Machine Learning (ICML), Virtual."},{"key":"ref_46","unstructured":"Qin, Z., Sun, W., Deng, H., Li, D., Wei, Y., Lv, B., Yan, J., Kong, L., and Zhong, Y. (2022). cosformer: Rethinking softmax in attention. arXiv."},{"key":"ref_47","unstructured":"Chen, X., Kundu, K., Zhu, Y., Berneshawi, A.G., Ma, H., Fidler, S., and Urtasun, R. (2015, January 7\u201312). 3D object proposals for accurate object class detection. Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada."},{"key":"ref_48","unstructured":"Contributors, M. (2024, March 24). MMDetection3D: OpenMMLab Next-Generation Platform for General 3D Object Detection. Available online: https:\/\/github.com\/open-mmlab\/mmdetection3d."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Liao, H.Y.M., Wu, Y.H., Chen, P.Y., Hsieh, J.W., and Yeh, I.H. (2020, January 14\u201319). CSPNet: A new backbone that can enhance learning capability of CNN. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Virtual.","DOI":"10.1109\/CVPRW50498.2020.00203"},{"key":"ref_51","unstructured":"Ge, Z., Liu, S., Wang, F., Li, Z., and Sun, J. (2021). Yolox: Exceeding yolo series in 2021. arXiv."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Xie, L., Xiang, C., Yu, Z., Xu, G., Yang, Z., Cai, D., and He, X. (2020, January 7\u201312). PI-RCNN: An efficient multi-sensor 3D object detector with point-based attentive cont-conv fusion module. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6933"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Liu, Z., Zhao, X., Huang, T., Hu, R., Zhou, Y., and Bai, X. (2020, January 7\u201312). Tanet: Robust 3D object detection from point clouds with triple attention. Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6837"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Deng, J., Zhou, W., Zhang, Y., and Li, H. (2021). From Multi-View to Hollow-3D: Hallucinated Hollow-3D R-CNN for 3D Object Detection. IEEE Trans. Circuits Syst. Video Technol., 31.","DOI":"10.1109\/TCSVT.2021.3100848"},{"key":"ref_55","unstructured":"Liu, Z., Tang, H., Amini, A., Yang, X., Mao, H., Rus, D.L., and Han, S. (June, January 29). Bevfusion: Multi-task multi-sensor fusion with unified bird\u2019s-eye view representation. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), London, UK."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Qin, Y., Wang, C., Kang, Z., Ma, N., Li, Z., and Zhang, R. (2023, January 2\u20136). SupFusion: Supervised LiDAR-camera fusion for 3D object detection. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Paris, France.","DOI":"10.1109\/ICCV51070.2023.02012"},{"key":"ref_57","unstructured":"Cai, H., Gan, C., and Han, S. (2022). Efficientvit: Enhanced linear attention for high-resolution low-computation visual recognition. arXiv."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/14\/4718\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T15:20:27Z","timestamp":1760109627000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/24\/14\/4718"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,7,20]]},"references-count":57,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2024,7]]}},"alternative-id":["s24144718"],"URL":"https:\/\/doi.org\/10.3390\/s24144718","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,20]]}}}