
CamoNAS: neural architecture search for enhanced camouflaged object detection

  • Research
  • Published in The Visual Computer

Abstract

Camouflaged Object Detection (COD) aims to locate and segment objects that blend into their surroundings, a task made difficult by weak edge cues and ill-defined boundaries. Traditional COD models rely on hand-designed architectures and multi-scale feature fusion, which are often guided by intuition rather than systematic search. This paper introduces CamoNAS, a frequency-aware, multi-resolution Neural Architecture Search (NAS) framework for COD. CamoNAS automatically searches both cell-level operations and network-level downsampling paths, forming a hierarchical search space tailored to detecting camouflaged objects. In addition, it adopts an RGB-frequency dual-stream architecture in which a learnable wavelet transform complements the RGB spatial stream. CamoNAS achieves state-of-the-art performance on four COD benchmarks (CAMO, COD10K, NC4K, CHAMELEON), highlighting the effectiveness of NAS for COD. Our code is available at https://github.com/rendaweiSIMIT/CamoNAS.
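As background for the frequency stream mentioned in the abstract: the paper's wavelet transform is learnable, which is not reproduced here, but the underlying operation can be illustrated with a fixed one-level 2D Haar decomposition that splits an image into a low-frequency approximation and three high-frequency detail sub-bands. The helper name `haar_dwt2` and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a fixed (non-learnable) 2D Haar wavelet transform.

    Splits an H x W image (H, W even) into four half-resolution
    sub-bands: LL (low-frequency approximation) and LH / HL / HH
    (horizontal, vertical, and diagonal high-frequency detail).
    With the 1/2 normalization the transform is orthonormal, so
    total energy is preserved across the sub-bands.
    """
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

# A smooth ramp image: its detail bands are small, so nearly all
# energy concentrates in LL, which is why high-frequency sub-bands
# highlight edges — the weak cues that matter for camouflage.
img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
```

In a dual-stream COD model, the detail sub-bands (LH, HL, HH) would feed the frequency branch while the RGB image feeds the spatial branch; CamoNAS additionally learns the filter weights rather than fixing them to Haar coefficients.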


Figs. 1–4 (images not included in this extract)


Data Availability

This study uses only publicly available benchmark datasets for camouflaged object detection, including CAMO, COD10K, CHAMELEON, and NC4K. These datasets can be downloaded from the corresponding project websites or repositories published by the original authors. No new data were generated in this work, and all experimental data are derived from these existing public datasets.


Author information


Contributions

D.R. conceived the main idea of CamoNAS, designed the overall framework, implemented the method, and conducted the main experiments and analyses. Y.Z. contributed to the design of the NAS search space and frequency branch, assisted with implementation, and performed ablation studies and result visualization. H.T. provided supervision on methodology, helped refine the problem formulation, and substantially revised and edited the manuscript. Q.Z. contributed to the neural architecture search strategy, experimental setup, and code verification, and helped improve the presentation of the experimental results. J.L. supervised the project, provided overall guidance on research direction, and critically reviewed and revised the manuscript for important intellectual content. All authors reviewed and approved the final version of the manuscript.

Corresponding author

Correspondence to Dawei Ren.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ren, D., Zhang, Y., Tang, H. et al. CamoNAS: neural architecture search for enhanced camouflaged object detection. Vis Comput 42, 194 (2026). https://doi.org/10.1007/s00371-026-04411-3


