
Fast Context-Based Low-Light Image Enhancement via Neural Implicit Representations

Conference paper
Published in: Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Current deep learning-based low-light image enhancement methods often struggle with high-resolution images and fail to meet the practical demands of visual perception across diverse and unseen scenarios. In this paper, we introduce a novel approach termed CoLIE, which redefines the enhancement process by mapping the 2D coordinates of an underexposed image to its illumination component, conditioned on local context. We propose a reconstruction of enhanced-light images within the HSV space utilizing an implicit neural function combined with an embedded guided filter, thereby significantly reducing computational overhead. Moreover, we introduce a single-image-based training loss function that improves the model's adaptability to various scenes, further increasing its practical applicability. Through rigorous evaluations, we analyze the properties of our proposed framework, demonstrating its superiority in both image quality and scene adaptability. We further evaluate CoLIE on downstream tasks in low-light scenarios, underscoring its practical utility. The source code is available at https://github.com/ctom2/colie.
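The abstract compresses the whole method into a few clauses, so a small sketch helps make the moving parts concrete. The PyTorch snippet below is a minimal illustration, not the authors' implementation: a SIREN-style implicit network maps each pixel's 2D coordinates, concatenated with a local context window taken from the HSV Value channel, to an illumination estimate, and the enhanced Value channel is recovered Retinex-style by dividing the input by that estimate. The network width, window size, loss terms, and target exposure level are illustrative assumptions, and the embedded guided filter the paper uses to keep computation at low resolution is omitted; see https://github.com/ctom2/colie for the actual method.

```python
# Minimal zero-shot sketch of the idea described in the abstract
# (assumptions noted in comments); not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Sine(nn.Module):
    """Periodic activation in the style of SIREN implicit networks."""
    def forward(self, x):
        return torch.sin(30.0 * x)


class ImplicitIllumination(nn.Module):
    """MLP: (x, y, local Value-channel patch) -> illumination in (0, 1]."""
    def __init__(self, window=7, hidden=256):
        super().__init__()
        in_dim = 2 + window * window  # 2D coordinates + flattened context patch
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), Sine(),
            nn.Linear(hidden, hidden), Sine(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, coords, context):
        return self.net(torch.cat([coords, context], dim=-1))


def enhance_value_channel(v, window=7, steps=100, lr=1e-4):
    """Fit the implicit function on a single Value channel v of shape (H, W)."""
    h, w = v.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    # Local context: the window x window neighbourhood around each pixel.
    patches = F.unfold(v[None, None], kernel_size=window, padding=window // 2)
    context = patches[0].T                     # (H*W, window*window)
    v_flat = v.reshape(-1, 1)

    model = ImplicitIllumination(window)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        illum = model(coords, context).clamp(min=1e-4)
        recon = v_flat / illum                 # Retinex-style enhancement
        # Single-image loss (illustrative): keep the illumination close to
        # the input Value channel, and push the mean output toward a target
        # exposure level of 0.6 (an assumed constant).
        loss = F.mse_loss(illum, v_flat) + 0.5 * (recon.mean() - 0.6) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        illum = model(coords, context).clamp(min=1e-4)
    return (v_flat / illum).clamp(0, 1).reshape(h, w)
```

In use, v would be the normalized Value channel of the HSV-converted input; the enhanced channel is then recombined with the original Hue and Saturation channels before converting back to RGB, which is what lets the method operate on a single channel rather than all three.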



Acknowledgements

Tomáš Chobola is supported by the Helmholtz Association under the joint research school “Munich School for Data Science - MUDS”.

Author information

Correspondence to Tomáš Chobola.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 844 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Chobola, T., Liu, Y., Zhang, H., Schnabel, J.A., Peng, T. (2025). Fast Context-Based Low-Light Image Enhancement via Neural Implicit Representations. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15144. Springer, Cham. https://doi.org/10.1007/978-3-031-73016-0_24
