Abstract
In neural radiance fields (NeRF), producing highly realistic renderings requires dense sampling along rays and repeated online queries of a multilayer perceptron, which makes rendering slow. Previous work has addressed this issue by designing faster evaluations of neural scene representations or by precomputing scene properties to reduce rendering time. In this paper, we propose a real-time rendering method called PNeRF. PNeRF uses continuous polynomial functions to approximate spatial volume density and color. In addition, we separate the view-direction information from the rendering equation, yielding a new formulation of the volume rendering equation. Taking the starting coordinates of the observation viewpoint and the observation direction vector as inputs to the neural network, we directly obtain the rendering result for the corresponding ray, so rendering each ray requires only a single forward pass of the network. To further improve rendering speed, we design a six-axis spherical method that stores rendering results indexed by the viewpoint's starting coordinates and direction vector, substantially accelerating rendering while maintaining quality with minimal storage. Experiments on the LLFF dataset show that our method improves rendering speed while preserving rendering quality and requiring little storage space, demonstrating its potential as an effective solution for real-time rendering.
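To make the idea of a polynomial scene representation concrete, the following is a minimal, hypothetical sketch (not the paper's actual method or closed-form solution): density and color along a ray are given as polynomial coefficients in the ray parameter t, and the standard NeRF volume-rendering quadrature is evaluated over them. The function name, coefficient layout, and sample count are illustrative assumptions.

```python
import numpy as np

def render_ray_polynomial(density_coeffs, color_coeffs, t_near=0.0, t_far=1.0, n=64):
    """Toy volume rendering of one ray whose density sigma(t) and color c(t)
    are polynomials in the ray parameter t (coefficients highest-degree first).

    density_coeffs: 1-D coefficients for sigma(t)
    color_coeffs:   iterable of 3 coefficient arrays, one per RGB channel
    Returns the composited RGB value as a length-3 array.
    """
    t = np.linspace(t_near, t_far, n)
    dt = t[1] - t[0]
    # Evaluate the polynomial density and clamp to keep it physically non-negative.
    sigma = np.clip(np.polyval(density_coeffs, t), 0.0, None)
    color = np.stack([np.polyval(c, t) for c in color_coeffs], axis=-1)  # (n, 3)
    # Standard NeRF quadrature: per-sample opacity and accumulated transmittance.
    alpha = 1.0 - np.exp(-sigma * dt)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)
```

In PNeRF's setting the polynomial form would let this per-ray integral be absorbed into a single network inference rather than evaluated sample by sample; the sketch above only illustrates the quadrature that such a representation replaces.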
Data availability
The LLFF dataset can be obtained from https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1. Our code will be made available on request.
Acknowledgements
This work is supported by the National Key R&D Program of China (2022YFB4501600).
Author information
Contributions
Liping Zhu was involved in conceptualization and resources. Haibo Zhou was involved in methodology, writing of the original draft, visualization, and review and editing. Silin Wu was involved in visualization and investigation. Tianrong Chen was involved in review. Hongjun Sun was involved in conceptualization and resources.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhu, L., Zhou, H., Wu, S. et al. Polynomial for real-time rendering of neural radiance fields. Vis Comput 41, 4287–4300 (2025). https://doi.org/10.1007/s00371-024-03660-4
