𝗗𝗮𝘆-𝟰𝟵𝟰 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴

KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints, by Reality Labs Research at Meta

Follow me for similar posts: Ashish Patel
-------------------------------------------------------------------
𝗜𝗻𝘁𝗲𝗿𝗲𝘀𝘁𝗶𝗻𝗴 𝗙𝗮𝗰𝘁𝘀:
🔸 This paper was published at CVPR 2022.
🔸 The key idea is to leverage keypoints as a universal representation for articulated objects, in order to predict pixel-aligned neural radiance fields that represent volumetric avatars.
🔸 Given estimated keypoints and a query point, the authors propose a novel relative spatial encoding that anchors pixel-aligned features.
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
👉 Image-based volumetric avatars built on pixel-aligned features promise generalization to unseen poses and identities.
👉 Prior work leverages global spatial encodings and multi-view geometric consistency to reduce spatial ambiguity.
👉 However, global encodings often overfit to the training-data distribution, and learning multi-view-consistent reconstruction from sparse views is difficult.
👉 This work investigates common issues with existing spatial encodings and proposes a simple yet highly effective approach to modeling high-fidelity volumetric avatars from sparse views.
👉 One of the key ideas is to encode relative spatial 3D information via sparse 3D keypoints.
👉 This approach is robust both to viewpoint sparsity and to cross-dataset domain gaps.
👉 The approach outperforms state-of-the-art methods on head reconstruction.
👉 On human-body reconstruction for unseen subjects, it also achieves performance comparable to prior art that uses a parametric human body model and temporal feature aggregation.
👉 The experiments show that a majority of errors in prior work stem from an inappropriate choice of spatial encoding, suggesting a new direction for high-fidelity image-based avatar modeling.
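To make the core idea concrete, here is a minimal sketch of what "relative spatial encoding via sparse 3D keypoints" could look like: a 3D query point is described by its depth relative to each keypoint in a camera's view space, followed by a sinusoidal positional encoding. The function name, the use of depth differences, and the encoding details are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def relative_depth_encoding(query, keypoints, cam_extrinsic, num_freqs=4):
    """Hypothetical sketch of a keypoint-relative spatial encoding.

    query:         (3,)  3D query point in world coordinates
    keypoints:     (K,3) sparse 3D keypoints in world coordinates
    cam_extrinsic: (4,4) world-to-camera transform
    Returns a (K * 2 * num_freqs,) feature vector.
    """
    # Transform the query and keypoints into camera coordinates.
    R, t = cam_extrinsic[:3, :3], cam_extrinsic[:3, 3]
    q_cam = R @ query + t          # (3,)
    k_cam = keypoints @ R.T + t    # (K, 3)

    # Depth of the query RELATIVE to each keypoint -- this is what makes
    # the encoding local and pose-agnostic, unlike a global encoding.
    dz = q_cam[2] - k_cam[:, 2]    # (K,)

    # Sinusoidal positional encoding of each relative depth (assumed form).
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    enc = np.concatenate([np.sin(dz[:, None] * freqs),
                          np.cos(dz[:, None] * freqs)], axis=1)  # (K, 2F)
    return enc.reshape(-1)
```

Because every coordinate is expressed relative to nearby keypoints rather than in an absolute frame, the same encoding transfers across subjects and datasets, which is the robustness property the post highlights.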
#computervision #artificialintelligence #deeplearning #data
