3D Gaussian Splatting has recently gained traction for its efficient training and real-time rendering. While its vanilla representation is mainly designed for view synthesis, recent works have extended it to scene understanding with language features. However, storing additional high-dimensional features per Gaussian for semantic information is memory-intensive, which limits the ability of such methods to segment and interpret challenging scenes. To this end, we introduce SuperGSeg, a novel approach that fosters a cohesive, context-aware hierarchical scene representation by disentangling segmentation from language field distillation. SuperGSeg first employs neural 3D Gaussians to learn geometry, instance, and hierarchical segmentation features from multi-view images with the aid of off-the-shelf 2D masks. These features are then leveraged to create a sparse set of Super-Gaussians. Super-Gaussians facilitate the lifting and distillation of 2D language features into 3D space, enabling hierarchical scene understanding with high-dimensional language feature rendering at a moderate GPU memory cost. Extensive experiments demonstrate that SuperGSeg achieves remarkable performance on both open-vocabulary object selection and semantic segmentation tasks.
We describe how Super-Gaussians group 3D Gaussians in 3D space. By employing graph-based connected component analysis, Super-Gaussians can be further organized into Instances and Parts of an Instance, as sketched below. As illustrated in the teaser, Super-Gaussians enable the learning of a language feature field for open-vocabulary query tasks. Leveraging Super-Gaussian-based Instances, we support both promptable and promptless instance segmentation. Additionally, using Super-Gaussian-based Parts allows for finer-grained hierarchical segmentation.
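
One way to picture the grouping step is as a graph problem over Super-Gaussian features. The sketch below is an illustrative assumption rather than the released SuperGSeg code: it supposes each Super-Gaussian carries an instance feature vector, connects pairs whose features are sufficiently similar, and reads off instances as connected components. The names group_super_gaussians and similarity_threshold are hypothetical.

# Minimal sketch of grouping Super-Gaussians into instances via
# graph-based connected components. Feature names and the threshold are
# illustrative assumptions, not the released SuperGSeg implementation.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components


def group_super_gaussians(features: np.ndarray, similarity_threshold: float = 0.9):
    """Group Super-Gaussians whose (hypothetical) instance features agree.

    features: (N, D) array, one feature vector per Super-Gaussian.
    Returns an integer label per Super-Gaussian; equal labels = same instance.
    """
    # Cosine similarity between every pair of Super-Gaussian features.
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    similarity = normed @ normed.T

    # Edges connect Super-Gaussians that are sufficiently similar.
    adjacency = csr_matrix(similarity > similarity_threshold)

    # Connected components of this graph form the instance groups.
    _, labels = connected_components(adjacency, directed=False)
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic clusters of Super-Gaussian features (toy example).
    feats = np.vstack([rng.normal(0, 0.01, (5, 16)) + 1.0,
                       rng.normal(0, 0.01, (5, 16)) - 1.0])
    print(group_super_gaussians(feats))  # e.g. [0 0 0 0 0 1 1 1 1 1]

Under the same assumption, running the procedure on a hierarchical (part-level) feature instead of the instance feature would yield the Parts of each Instance.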
Given an arbitrary text query, SuperGSeg can directly segment 3D Gaussians in 3D space and render the corresponding masks from any viewpoint. SuperGSeg delivers segmentation with precise boundaries and reduced noise.
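
Conceptually, an open-vocabulary query reduces to comparing a text embedding against the distilled per-Super-Gaussian language features. The snippet below is a hedged sketch under that assumption: select_gaussians_by_text, the cosine-similarity threshold, and the toy data are all hypothetical, and in practice the query embedding would come from a CLIP-style text encoder.

# Minimal sketch of open-vocabulary selection: compare a text embedding
# against per-Super-Gaussian language features and keep the matches.
# Names, threshold, and data are illustrative assumptions only.
import numpy as np


def select_gaussians_by_text(language_features: np.ndarray,
                             text_embedding: np.ndarray,
                             threshold: float = 0.25) -> np.ndarray:
    """Boolean mask over Gaussians whose language feature matches the
    query embedding (cosine similarity above `threshold`)."""
    feats = language_features / (np.linalg.norm(language_features, axis=1,
                                                keepdims=True) + 1e-8)
    query = text_embedding / (np.linalg.norm(text_embedding) + 1e-8)
    similarity = feats @ query          # (N,) cosine similarities
    return similarity > threshold


if __name__ == "__main__":
    # Toy stand-ins: the query would really come from a CLIP-style text
    # encoder, the features from the distilled language field.
    rng = np.random.default_rng(1)
    gaussian_feats = rng.normal(size=(1000, 512))
    query = gaussian_feats[42] + rng.normal(scale=0.1, size=512)  # "matches" Gaussian 42
    mask = select_gaussians_by_text(gaussian_feats, query)
    print(mask.sum(), "Gaussians selected")  # the selected subset can then be rendered

The selected Gaussians can then be rasterized from any viewpoint to produce the query mask.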
Given a visual prompt (e.g., a click) on a 2D image from any viewpoint, SuperGSeg can identify the 3D Gaussians corresponding to the clicked part in 3D space and render that part from any desired viewpoint (cross-frame). Because it learns both hierarchical and instance features, SuperGSeg can also retrieve and render the instance that contains the clicked part (cross-level), and it supports automatic part segmentation rendering.
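
A hedged sketch of the promptable path, assuming the rendered view comes with a per-pixel feature map and that per-Gaussian features and instance labels are available: select_from_click gathers the Gaussians matching the feature under the click, and lift_part_to_instance illustrates the cross-level step. All names and thresholds here are assumptions for illustration, not the paper's implementation.

# Minimal sketch of click-based (promptable) selection and cross-level
# retrieval. Feature maps, labels, and thresholds are assumed for
# illustration only.
import numpy as np


def select_from_click(feature_map: np.ndarray,       # (H, W, D) rendered features
                      gaussian_features: np.ndarray,  # (N, D) per-Gaussian features
                      click_xy: tuple,
                      threshold: float = 0.9) -> np.ndarray:
    """Boolean mask over Gaussians matching the feature under the click."""
    x, y = click_xy
    query = feature_map[y, x]
    query = query / (np.linalg.norm(query) + 1e-8)
    feats = gaussian_features / (np.linalg.norm(gaussian_features, axis=1,
                                                keepdims=True) + 1e-8)
    return feats @ query > threshold


def lift_part_to_instance(part_mask: np.ndarray,
                          instance_labels: np.ndarray) -> np.ndarray:
    """Cross-level retrieval: expand a part-level selection to every
    Gaussian sharing an instance label with the selected part."""
    hit_instances = np.unique(instance_labels[part_mask])
    return np.isin(instance_labels, hit_instances)


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fmap = rng.normal(size=(4, 4, 32))                       # toy rendered feature map
    gfeats = np.vstack([np.tile(fmap[1, 2], (3, 1)),         # 3 Gaussians under the click
                        rng.normal(size=(5, 32))])           # 5 unrelated Gaussians
    labels = np.array([0, 0, 1, 1, 1, 2, 2, 2])              # toy instance labels
    part = select_from_click(fmap, gfeats, click_xy=(2, 1))
    print(part, lift_part_to_instance(part, labels))

Under the same assumptions, promptless instance segmentation simply iterates over all instance labels instead of starting from a single click.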
@misc{liang2024supergsegopenvocabulary3dsegmentation,
title={SuperGSeg: Open-Vocabulary 3D Segmentation with Structured Super-Gaussians},
author={Siyun Liang and Sen Wang and Kunyi Li and Michael Niemeyer and Stefano Gasperini and Nassir Navab and Federico Tombari},
year={2024},
eprint={2412.10231},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.10231},
}