Junbo Li¹, Florian Hahlbohm¹, Timon Scholz¹, Martin Eisemann¹, Jan-Philipp Tauscher¹, Marcus Magnor¹,²

¹Computer Graphics Lab, TU Braunschweig, Germany
²University of New Mexico, USA
We use the OmniBlender and Ricoh360 datasets from EgoNeRF, as well as our own dataset RaR (Roaming and Rounding). Download link: RaR_pano.
Remarks: The above link refers only to the 360-degree panoramic image data; our RaR dataset also provides perspective image data. The full version can be downloaded from our project page. The original image resolution is about 4K, and you can use `mogrify` (ImageMagick) to downscale it, for example:
```bash
mkdir -p your_data_path/images_2  # mogrify requires the output directory to exist
mogrify -quality 100 -resize 50% -path your_data_path/images_2 your_data_path/images/*.png
```
Also note that we used the original images of our RaR dataset for the evaluation in our paper. Because the published data had to be anonymized (e.g., by blurring license plates and faces), the final results might differ slightly from the ones listed in our paper and supplemental material. The new results of SPaGS on the published version of the RaR dataset are listed below:
| Metric | I_alley | I_avenue | I_bridge | I_bypath | I_garden | O_car | O_lion | O_statuary | O_stone | O_windmill | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PSNR | 26.096 | 28.650 | 23.514 | 30.675 | 25.396 | 26.385 | 28.459 | 25.130 | 25.112 | 26.567 | 26.598 |
| SSIM | 0.837 | 0.874 | 0.742 | 0.916 | 0.752 | 0.878 | 0.880 | 0.817 | 0.829 | 0.799 | 0.832 |
| LPIPS | 0.254 | 0.219 | 0.327 | 0.227 | 0.293 | 0.180 | 0.185 | 0.264 | 0.212 | 0.273 | 0.243 |
First follow the instructions to clone and install our NeRFICG framework, then go to the directory your_path/nerficg.
Then clone this repository into the `src/Methods` directory of our framework and install it with:
```bash
./scripts/install.py -m SPaGS
```
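For reference, the complete setup might look like the following sketch; the repository URLs here are assumptions, so use the ones given in the NeRFICG instructions and on this repository's page:

```bash
# Assumed URLs; substitute the official ones from the instructions.
git clone https://github.com/nerficg-project/nerficg.git your_path/nerficg
cd your_path/nerficg
git clone <THIS_REPOSITORY_URL> src/Methods/SPaGS
./scripts/install.py -m SPaGS
```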
Currently, the dataloaders for the RaR and EgoNeRF datasets (OmniBlender and Ricoh360) have to be moved from `src/Methods/SPaGS/dataloader` to the `src/Datasets` directory. You can use the provided script:
```bash
./src/Methods/SPaGS/move_dataloader.sh
```
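If the script is not executable in your environment, moving the loader modules by hand should be equivalent; a minimal sketch, assuming the dataloaders are plain Python files:

```bash
# Hypothetical manual equivalent of move_dataloader.sh.
mv src/Methods/SPaGS/dataloader/*.py src/Datasets/
```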
First, create a configuration file for training (you can also add the `-a` or `--all` flag to create configuration files for the entire dataset at once):
```bash
./scripts/defaultConfig.py -m SPaGS -d <DATASET_TYPE> -o <CONFIG_NAME>
```
Then train a model with the created configuration file:
```bash
./scripts/train.py -c configs/<CONFIG_NAME>.yaml
```
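Putting the two steps together for a single scene might look like this; `RaR` and `SPaGS_RaR_alley` are placeholder names, since the actual dataset type string is defined by the dataloaders moved above:

```bash
# Placeholder dataset type and config name; adjust to your setup.
./scripts/defaultConfig.py -m SPaGS -d RaR -o SPaGS_RaR_alley
./scripts/train.py -c configs/SPaGS_RaR_alley.yaml
```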
Alternatively, train on the entire dataset at once using:
```bash
./scripts/sequentialTrain.py -d configs/<CONFIGS_FOLDER_NAME>
```
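Combined with the `-a` flag from the configuration step, a run over a whole dataset could look like the following sketch; the generated configs folder name is an assumption and depends on your setup:

```bash
# Placeholder names; check which folder defaultConfig.py actually creates.
./scripts/defaultConfig.py -m SPaGS -d RaR -a
./scripts/sequentialTrain.py -d configs/RaR
```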
Remarks (a sketch of the corresponding configuration entries is shown after this list):
- If your GPU has a large amount of VRAM, you can set the `TO_DEVICE` variable to `true` (the default is `false`). This will make training much faster.
- Set `IMAGE_SCALE_FACTOR` to `0.5` to use a resolution close to Full HD; the default is the original image resolution (~4K).
- The near plane is adjusted because some objects in certain scenes of the OmniBlender dataset are extremely close to the virtual camera. The values are `0.05` for the `fisher-hut` scene, `0.01` for the `archiviz-flat`, `barbershop`, `classroom`, `restroom`, and `LOU` scenes, and `0.1` for the rest.

All other parameters are the same as listed in our supplemental material (most of them are already included in the default configuration).
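As a rough sketch, the corresponding entries in a generated configuration file might look as follows; the exact key names and nesting are assumptions, so check the file produced by `defaultConfig.py`:

```yaml
# Hypothetical excerpt of configs/<CONFIG_NAME>.yaml; key names/nesting may differ.
TO_DEVICE: true          # keep training data on the GPU; requires large VRAM
IMAGE_SCALE_FACTOR: 0.5  # downscale the ~4K inputs to roughly Full HD
NEAR_PLANE: 0.05         # e.g., for the fisher-hut scene of OmniBlender
```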
To visualize the trained model, you can use the GUI from our framework:
```bash
./scripts/gui.py
```
Then select the folder that contains the model you want to visualize.
This work is licensed under the MIT license (see LICENSE).
If you use this code for your research, please cite:
```bibtex
@article{li2025spags,
  title   = {{SP}a{GS}: Fast and Accurate 3D Gaussian Splatting for Spherical Panoramas},
  author  = {Li, Junbo and Hahlbohm, Florian and Scholz, Timon and Eisemann, Martin and Tauscher, Jan-Philipp and Magnor, Marcus},
  journal = {Computer Graphics Forum},
  doi     = {10.1111/cgf.70171},
  volume  = {44},
  number  = {4},
  month   = {Jun},
  year    = {2025}
}
```