ZeroGS: Training 3D Gaussian Splatting from Unposed Images

Abstract


Neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS) are popular techniques for reconstructing scenes and rendering photo-realistic images. However, the prerequisite of running Structure-from-Motion (SfM) to obtain camera poses limits their applicability. While previous methods can reconstruct from a few unposed images, they do not apply when the images are unordered or densely captured. In this work, we propose ZeroGS to train 3DGS from hundreds of unposed and unordered images. Our method leverages a pretrained foundation model as the neural scene representation. Since the accuracy of the predicted pointmaps does not suffice for accurate image registration and high-fidelity image rendering, we propose to mitigate this issue by initializing and finetuning the pretrained model from a seed image. Images are then progressively registered and added to the training buffer, which in turn is used to train the model. We also propose to refine the camera poses and pointmaps by minimizing a point-to-camera-ray consistency loss across multiple views. Experiments on the LLFF, MipNeRF360, and Tanks-and-Temples datasets show that our method recovers more accurate camera poses than state-of-the-art pose-free NeRF/3DGS methods, and even renders higher-quality images than 3DGS with COLMAP poses.

Watch ZeroGS reconstructed camera poses


We visualize the incremental reconstruction process of ZeroGS on the MipNeRF360 and Tanks-and-Temples datasets. Use the controls to switch between scenes.

Watch ZeroGS reconstructed scenes


We visualize the reconstructed camera poses and pointmaps of ZeroGS on the MipNeRF360 and Tanks-and-Temples datasets. Use the controls to switch between scenes.

Incremental training pipeline of ZeroGS


Our method follows the classical incremental SfM reconstruction pipeline, with the key difference that the input is no longer a single image but a pair of images drawn from a progressively updated training buffer. The scene regressor network is trained as follows:
(1) Use Spann3R as the scene regressor network to predict 3D Gaussians and pointmaps from a pair of images.
(2) Leverage RANSAC and a PnP solver to obtain initial camera poses from direct 2D-3D correspondences (see the PnP sketch after this list).
(3) Refine the coarse camera poses by minimizing the point-to-camera-ray consistency loss between 3D track points and camera rays (see the loss sketch after this list).
(4) Rasterize the 3D Gaussians with the refined camera poses to render images. An RGB loss is adopted for back-propagating gradients.
(5) After each training epoch, we update the training buffer by registering more images.
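To make step (2) concrete, the sketch below estimates a camera pose from direct 2D-3D correspondences: the pointmap predicted by the scene regressor already assigns a 3D point to each pixel, so no feature matching is needed. It uses OpenCV's solvePnPRansac; the variable names and RANSAC settings are illustrative assumptions, not taken from the ZeroGS code.

```python
# Minimal sketch of step (2): image registration via RANSAC + PnP.
# pts2d/pts3d/K are hypothetical names for this illustration.
import cv2
import numpy as np

def register_image(pts2d: np.ndarray, pts3d: np.ndarray, K: np.ndarray):
    """Estimate a world-to-camera pose from direct 2D-3D correspondences.

    pts2d: (N, 2) pixel coordinates in the query image.
    pts3d: (N, 3) predicted pointmap values at those pixels.
    K:     (3, 3) camera intrinsics.
    Returns (R, t, inliers) with X_cam = R @ X_world + t, or None on failure.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64),
        pts2d.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        iterationsCount=1000,
        reprojectionError=4.0,   # inlier threshold in pixels (assumed value)
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # axis-angle -> rotation matrix
    return R, tvec.ravel(), inliers
```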
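Step (3) can likewise be illustrated with a small PyTorch sketch of a point-to-camera-ray consistency loss: for each observation, it penalizes the perpendicular distance between a 3D track point and the ray cast from the observing camera through the corresponding pixel. The exact weighting and robustification used by ZeroGS may differ; this is a minimal version under those assumptions.

```python
# Hedged sketch of step (3): point-to-camera-ray consistency loss.
import torch

def point_to_ray_loss(points: torch.Tensor,
                      cam_centers: torch.Tensor,
                      ray_dirs: torch.Tensor) -> torch.Tensor:
    """points:      (N, 3) 3D track points, one per observation.
    cam_centers: (N, 3) center of the camera observing each point.
    ray_dirs:    (N, 3) unit ray direction through the observed pixel.
    Returns the mean perpendicular distance from each point to its ray.
    """
    v = points - cam_centers                      # camera center -> point
    t = (v * ray_dirs).sum(dim=-1, keepdim=True)  # projection length onto ray
    perp = v - t * ray_dirs                       # component orthogonal to ray
    return perp.norm(dim=-1).mean()
```

Because the loss is differentiable with respect to both the track points and the quantities defining the rays, its gradients can refine the pointmaps and the camera poses jointly.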

ZeroGS novel view synthesis on LLFF dataset


Side-by-side comparison of CF-3DGS, 3DGS, and ZeroGS renderings.

ZeroGS novel view synthesis on MipNeRF360 dataset


Side-by-side comparison of 3DGS and ZeroGS renderings.

ZeroGS novel view synthesis on Tanks-and-Temples dataset


Side-by-side comparison of 3DGS and ZeroGS renderings.

Please consider citing our paper


@article{yuchen2024zerogs,
  title={ZeroGS: Training 3D Gaussian Splatting from Unposed Images},
  author={Chen, Yu and Potamias, Rolandos Alexandros and Ververas, Evangelos and Song, Jifei and Deng, Jiankang and Lee, Gim Hee},
  journal={arXiv preprint},
  year={2024},
}

Acknowledgement

This work is built upon ACE, DUSt3R, and Spann3R. The project page is based on ACE0. We sincerely thank all the authors for releasing their code.