Deep Gaussian from Motion: Exploring 3D Geometric Foundation Models for Gaussian Splatting
Abstract
Neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS) are popular techniques for reconstructing scenes and rendering photorealistic images. However, the prerequisite of running Structure-from-Motion (SfM) to obtain camera poses limits their completeness: images that SfM fails to register are excluded from the reconstruction. Although previous methods can reconstruct scenes from a few unposed images, they are not applicable when the images are unordered or densely captured. In this work, we propose a method to train 3DGS from unposed images. Our method leverages a pre-trained 3D geometric foundation model as the neural scene representation. Since the predicted pointmaps are not accurate enough for precise image registration and high-fidelity image rendering, we mitigate this issue by initializing and fine-tuning the pre-trained model from a seed image. Images are then progressively registered and added to a training buffer, which is used to further train the model. We also refine the camera poses and pointmaps by minimizing a point-to-camera ray consistency loss across multiple views. Evaluated on diverse challenging datasets, our method outperforms state-of-the-art pose-free NeRF/3DGS methods in both camera pose accuracy and novel view synthesis quality, and even renders higher-fidelity images than 3DGS trained with COLMAP poses.
Watch DeepGfM reconstructed camera poses
We visualize the incremental reconstruction process of DeepGfM on the MipNeRF360 and Tanks-and-Temples datasets. Use the controls to switch between scenes.
Watch DeepGfM reconstructed scenes
We visualize the reconstructed camera poses and pointmaps of DeepGfM on the MipNeRF360 and Tanks-and-Temples datasets. Use the controls to switch between scenes.
Incremental training pipeline of DeepGfM
Our method follows the classical incremental SfM reconstruction pipeline, with the key difference that the input is no longer a single image but a pair of images drawn from a progressively updated training buffer. The scene regressor network is trained as follows:
(1) Use Spann3R as the scene regressor network to predict 3D Gaussians and pointmaps from a pair of images.
(2) Leverage RANSAC and a PnP solver to obtain initial camera poses from the direct 2D-3D correspondences (see the first sketch below).
(3) Refine the coarse camera poses by minimizing a point-to-ray consistency loss between the 3D tracks and the viewing rays from the camera centers (see the second sketch below).
(4) Rasterize the 3D Gaussians with the refined camera poses to render images; an RGB loss is used to back-propagate gradients.
(5) After each training epoch, update the training buffer by registering more images.
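To make step (2) concrete, the snippet below is a minimal sketch of coarse pose estimation from a predicted pointmap using OpenCV's RANSAC-PnP solver: every pixel paired with its predicted 3D point forms a direct 2D-3D correspondence. The function name, intrinsics `K`, confidence mask, and thresholds are illustrative assumptions, not the exact solver configuration used in DeepGfM.

```python
import cv2
import numpy as np

def initial_pose_from_pointmap(pointmap, K, mask=None):
    """Estimate a coarse world-to-camera pose from a predicted pointmap (sketch).

    pointmap: (H, W, 3) array of 3D points in world coordinates, one per pixel,
              as predicted by the scene regressor (e.g. Spann3R).
    K:        (3, 3) camera intrinsics.
    mask:     optional (H, W) boolean array of confident predictions.
    """
    H, W, _ = pointmap.shape
    # Pixel coordinates (u, v) paired with their predicted 3D points
    # give direct 2D-3D correspondences.
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pts2d = np.stack([us, vs], axis=-1).reshape(-1, 2).astype(np.float64)
    pts3d = pointmap.reshape(-1, 3).astype(np.float64)
    if mask is not None:
        keep = mask.reshape(-1)
        pts2d, pts3d = pts2d[keep], pts3d[keep]

    # RANSAC + PnP recovers the pose that best explains the correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, None,
        iterationsCount=1000, reprojectionError=3.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)       # world-to-camera rotation
    return R, tvec.reshape(3)        # x_cam = R @ x_world + t
```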
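For step (3), one way to read the point-to-ray consistency term is as the perpendicular distance from each 3D track point to the viewing rays along which it is observed, averaged over views. The PyTorch sketch below uses assumed tensor names and shapes and omits any confidence weighting or robust kernel that the actual loss may use.

```python
import torch

def point_to_ray_loss(tracks, cam_centers, ray_dirs):
    """Point-to-camera-ray consistency loss (illustrative sketch).

    tracks:      (N, 3) 3D track points shared across views.
    cam_centers: (V, N, 3) camera center of each view observing each track.
    ray_dirs:    (V, N, 3) unit viewing rays from the camera centers through
                 the pixels where the tracks are observed.
    """
    v = tracks.unsqueeze(0) - cam_centers                    # point minus camera center
    proj = (v * ray_dirs).sum(-1, keepdim=True) * ray_dirs   # component along the ray
    perp = v - proj                                          # perpendicular residual
    return perp.norm(dim=-1).mean()
```

Driving this residual to zero makes the refined camera poses and pointmaps geometrically consistent across the views that observe each track.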
DeepGfM novel view synthesis on LLFF dataset
DeepGfM novel view synthesis on MipNeRF360 dataset
DeepGfM novel view synthesis on Tanks-and-Temples dataset
@inproceedings{yuchen2024zerogs,
title={Deep Gaussian from Motion: Exploring 3D Geometric Foundation Models for Gaussian Splatting},
author={Yu Chen and Rolandos Alexandros Potamias and Evangelos Ververas and Jifei Song and Jiankang Deng and Gim Hee Lee},
booktitle={arXiv},
year={2024},
}
This work is built upon ACE, DUSt3R, and Spann3R. The project page is based on ACE0. We sincerely thank all the authors for releasing their code.