Recent advances in 3D Gaussian Splatting (3DGS) show promising results on the novel view synthesis (NVS) task. With its superior rendering speed and high-fidelity rendering quality, 3DGS outperforms its NeRF counterparts. Most recent 3DGS methods focus either on improving rendering efficiency or on reducing the model size, while the training efficiency of 3DGS on large-scale scenes has received little attention. In this work, we propose DoGaussian, a method that trains 3DGS distributedly. Our method first decomposes a scene into K blocks and then introduces the Alternating Direction Method of Multipliers (ADMM) into the training procedure of 3DGS. During training, DoGaussian maintains one global 3DGS model on the master node and K local 3DGS models on the slave nodes. The K local models are discarded after training, and only the global 3DGS model is queried during inference. Scene decomposition reduces the training time, while consensus on the shared 3D Gaussians guarantees training convergence and stability. Our method accelerates the training of 3DGS by more than 6x on large-scale scenes while concurrently achieving state-of-the-art rendering quality.
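For readers unfamiliar with consensus ADMM, the block below sketches the standard scaled-form updates that such a scheme builds on. The notation (block loss f_k, local copies x_k, global Gaussians z, scaled duals u_k, penalty rho) is ours for illustration and is not taken verbatim from the paper.

```latex
% Standard scaled-form consensus ADMM (illustrative notation):
% f_k is the 3DGS loss on block k, x_k the block-local copy of the shared
% Gaussians, z the global Gaussians, u_k the scaled dual, rho the penalty.
\begin{aligned}
  &\min_{\{x_k\},\, z}\; \sum_{k=1}^{K} f_k(x_k)
    \quad \text{s.t.}\quad x_k = z,\; k = 1,\dots,K,\\[4pt]
  &x_k^{t+1} = \arg\min_{x_k}\; f_k(x_k)
    + \tfrac{\rho}{2}\,\lVert x_k - z^{t} + u_k^{t}\rVert_2^2,\\
  &z^{t+1} = \tfrac{1}{K}\sum_{k=1}^{K}\bigl(x_k^{t+1} + u_k^{t}\bigr),\\
  &u_k^{t+1} = u_k^{t} + x_k^{t+1} - z^{t+1}.
\end{aligned}
```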
We visualize the reconstruction results of DoGaussian on the Mill-19 and UrbanScene3D datasets. The results are recorded in a browser on a MacBook (M1 chip with 8GB memory) with freely moving camera trajectories that differ significantly from the training views. Use the controls to switch between scenes.
We visualize the reconstructed 3D Gaussian primitives of DoGaussian on the Mill-19 and UrbanScene3D datasets. The results are recorded in a browser on a MacBook (M1 chip with 8GB memory) with freely moving camera trajectories that differ significantly from the training views. Use the controls to switch between scenes.
(1) We first split the scene into K blocks of similar size. Each block is then expanded to create overlapping regions with its neighbors.
(2) Subsequently, we assign views and points to the different blocks. The shared local 3D Gaussians (connected by solid lines in the figure) are copies of the global 3D Gaussians.
(3) The local 3D Gaussians are then collected and averaged into the global 3D Gaussians at each consensus step, and the updated global 3D Gaussians are shared back with each block before the next round of block training (a sketch of this consensus step follows the list).
(4) After training, we use the final global 3D Gaussians to synthesize novel views.
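As a concrete illustration of step (3), here is a minimal sketch of one consensus update over the shared 3D Gaussians, assuming each block's shared Gaussians are packed into a tensor aligned by global Gaussian ID. The function and variable names (consensus_step, x_local, u_dual, rho) are hypothetical and not part of the released code.

```python
# Minimal sketch of one ADMM consensus step over shared 3D Gaussians.
# Assumption: each block k holds a tensor of shared-Gaussian parameters
# (e.g. concatenated positions/scales/colors) aligned by global Gaussian ID.
import torch

def consensus_step(x_local, u_dual):
    """One consensus update in scaled ADMM form.

    x_local: list of K tensors, block-local copies of the shared Gaussians.
    u_dual:  list of K tensors, scaled dual variables (same shapes as x_local).
    Returns the new global estimate z and the updated duals.
    """
    K = len(x_local)
    # Global update: average the local copies plus their duals over all blocks.
    z = sum(x_k + u_k for x_k, u_k in zip(x_local, u_dual)) / K
    # Dual update: accumulate the residual between each local copy and z.
    u_new = [u_k + x_k - z for x_k, u_k in zip(x_local, u_dual)]
    return z, u_new

# Illustrative usage: after each local training round on the slave nodes,
# the master averages the shared Gaussians and broadcasts z back to every
# block, whose local loss gains a proximal term (rho/2) * ||x_k - z + u_k||^2.
K, n_shared, dim = 4, 1000, 59  # hypothetical sizes
x_local = [torch.randn(n_shared, dim) for _ in range(K)]
u_dual = [torch.zeros(n_shared, dim) for _ in range(K)]
z, u_dual = consensus_step(x_local, u_dual)
```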
@article{yuchen2024dogaussian,
  title   = {DoGaussian: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus},
  author  = {Chen, Yu and Lee, Gim Hee},
  journal = {arXiv preprint},
  year    = {2024},
}