Rip-NeRF: Anti-aliasing Radiance Fields with Ripmap-Encoded Platonic Solids

SIGGRAPH 2024

Junchen Liu*2, Wenbo Hu*3, Zhuo Yang*4, Jianteng Chen4, Guoliang Wang1,
Xiaoxue Chen1, Yantong Cai5,6, Huan-ang Gao1, Hao Zhao†1
1Institute for AI Industry Research (AIR), Tsinghua University 2Beihang University
3Tencent AI Lab 4Beijing Institute of Technology 5Dermatology Hospital 6Southern Medical University
Qualitative comparison of full-resolution and 1/8-resolution renderings (novel-view images and error maps) of Zip-NeRF (left), Rip-NeRF (middle), and Tri-MipRF (right) on the multi-scale Blender dataset.

Qualitative and quantitative results of our Rip-NeRF and several representative baseline methods, e.g., Zip-NeRF and Tri-MipRF. Rip-NeRF25k is a variant of Rip-NeRF that reduces the training iterations from 120k to 25k for better efficiency. The first and second rows in the left panel show results on the multi-scale Blender dataset and our newly captured real-world dataset, respectively. Our Rip-NeRF renders high-fidelity, aliasing-free images from novel viewpoints while maintaining efficiency.

Abstract

Despite significant advancements in Neural Radiance Fields (NeRFs), the renderings may still suffer from aliasing and blurring artifacts, since it remains a fundamental challenge to effectively and efficiently characterize the anisotropic areas induced by the cone-casting procedure. This paper introduces a Ripmap-Encoded Platonic Solid representation to precisely and efficiently featurize 3D anisotropic areas, achieving high-fidelity, aliasing-free renderings. Central to our approach are two key components: Platonic Solid Projection and Ripmap encoding. The Platonic Solid Projection factorizes 3D space onto the unparalleled faces of a certain Platonic solid, so that anisotropic 3D areas can be projected onto planes with distinguishable characterization. Meanwhile, each face of the Platonic solid is encoded by the Ripmap encoding, which is constructed by anisotropically pre-filtering a learnable feature grid, enabling precise and efficient featurization of the projected anisotropic areas via anisotropic area-sampling. Extensive experiments on both well-established synthetic datasets and a newly captured real-world dataset demonstrate that our Rip-NeRF attains state-of-the-art rendering quality, particularly excelling in the fine details of repetitive structures and textures, while maintaining relatively swift training times.
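For intuition, below is a minimal NumPy sketch of the anisotropic pre-filtering that turns a base 2D feature grid into a ripmap. Unlike a mipmap, which downsamples both axes together, a ripmap stores every combination of per-axis downsampling factors, so anisotropic footprints can be area-sampled directly. The grid size, channel count, and average-pooling filter here are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def build_ripmap(base_grid: np.ndarray) -> dict[tuple[int, int], np.ndarray]:
    """base_grid: (H, W, C) feature grid with H and W powers of two.
    Returns a dict mapping (level_y, level_x) -> the grid downsampled by
    2**level_y along y and 2**level_x along x via average pooling."""
    H, W, C = base_grid.shape
    ripmap = {}
    for ly in range(int(np.log2(H)) + 1):
        for lx in range(int(np.log2(W)) + 1):
            sy, sx = 2 ** ly, 2 ** lx
            # Average-pool each (sy, sx) block: an anisotropic pre-filter.
            pooled = base_grid.reshape(H // sy, sy, W // sx, sx, C).mean(axis=(1, 3))
            ripmap[(ly, lx)] = pooled
    return ripmap

grid = np.random.randn(64, 64, 8).astype(np.float32)  # learnable in practice
pyramid = build_ripmap(grid)
print(pyramid[(0, 0)].shape, pyramid[(2, 5)].shape)  # (64, 64, 8) (16, 2, 8)

Because every level is pre-computed once, an anisotropic footprint at query time costs only a constant number of lookups rather than an on-the-fly integration over the footprint.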

Video

Method

To render a pixel, we first cast a cone through it and divide the cone into multiple conical frustums, each characterized by an anisotropic 3D Gaussian parameterized by its mean and covariance (𝝁, 𝚺). Next, to featurize a 3D Gaussian, we project it onto the unparalleled faces of the Platonic solid, yielding a 2D Gaussian (𝝁proj, 𝚺proj) per face, where each face is represented by the Ripmap Encoding with learnable parameters. We then perform tetra-linear interpolation on the Ripmap Encoding to query the feature vector for each 2D Gaussian: the interpolation position is determined by the Gaussian's mean and the interpolation level by its covariance. Finally, the feature vectors from all faces of the Platonic solid, together with the encoded view direction, are aggregated and fed to a tiny MLP that estimates the color and density of each conical frustum.
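The sketch below illustrates one such query for a single face under stated assumptions: a hypothetical axis-aligned projection matrix P stands in for the Platonic Solid Projection (the real faces are not axis-aligned), the per-axis levels are a simple log2-of-standard-deviation heuristic, and the feature is blended tetra-linearly, i.e. bilinearly in position on each of the four bracketing ripmap levels, then linearly across the two level axes. Function names, units, and shapes are illustrative, not the paper's exact formulation.

import numpy as np

def bilinear(grid, uv):
    """Bilinearly sample an (H, W, C) grid at continuous uv in [0, 1]^2."""
    H, W, _ = grid.shape
    x = np.clip(uv[0] * (W - 1), 0, W - 1)
    y = np.clip(uv[1] * (H - 1), 0, H - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0, x0] * (1 - fx) + grid[y0, x1] * fx
    bot = grid[y1, x0] * (1 - fx) + grid[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def query_ripmap(ripmap, mu2d, Sigma2d, n_levels):
    """Tetra-linear lookup: position comes from the 2D Gaussian's mean,
    fractional per-axis levels from its covariance; blend the four
    surrounding (level_y, level_x) grids, sampling each bilinearly."""
    sx, sy = np.sqrt(np.maximum(np.diag(Sigma2d), 1e-12))  # std. dev. in texels
    lx = float(np.clip(np.log2(sx), 0.0, n_levels - 1))
    ly = float(np.clip(np.log2(sy), 0.0, n_levels - 1))
    x0, y0 = min(int(lx), n_levels - 2), min(int(ly), n_levels - 2)
    fx, fy = lx - x0, ly - y0
    feat = 0.0
    for dy, wy in ((0, 1.0 - fy), (1, fy)):
        for dx, wx in ((0, 1.0 - fx), (1, fx)):
            feat = feat + wy * wx * bilinear(ripmap[(y0 + dy, x0 + dx)], mu2d)
    return feat

# Toy ripmap: 7x7 anisotropic levels of a 64x64, 8-channel base grid
# (random here; learnable in the real model).
rng = np.random.default_rng(0)
ripmap = {(ly, lx): rng.standard_normal((64 >> ly, 64 >> lx, 8)).astype(np.float32)
          for ly in range(7) for lx in range(7)}

# One conical-frustum Gaussian, projected with the axis-aligned stand-in P.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
mu3d = np.array([0.5, 0.5, 0.3])      # mean, in normalized face coordinates
Sigma3d = np.diag([16.0, 0.25, 1.0])  # anisotropic covariance (texel^2)
mu2d, Sigma2d = P @ mu3d, P @ Sigma3d @ P.T
print(query_ripmap(ripmap, mu2d, Sigma2d, n_levels=7).shape)  # (8,)

In a full renderer this query would run for every frustum on every face, and the concatenated per-face features, plus the encoded view direction, would feed the tiny MLP that predicts color and density.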

Results

Qualitative comparison of the full-resolution renderings on the multi-scale Blender dataset.

[Interactive image-comparison sliders: Ours vs. Zip-NeRF and Ours vs. Tri-MipRF at full resolution, and Ours (1/8) vs. Zip-NeRF (1/8) and Ours (1/8) vs. Tri-MipRF (1/8).]

BibTeX


@inproceedings{liu2024ripnerf,
  title={Rip-NeRF: Anti-aliasing Radiance Fields with Ripmap-Encoded Platonic Solids},
  author={Liu, Junchen and Hu, Wenbo and Yang, Zhuo and Chen, Jianteng and Wang, Guoliang and Chen, Xiaoxue and Cai, Yantong and Gao, Huan-ang and Zhao, Hao},
  year={2024},
  booktitle={SIGGRAPH'24 Conference Proceedings}
}