Creating 3D models provides a more immersive and realistic representation of a scene than 2D images. 3D models let viewers explore and interact with a scene from different angles, giving a better understanding of its spatial layout and depth. They are also fundamental to virtual reality (VR) and augmented reality (AR) applications, enabling the overlay of digital information onto the real world (AR) or the creation of entirely virtual environments (VR), and enhancing user experiences in gaming, education, training, and many industries.
Neural Radiance Fields (NeRF) is a computer vision technique for 3D scene reconstruction and rendering. A NeRF treats a scene as a continuous 3D volume in which every point has a color (radiance) and a density. A neural network learns to predict the color and density at each point from 2D images of the scene captured from different viewpoints.
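To make this concrete, here is a minimal sketch of a NeRF-style network in PyTorch. The layer sizes are illustrative assumptions, and details such as positional encoding are omitted; it only shows the core idea of mapping a 3D point and a viewing direction to a color and a density.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy NeRF-style MLP: maps a 3D point (and view direction) to (RGB, density)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)   # density depends on position only
        self.color_head = nn.Sequential(           # color also depends on view direction
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.backbone(xyz)
        sigma = torch.relu(self.density_head(h))   # non-negative density
        rgb = self.color_head(torch.cat([h, view_dir], dim=-1))
        return rgb, sigma

# Query the field at a batch of sample points along camera rays.
model = TinyNeRF()
points = torch.rand(1024, 3)                      # 3D sample positions
dirs = torch.randn(1024, 3)
dirs = dirs / dirs.norm(dim=-1, keepdim=True)     # unit view directions
rgb, sigma = model(points, dirs)
```

During training, the predicted colors and densities are volume-rendered along camera rays and compared against the pixels of the input 2D images.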
NeRFs have many applications, such as novel view synthesis and depth estimation, but learning from multi-view images carries inherent uncertainty. Existing methods for quantifying that uncertainty are either heuristic or computationally demanding. Researchers at Google DeepMind, Adobe Research, and the University of Toronto have introduced a new technique called BayesRays to address this.
BayesRays is a framework for evaluating uncertainty in any pretrained NeRF without modifying its training process. By building a volumetric uncertainty field from spatial perturbations and a Bayesian Laplace approximation, the authors quantify how well constrained each region of the reconstruction is. The Laplace approximation is a mathematical method for approximating a complex probability distribution with a simpler multivariate Gaussian centered at its mode.
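The Laplace approximation itself is standard. The following self-contained sketch (not the BayesRays implementation, and using a toy one-parameter distribution) shows the recipe: take the mode of the posterior as the Gaussian mean and the inverse Hessian of the negative log posterior at the mode as the covariance.

```python
import numpy as np
from scipy.optimize import minimize

# Toy target: an (unnormalized) negative log posterior over one parameter.
# In BayesRays, the parameters would instead be the spatial perturbations
# applied to the NeRF's coordinates.
def neg_log_post(theta):
    return 0.5 * theta[0] ** 2 + 0.1 * theta[0] ** 4

# 1. Find the mode (MAP estimate) of the posterior.
res = minimize(neg_log_post, x0=np.array([1.0]))
theta_map = res.x

# 2. Hessian of the negative log posterior at the mode (finite differences).
eps = 1e-4
hess = (neg_log_post(theta_map + eps) - 2 * neg_log_post(theta_map)
        + neg_log_post(theta_map - eps)) / eps ** 2

# 3. Laplace approximation: posterior ~ N(theta_map, H^{-1}).
mean, variance = theta_map[0], 1.0 / hess
print(f"Gaussian approximation: mean={mean:.4f}, variance={variance:.4f}")
```

A large Hessian (high curvature) yields a small variance, i.e., a parameter that is well constrained by the data; a flat posterior yields high uncertainty.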
The resulting uncertainties are statistically meaningful and can be rendered as an additional color channel. The method also outperforms previous work on key metrics, such as correlation with reconstructed depth error. Because it is a plug-and-play probabilistic approach, it can quantify the uncertainty of any pretrained NeRF regardless of architecture, and thresholding the computed uncertainty field makes it possible to remove reconstruction artifacts from pretrained NeRFs in real time.
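A hedged sketch of what such thresholding could look like follows; the function and tensor names are hypothetical stand-ins, not the authors' API. The idea is simply to suppress the density of sample points whose uncertainty exceeds a chosen cutoff so they contribute nothing when volume-rendered.

```python
import torch

def filter_by_uncertainty(rgb, sigma, uncertainty, threshold=0.5):
    """Zero out the density of sample points whose uncertainty exceeds a threshold.

    rgb:         (N, 3) predicted colors
    sigma:       (N, 1) predicted densities
    uncertainty: (N, 1) per-point uncertainty from a volumetric uncertainty field
    """
    keep = (uncertainty < threshold).float()
    return rgb, sigma * keep   # suppressed points vanish from the rendering

# Toy usage with random tensors standing in for real NeRF outputs.
rgb = torch.rand(1024, 3)
sigma = torch.rand(1024, 1)
uncertainty = torch.rand(1024, 1)
clean_rgb, clean_sigma = filter_by_uncertainty(rgb, sigma, uncertainty, threshold=0.7)
```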
The authors say the intuition behind the method comes from modeling 3D scenes with volumetric fields: volumetric deformation fields are commonly used to manipulate implicitly represented objects. The approach is also analogous to photogrammetry, where reconstruction uncertainty is often modeled by placing Gaussian distributions over estimated spatial positions.
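As a rough illustration of the kind of spatial perturbation this builds on (again a hypothetical sketch, not the paper's code), one can displace the query coordinates with a small deformation field before evaluating the NeRF:

```python
import torch

def perturbed_query(nerf_fn, xyz, view_dir, displacement_field):
    """Evaluate a NeRF at spatially perturbed coordinates.

    nerf_fn:            callable (xyz, view_dir) -> (rgb, sigma)
    displacement_field: callable xyz -> small 3D offsets; these are the
                        perturbation parameters whose posterior gets
                        approximated with a Gaussian
    """
    offsets = displacement_field(xyz)
    return nerf_fn(xyz + offsets, view_dir)

# Toy stand-ins for a real field and a zero-mean random displacement.
nerf_fn = lambda p, d: (torch.sigmoid(p), torch.relu(p[:, :1]))
displacement_field = lambda p: 0.01 * torch.randn_like(p)
xyz = torch.rand(8, 3)
view_dir = torch.randn(8, 3)
rgb, sigma = perturbed_query(nerf_fn, xyz, view_dir, displacement_field)
```

Intuitively, regions where the reconstruction barely changes under such perturbations are poorly constrained by the input images and therefore uncertain.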
Finally, the authors note that the algorithm is limited to quantifying the uncertainty of NeRFs and does not translate trivially to other frameworks. As future work, they plan a similar deformation-based Laplace approximation for more recent spatial representations such as 3D Gaussian splatting.
Check out the Paper and Project. All credit for this research goes to the researchers on this project.