Waymo researchers have developed Block-NeRF, a variant of Neural Radiance Fields that can represent large-scale environments. The work demonstrates that when scaling NeRF to render city-scale scenes spanning multiple blocks, it’s crucial to decompose the scene into individually trained NeRFs. This decomposition decouples rendering time from scene size, enabling rendering to scale to arbitrarily large environments and allowing per-block updates of the environment.
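To make the decomposition idea concrete, here is a minimal sketch of how block selection and blending could work. The class and function names, coverage radii, and inverse-distance weighting exponent are illustrative assumptions, not Waymo's actual implementation:

```python
import math

class BlockNeRF:
    """Hypothetical stand-in for one individually trained block NeRF.

    Each block covers a circular region around its training origin; only
    blocks whose region contains the camera need to be rendered."""
    def __init__(self, origin, radius):
        self.origin = origin  # (x, y) block center in meters
        self.radius = radius  # coverage radius in meters

    def covers(self, cam_xy):
        dx = cam_xy[0] - self.origin[0]
        dy = cam_xy[1] - self.origin[1]
        return math.hypot(dx, dy) <= self.radius

def select_blocks(blocks, cam_xy):
    # Rendering cost depends only on the handful of blocks near the
    # camera, not on the total scene size -- this is what decouples
    # rendering time from the size of the environment.
    return [b for b in blocks if b.covers(cam_xy)]

def merge_weights(blocks, cam_xy, p=2):
    # Illustrative inverse-distance weights for blending the images
    # rendered by each visible block; weights sum to 1.
    inv = [1.0 / (math.hypot(cam_xy[0] - b.origin[0],
                             cam_xy[1] - b.origin[1]) + 1e-6) ** p
           for b in blocks]
    total = sum(inv)
    return [w / total for w in inv]
```

Because each block is trained independently, updating one city block only requires retraining the NeRFs whose coverage regions intersect it.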
The team made several architectural changes to keep NeRF robust to data captured over months under different environmental conditions. These include adding appearance embeddings, learned pose refinement, and exposure conditioning to each individual NeRF, along with a procedure for aligning appearance between adjacent NeRFs so that they can be seamlessly combined.
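The conditioning idea can be sketched as extra inputs to the color branch of the network. This is a toy single-layer stand-in, not the real model; the feature dimensions, weight matrix, and function names are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes, chosen only for this example.
POS_DIM, DIR_DIM = 63, 27   # positionally encoded position / view direction
APPEARANCE_DIM = 32         # learned per-image appearance embedding
EXPOSURE_DIM = 8            # encoded camera exposure

# Toy weight matrix standing in for the color branch of the MLP.
W = rng.normal(size=(POS_DIM + DIR_DIM + APPEARANCE_DIM + EXPOSURE_DIM, 3))

def color_branch(pos_feat, dir_feat, appearance, exposure):
    # The color head is conditioned on the appearance embedding and the
    # exposure code in addition to position and view direction, so
    # lighting and weather changes across capture sessions can be
    # absorbed by these inputs rather than corrupting the geometry.
    x = np.concatenate([pos_feat, dir_feat, appearance, exposure])
    return 1.0 / (1.0 + np.exp(-x @ W))  # RGB in [0, 1]
```

At render time, interpolating the appearance embeddings of adjacent blocks is one way their appearance can be matched before the outputs are blended.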
“We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco,” said the researchers.