
Let’s face it: neural graphics primitives, parameterized by fully connected neural networks, are typically very costly to train and evaluate. This cost can be reduced with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, significantly reducing the number of floating-point and memory-access operations. All that is required is a small neural network augmented by a multi-resolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent.
This multi-resolution structure enables the network to disambiguate hash collisions, making for a simple architecture that is easy to parallelize on modern GPUs. The parallelism is leveraged by implementing the whole system with fully fused NVIDIA CUDA kernels, with a focus on minimizing wasted bandwidth and compute. The result is a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives nearly instantly and rendering in tens of milliseconds at a resolution of 1920×1080.
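To make the encoding concrete, here is a minimal NumPy sketch of the multi-resolution hash lookup: each level maps a 3D point to a voxel at that level's resolution, hashes the voxel's corners into a trainable feature table, and trilinearly interpolates the retrieved vectors; the concatenated per-level features become the input to the small MLP. All parameter values (level count, table size, resolutions) are illustrative, and for simplicity it hashes at every level, whereas the actual fully fused CUDA implementation indexes coarse grids directly and hashes only where the grid outgrows the table.

```python
# Illustrative sketch of a multi-resolution hash encoding, not the authors' code.
import numpy as np

NUM_LEVELS = 8          # number of resolution levels (illustrative)
FEATURES_PER_LEVEL = 2  # feature vector width per level
TABLE_SIZE = 2 ** 14    # hash table entries per level
BASE_RES, MAX_RES = 16, 512

# Per-level resolutions form a geometric series between BASE_RES and MAX_RES.
b = np.exp((np.log(MAX_RES) - np.log(BASE_RES)) / (NUM_LEVELS - 1))
resolutions = np.floor(BASE_RES * b ** np.arange(NUM_LEVELS)).astype(np.int64)

# One trainable table per level; in training these are optimized by SGD
# alongside the small MLP that consumes the encoding.
rng = np.random.default_rng(0)
tables = rng.uniform(-1e-4, 1e-4, (NUM_LEVELS, TABLE_SIZE, FEATURES_PER_LEVEL))

def spatial_hash(coords):
    """XOR-based spatial hash of integer 3D grid coordinates."""
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= coords[..., d].astype(np.uint64) * primes[d]
    return (h % TABLE_SIZE).astype(np.int64)

def encode(x):
    """Encode a point x in [0,1]^3 into NUM_LEVELS * FEATURES_PER_LEVEL features."""
    corners = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    feats = []
    for level, res in enumerate(resolutions):
        pos = x * res
        lo = np.floor(pos).astype(np.int64)
        w = pos - lo                              # trilinear interpolation weights
        feat = np.zeros(FEATURES_PER_LEVEL)
        for c in corners:
            idx = spatial_hash(lo + c)            # collisions are tolerated at fine levels
            weight = np.prod(np.where(c == 1, w, 1.0 - w))
            feat += weight * tables[level, idx]
        feats.append(feat)
    return np.concatenate(feats)                  # input to the small MLP

print(encode(np.array([0.3, 0.7, 0.1])).shape)    # (16,)
```

Because every level hashes the same point at a different resolution, two points that collide in one table rarely collide in all of them, which is what lets the downstream MLP disambiguate collisions during training.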


