[Paper](https://arxiv.org/abs/2104.13562)

PyTorch implementation of Neural Ray-Tracing for reconstructing scenes under known, dynamic lighting conditions.
To run the code, use any of the following commands:

```
make nerv_point
make dtu_diffuse
make dtu_diffuse_lit
```
Dependencies:

- torch 1.8
- tqdm
- numpy
- matplotlib
- imageio

Optional:

- pytorch_msssim
Neural Ray-Tracing is an extension of NeRF and VolSDF that enables efficient ray marching, so that dynamic lighting conditions can be rendered. It adds a network that models lighting as a function of position and viewing direction, and learns accurate surfaces so that the resulting SDF can be ray-marched quickly.
This allows the model to be trained under known lighting conditions and then generalize immediately to new ones.
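To illustrate the surface step above, here is a minimal sketch of sphere-tracing an SDF in PyTorch: each ray advances by the distance the SDF reports, which is safe by construction. This is not the repo's actual implementation; `unit_sphere` is a toy analytic SDF standing in for the learned network, and the step count and tolerance are illustrative choices.

```python
import torch

def sphere_trace(sdf, origins, dirs, n_steps=64, eps=1e-4):
    # origins, dirs: (N, 3) ray origins and unit directions.
    # Advance each ray by the SDF value at its current point;
    # a ray stops once it is within eps of the surface.
    t = torch.zeros(origins.shape[0], 1)
    hit = torch.zeros(origins.shape[0], dtype=torch.bool)
    for _ in range(n_steps):
        pts = origins + t * dirs
        d = sdf(pts)
        hit = hit | (d.squeeze(-1).abs() < eps)
        # Frozen rays (already hit) take a zero step.
        t = t + torch.where(hit.unsqueeze(-1), torch.zeros_like(d), d)
    return origins + t * dirs, hit

# Toy SDF: unit sphere at the origin.
unit_sphere = lambda p: p.norm(dim=-1, keepdim=True) - 1.0
```

With a learned SDF in place of `unit_sphere`, the returned hit points are where the lighting network would then be queried.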
Our new collocated light dataset can be found at this Google Drive.
In order to get the NeRV dataset, please contact the NeRV authors.
For the DTU dataset, you can run the script here.
- Collocated NeRF Dataset
We re-render the NeRF dataset with collocated point lights, and show that we are better able to distinguish shadows and other lighting-dependent effects.
- NeRV
We reconstruct NeRV's point light dataset, showing that collocated lights are not necessary for reconstruction.
- DTU (Recovery & Relighting)
We also show that lighting conditions need not be known at all if we are only interested in reconstructing an object: lighting is learned jointly with the BRDF and surface. We demonstrate that our method learns accurate lighting by then relighting scenes with a single point light.
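A single point light, as used in the relighting experiment above, can be sketched with Lambertian shading and inverse-square falloff. This is a hypothetical illustration, not the paper's learned BRDF: `point_light_shade` and its arguments are names chosen here for clarity.

```python
import torch

def point_light_shade(pts, normals, albedo, light_pos, intensity=1.0):
    # Diffuse shading under one point light:
    #   L = intensity * albedo * max(n . l, 0) / r^2
    to_light = light_pos - pts              # vectors from surface to light
    r2 = (to_light ** 2).sum(-1, keepdim=True)  # squared distance
    l = to_light / r2.sqrt()                # unit light direction
    ndotl = (normals * l).sum(-1, keepdim=True).clamp(min=0.0)
    return intensity * albedo * ndotl / r2
```

In the collocated setting, `light_pos` coincides with the camera origin, so the light direction equals the negated viewing direction.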
```
@misc{knodt2021neural,
  title={Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis},
  author={Julian Knodt and Joe Bartusek and Seung-Hwan Baek and Felix Heide},
  year={2021},
  eprint={2104.13562},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```