
Neural Reflectance for Shape Recovery with Shadow Handling

Junxuan Li and Hongdong Li. CVPR 2022 (Oral Presentation).

We propose a method for photometric stereo that:

  • Formulates shape estimation and material estimation in a self-supervised framework that explicitly predicts shadows to mitigate estimation errors.
  • Achieves state-of-the-art performance in surface normal estimation and is an order of magnitude faster than previous methods.
  • Is suitable for AR/VR applications such as object relighting and material editing.

Keywords: Shape estimation, BRDF estimation, inverse rendering, unsupervised learning, shadow estimation.

Our object intrinsic decomposition

Our object relighting

Our material relighting


Dependencies

First, make sure that all dependencies are in place. We use Anaconda to install them.

To create an Anaconda environment called neural_reflectance, run

conda env create -f environment.yml
conda activate neural_reflectance
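
As a quick sanity check that the environment was created correctly, you can run a short Python snippet like the one below. This is only a minimal sketch; it assumes PyTorch is among the dependencies installed by environment.yml (the code builds on nerf-pytorch, so this should hold), and it simply reports the installed version and whether CUDA is visible.

# Minimal environment check (assumes PyTorch was installed via environment.yml).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())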

Quick Test on DiLiGenT main dataset

Our method is tested on the DiLiGenT main dataset.

To reproduce the results in the paper, we have provided pre-computed models for quick testing. Simply run

bash configs/download_precomputed_models.sh
bash configs/test_precomputed_models.sh

The above scripts should create output folders in runs/paper_config/diligent/. The results are then available in runs/paper_config/diligent/*/est_normal.png for visualization.
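
To browse the estimated normal maps programmatically, a short script like the one below can be used. This is a minimal sketch, assuming imageio and matplotlib are available in the environment; it simply globs the est_normal.png files produced by the test script and displays them.

import glob
import imageio.v2 as imageio
import matplotlib.pyplot as plt

# Show each estimated normal map produced by the precomputed-model test run.
for path in sorted(glob.glob("runs/paper_config/diligent/*/est_normal.png")):
    obj_name = path.split("/")[-2]  # run folder name, e.g. the object name
    plt.figure(obj_name)
    plt.imshow(imageio.imread(path))
    plt.title(obj_name)
    plt.axis("off")
plt.show()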

Train from Scratch

DiLiGenT Datasets

First, download the DiLiGenT main dataset and unzip the data to the folder data/DiLiGenT/.

After you have downloaded the data, run

python train.py --config configs/diligent/reading.yml

to test on each object. You can replace configs/diligent/reading.yml with other yml files to test on other objects.

Alternatively, you can run

bash configs/train_from_scratch.sh

This script trains and tests all 10 objects in the data/DiLiGenT/pmsData/* folders, and the outputs are stored in runs/paper_config/diligent/*.

Gourd&Apple dataset

The Gourd&Apple dataset can be downloaded here. Then, unzip the data to the folder data/Apple_Dataset/.

After you have downloaded the data, please run

python train.py --config configs/apple/apple.yml 

to test on each object. You can replace configs/apple/apple.yml with other yml files to test on other objects.

Using Your Own Dataset

If you want to train a model on a new dataset, you can follow the Python file load_diligent.py as a template to write your own dataloader.
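
As a rough starting point, the sketch below shows the kind of data a loader typically needs to provide for calibrated photometric stereo: the multi-light images, the corresponding light directions, and an object mask. The directory layout, file names, and return format here are assumptions for illustration only; please follow load_diligent.py for the exact interface the training code expects.

import os
import glob
import numpy as np
import imageio.v2 as imageio

def load_my_dataset(data_dir):
    """Hypothetical loader for a custom photometric stereo capture.

    Assumed layout (illustrative only):
      data_dir/imgs/*.png            one image per light
      data_dir/light_directions.txt  one "x y z" light direction per line
      data_dir/mask.png              binary object mask
    """
    img_paths = sorted(glob.glob(os.path.join(data_dir, "imgs", "*.png")))
    images = np.stack(
        [imageio.imread(p).astype(np.float32) / 255.0 for p in img_paths]
    )  # (N, H, W, 3)
    light_dirs = np.loadtxt(
        os.path.join(data_dir, "light_directions.txt"), dtype=np.float32
    )  # (N, 3)
    mask = imageio.imread(os.path.join(data_dir, "mask.png")) > 0  # (H, W)

    assert images.shape[0] == light_dirs.shape[0], "one light direction per image"
    return {"images": images, "light_dirs": light_dirs, "mask": mask}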

Acknowledgement

Part of the code is based on the nerf-pytorch and UPS-GCNet repositories.

Citation

If you find our code or paper useful, please cite as

@inproceedings{li2022neural,
  title={Neural Reflectance for Shape Recovery with Shadow Handling},
  author={Li, Junxuan and Li, Hongdong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={16221--16230},
  year={2022}
}
