DTNet

[AAAI-2024] Direction-aware Video Demoiréing with Temporal-guided Bilateral Learning

Paper Link

Requirements

  • basicsr==1.4.2
  • scikit-image==0.15.0
  • deepspeed
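
The pinned versions can be installed with pip; PyTorch itself is assumed to be installed separately with a build that matches your CUDA toolkit:

pip install basicsr==1.4.2 scikit-image==0.15.0 deepspeed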

Prepare

  1. Download the VDMoire dataset.
  2. Download the pretrained models.

Organize the directories as follows:

┬─ experiments
│   ├─ Train_DTNet_resume_ipv1
│   │   └─ models
│   │       ├─ DTNet_f.pth
│   │       └─ DTNet_g.pth
│   ├─ Train_DTNet_resume_ipv2
│   ├─ Train_DTNet_resume_tclv1
│   └─ Train_DTNet_resume_tclv2
└─ data
    ├─ homo
    │   ├─ iphone
    │   │   ├─ train
    │   │   │   ├─ source
    │   │   │   │   └─ ... (image filename)
    │   │   │   └─ target
    │   │   │       └─ ... (corresponds to the former)
    │   │   └─ test
    │   │       └─ ...
    │   └─ tcl
    │       └─ ...
    └─ of
        ├─ iphone
        │   └─ ...
        └─ tcl
            └─ ...
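
A quick way to confirm the layout before launching a run (this check script is not part of the repo; the paths simply mirror the tree above):

from pathlib import Path

# Sanity-check the expected experiment/dataset folders (paths mirror the tree above).
root = Path('.')
expected = [
    'experiments',
    'data/homo/iphone/train/source',
    'data/homo/iphone/train/target',
    'data/homo/iphone/test',
    'data/homo/tcl',
    'data/of/iphone',
    'data/of/tcl',
]
for rel in expected:
    print(('ok      ' if (root / rel).is_dir() else 'MISSING ') + rel)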

How to Test

  • Example: Testing on the TCL-V2 dataset
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python test.py -opt options/test/Test_DTNet_tclv2.yml
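
The other subsets are tested the same way; the config filename below is an assumption, so check options/test/ for the names actually shipped with the repo.

  • Example: Testing on the iPhone-V1 dataset (config name assumed)
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python test.py -opt options/test/Test_DTNet_ipv1.yml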

How to Train

  • Single GPU training
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/Train_DTNet_scratch_ipv1.yml
  • Distributed training
PYTHONPATH="./:${PYTHONPATH}" \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train/Train_DTNet_scratch_ipv1.yml --launcher pytorch
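
To fine-tune from the released DTNet_f.pth / DTNet_g.pth checkpoints placed under experiments/, the corresponding resume config can be passed in the same way; the options/train/ path below is an assumption, so point -opt at wherever the resume config sits in your checkout.

  • Resume training (config path assumed)
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/Train_DTNet_resume_ipv1.yml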
