
Can I improve the quality of the resulting video, and how do I do this? #189

Open
Vova-B opened this issue Sep 2, 2024 · 3 comments

Vova-B commented Sep 2, 2024

I want to receive videos in higher quality and resolution, what steps do I need to take to get the desired result?


DBDXSS commented Sep 4, 2024

Change train_width, train_height, and fps in configs/train/stage2.yaml to your desired values, then fine-tune the model.

If you only want higher-quality output at inference time, change the corresponding values in configs/inference/default.yaml instead.
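For reference, the fields mentioned above would be edited roughly as in this sketch. The nesting and example values are assumptions for illustration; check the actual layout of your copy of configs/train/stage2.yaml:

```yaml
# configs/train/stage2.yaml (excerpt -- structure assumed, adjust to your file)
data:
  train_width: 768    # e.g. raised from 512 for higher resolution
  train_height: 768   # keep width/height compatible with the model's patch size
  fps: 30             # target frame rate of the generated clips
```

Higher resolution and fps increase memory use and training time, so you may also need to lower the batch size when fine-tuning.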


Vova-B commented Sep 4, 2024

@DBDXSS For fine-tuning, do I only need to run training stage 2? If I start with training stage 1, it does not fine-tune the model but trains from scratch. Can you tell me how to launch fine-tuning?


DBDXSS commented Sep 5, 2024

@Vova-B I am not a developer of this project, so I can't give you a definitive answer, and the results of the two approaches don't clearly show which one is better. In fact, you can also use the released weights to initialize stage 1 training. Since the facial features in your fine-tuning data may differ from those in the author's training data, I think fine-tuning from stage 1 will work better, but that training process takes a lot of time. Alternatively, you can fine-tune directly from stage 2; if the results are not good enough, try starting from stage 1.
