
Why can't I reproduce the results shown by the author, even when using the weights provided on Hugging Face #37

Open
LiuHuijie6410 opened this issue May 16, 2024 · 0 comments


LiuHuijie6410 commented May 16, 2024

Congratulations on such impressive work!
However, I have been unable to reproduce the results shown by the author. Have I done something wrong somewhere?
For example, I use the LoRA weights provided by the author on Hugging Face (e.g., the golf weights: https://huggingface.co/ruizhaocv/MotionDirector/tree/main/playing_golf).
Then I run inference with the same random seed the author provided:
python MotionDirector_inference.py --model "models/zeroscope_v2_576w" --prompt "A monkey is playing golf on a field full of flowers." --checkpoint_folder /MotionDirector/huggingface/playing_golf/ --checkpoint_index 300 --noise_prior 0. --seed 2989633
I get this output (attached video):
[video: A_monkey_is_playing_golf_on_a_field_full_of_flowers_2989633]

The video shown by the author is:
[video: A_monkey_is_playing_golf_on_a_field_full_of_flowers_2989633]

Could you tell me what I might be doing wrong that prevents me from reproducing the author's results?
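For context, here is a minimal sketch of how a fixed `--seed` typically determines the initial latent noise in diffusion-based video pipelines. The function name and latent shape below are hypothetical and not taken from MotionDirector's code; the point is only that identical seeds should yield identical starting noise, while nondeterministic GPU kernels elsewhere in the pipeline can still make final frames differ across machines.

```python
# Sketch (assumption: the pipeline derives its initial latents from a
# torch.Generator seeded with the --seed value, as most diffusion
# pipelines do). Names and the latent shape here are hypothetical.
import torch

def initial_latents(seed: int, shape=(1, 4, 16, 40, 72)):
    # hypothetical latent shape: (batch, channels, frames, height/8, width/8)
    gen = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(shape, generator=gen)

a = initial_latents(2989633)
b = initial_latents(2989633)
assert torch.equal(a, b)  # same seed -> identical starting noise
```

If the outputs differ despite identical latents, the divergence usually comes from nondeterministic CUDA kernels, different library versions, or different hardware rather than from the seed itself.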
