Anyone get any video result by training this project? #34
Comments
Yes, I got reasonable results using this code. I had a training dataset of around 3,000 videos in .gif format and ran training to 100,000 iterations. During training, the model parameters are saved to the './results' folder. With the following sampling code you can load the trained parameters and generate videos (constructor arguments were omitted in the original comment and are left elided here):

```python
batch_size = 1

model = Unet3D(...)
diffusion = GaussianDiffusion(...)
trainer = Trainer(...)

trainer.load(-1)  # load the latest saved checkpoint

input_condition = torch.randn(batch_size, 768)
output_Gif_Tensor = diffusion.sample(cond = input_condition, batch_size = batch_size)
video_tensor_to_gif(output_Gif_Tensor[0], './output.gif')
```
Which dataset did you use?
A custom dataset, around 10K training samples.
Can I use this model to grow short GIFs into longer ones?
The number of input and output frames is fixed, though this parameter can be changed for training/inference. Also, the current code does not support taking .gifs as the condition to generate outputs.
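Since the frame count is fixed at model construction, variable-length clips have to be fitted to that length before they can be used. A minimal sketch of one way to do this with plain torch ops: `fit_frames` is a hypothetical helper (not part of this repo), and it assumes the `(channels, frames, height, width)` tensor layout used by Unet3D-style models.

```python
import torch

def fit_frames(video, num_frames):
    # video: (channels, frames, height, width); hypothetical helper,
    # assumed layout — adjust if your tensors are (frames, channels, h, w)
    c, f, h, w = video.shape
    if f >= num_frames:
        return video[:, :num_frames]  # trim extra frames
    # repeat the last frame to pad short clips up to num_frames
    pad = video[:, -1:].repeat(1, num_frames - f, 1, 1)
    return torch.cat([video, pad], dim=1)

short = torch.randn(3, 4, 32, 32)   # a 4-frame clip
print(fit_frames(short, 10).shape)  # torch.Size([3, 10, 32, 32])
```

Repeating the last frame is just one padding choice; zero-padding or looping the clip are equally valid depending on your data.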
When I use your sampling script, I find that the generated GIF is different from the GIF I sampled. Is there any solution?
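One likely cause, not confirmed in this thread: diffusion sampling starts from fresh random noise on every call, so repeated runs produce different outputs even with identical weights and conditioning. If reproducibility is the goal, seeding the RNG before sampling makes the starting noise (and hence the sample) deterministic:

```python
import torch

# Seeding before each sampling call makes torch.randn reproducible,
# so the diffusion process starts from the same noise every run.
torch.manual_seed(0)
a = torch.randn(4)

torch.manual_seed(0)
b = torch.randn(4)

print(torch.equal(a, b))  # True — same seed, same starting noise
```

If instead the complaint is that samples do not resemble the training GIFs at all, that usually points to under-training or a conditioning mismatch rather than randomness.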