Thanks for your great work!
I'm running into a problem training an unconditional transformer on ImageNet.
Here is my config file; I run it on a single 32 GB V100 GPU.
I use one of your shared checkpoints trained on ImageNet as the pretrained weights for my VQ model.
When training starts, the loss decreases, but after only a few hundred iterations it converges to around 6.8. When I check my image logs, the reconstruction images look fine, but the quality of the samples_nopix images keeps getting worse, and the sample_det image is always a solid color:
iter 2 sample no pix:
iter 8 sample no pix:
iter 256 sample no pix:
iter 128 sample det
iter 750 sample det
Is there something wrong? Thanks for any help!
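One possibly useful sanity check (this is a hedged guess, and the codebook size of 1024 is an assumption, not something stated above): if the transformer's cross-entropy over a K-entry VQ codebook plateaus near ln(K), it is predicting codes roughly uniformly at random. For K = 1024, ln(1024) ≈ 6.93, which is close to the reported 6.8:

```python
import math

# If the VQ codebook has K entries (K = 1024 is assumed here, not
# confirmed by the config above), a transformer that predicts codes
# uniformly at random has cross-entropy loss ln(K) in nats.
K = 1024
uniform_ce = math.log(K)
print(f"uniform-prediction cross-entropy for K={K}: {uniform_ce:.3f} nats")

# The reported plateau (~6.8) sits just below this value, which would
# be consistent with the transformer learning very little structure.
reported_loss = 6.8
print(f"gap below uniform: {uniform_ce - reported_loss:.3f} nats")
```

If this matches your setup, it might point to an issue in how the transformer sees the VQ codes (e.g., a mismatch between the pretrained checkpoint and the config) rather than in the VQ model itself, since the reconstructions look fine.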