We follow T2M-GPT in choosing the "last" VQ-VAE checkpoint for training the Part-Coordinated Transformer.
For testing the Part-Coordinated Transformer, we choose "fid" for HumanML3D and "last" for the KIT dataset, because the results on the validation set appear more stable with those choices.
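For reference, here is a minimal sketch of what `--select-vqvae-ckpt` amounts to. It assumes the T2M-GPT file-naming convention (`net_last.pth` / `net_best_fid.pth`); the helper name and the mapping are illustrative, not the repo's actual code:

```python
import os

def resolve_vqvae_ckpt(vqvae_train_dir: str, select: str = "last") -> str:
    """Pick the VQ-VAE checkpoint used to tokenize motions for the transformer.

    Hypothetical helper; file names follow the T2M-GPT convention and may
    differ in the actual ParCo codebase.
    """
    name_by_key = {
        "last": "net_last.pth",      # checkpoint from the final training iteration
        "fid": "net_best_fid.pth",   # checkpoint with the best validation FID
    }
    if select not in name_by_key:
        raise ValueError(f"unknown checkpoint selector: {select!r}")
    path = os.path.join(vqvae_train_dir, name_by_key[select])
    if not os.path.isfile(path):
        raise FileNotFoundError(path)
    return path
```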
Sorry for asking so many questions.
CUDA_VISIBLE_DEVICES=0 python train_ParCo_trans.py \
  --vqvae-train-dir output/00000-t2m-ParCo/VQVAE-ParCo-t2m-default/ \
  --select-vqvae-ckpt last \
  --exp-name ParCo \
  --pkeep 0.4 \
  --batch-size 128 \
  --trans-cfg default \
  --fuse-ver V1_3 \
  --alpha 1.0 \
  --num-layers 14 \
  --embed-dim-gpt 1024 \
  --nb-code 512 \
  --n-head-gpt 16 \
  --block-size 51 \
  --ff-rate 4 \
  --drop-out-rate 0.1 \
  --total-iter 300000 \
  --eval-iter 10000 \
  --lr-scheduler 150000 \
  --lr 0.0001 \
  --dataname t2m \
  --down-t 2 \
  --depth 3 \
  --quantizer ema_reset \
  --dilation-growth-rate 3 \
  --vq-act relu
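For context on the `--pkeep 0.4` flag above: in T2M-GPT-style transformer training, each ground-truth motion token is kept with probability `pkeep` and otherwise replaced by a random codebook index, which regularizes the autoregressive model. A minimal sketch of that corruption, with illustrative names rather than the repo's actual code:

```python
import torch

def corrupt_tokens(idx: torch.Tensor, pkeep: float, nb_code: int) -> torch.Tensor:
    """Keep each token with probability pkeep; otherwise replace it with a
    uniformly random codebook index (T2M-GPT-style token corruption)."""
    keep = torch.bernoulli(torch.full_like(idx, pkeep, dtype=torch.float)).bool()
    random_idx = torch.randint_like(idx, nb_code)
    return torch.where(keep, idx, random_idx)

# Example: corrupt a batch of code indices with pkeep=0.4 and nb_code=512,
# matching the flags in the command above (shapes here are made up).
tokens = torch.randint(0, 512, (128, 50))
noisy = corrupt_tokens(tokens, pkeep=0.4, nb_code=512)
```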
You chose "last" for `--select-vqvae-ckpt`, but why "last"? Is there a reason you didn't train by selecting the checkpoint with the best FID?
Thank you for always replying so kindly.