Failed to train an unconditional Transformer #104

Open
hjq133 opened this issue Sep 8, 2021 · 2 comments

hjq133 commented Sep 8, 2021

Thanks for your great work!

I ran into a problem when training an unconditional transformer on ImageNet.

Here is my config file; I run it on a single 32 GB V100 GPU:

model:
  base_learning_rate: 0.00625
  target: taming.models.cond_transformer.Net2NetTransformer
  params:
    transformer_config:
      target: taming.modules.transformer.mingpt.GPT
      params:
        vocab_size: 16384
        block_size: 256
        n_layer: 48
        n_head: 24
        n_embd: 1536
    first_stage_config:
      target: taming.models.vqgan.VQModel
      params:
        ckpt_path: /mnt/lustre/huangjunqin/taming_transformer/logs/imagenet_vqgan_f16_16384/checkpoints/last.ckpt
        embed_dim: 256
        n_embed: 16384
        ddconfig:
          double_z: false
          z_channels: 256
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 1
          - 2
          - 2
          - 4
          num_res_blocks: 2
          attn_resolutions:
          - 16
          dropout: 0.0
        lossconfig:
          target: taming.modules.losses.vqperceptual.DummyLoss
    cond_stage_config: __is_unconditional__

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    wrap: false
    train:
      target: taming.data.imagenet.ImageNetTrain
      params:
        config:
          size: 256
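
For reference, the sequence length in this config works out: the ch_mult list has five entries, i.e. four downsampling steps, so a 256×256 input becomes a 16×16 = 256-token latent, which matches block_size: 256. A config like this would normally be launched with the repository's standard entry point, along these lines (the config path here is just a placeholder for wherever the file is saved):

python main.py --base configs/my_unconditional_transformer.yaml -t True --gpus 0,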

I use one of your shared checkpoints trained on ImageNet as the pretrained weights for my VQ model.

When training starts, the loss decreases, but after only a few hundred iterations it converges to around 6.8. When I check my image log, the reconstruction images look fine, but the quality of the samples_nopix images keeps getting worse, and the samples_det image is always a solid image of a single color:

iter 2, samples_nopix: [image: samples_nopix_iter2]
iter 8, samples_nopix: [image: samples_nopix_iter8]
iter 256, samples_nopix: [image: samples_nopix_iter256]
iter 128, samples_det: [image: samples_det_gs-000128_e-000000_b-000128]
iter 750, samples_det: [image: samples_det_gs-000750_e-000000_b-000750]
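
As a sanity check on the plateau value (my own back-of-the-envelope arithmetic, not from the repo): with a 16384-entry codebook, predicting every token uniformly gives a cross-entropy of ln(16384) ≈ 9.70 nats, so a loss stuck at 6.8 is better than uniform but still far from a well-fit model. A minimal PyTorch sketch of that baseline:

import math
import torch
import torch.nn.functional as F

vocab_size = 16384
print(math.log(vocab_size))  # uniform baseline: ~9.70 nats

# The same number via the token-level loss a minGPT-style model trains with:
logits = torch.zeros(1, 256, vocab_size)          # uniform logits over the codebook
targets = torch.randint(0, vocab_size, (1, 256))  # arbitrary target indices
loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
print(loss.item())  # also ~9.70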


Is there something wrong? Thanks for any help!

hjq133 changed the title from "Is there anything wrong with the steps I train the transformer?" to "Failed to train an unconditional Transformer" on Sep 9, 2021
@order-a-lemonade commented:

Hi, I'm running into the same problem. Have you solved it?

@order-a-lemonade commented:

I found the reason: it was my own edits to the source code. Just using the unmodified source code works fine.
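
(For anyone else hitting this: assuming a plain git checkout of the repository, local edits can be set aside with, for example:

git stash   # shelve local modifications and retry with pristine sources

or by re-cloning https://github.com/CompVis/taming-transformers and retraining from the unmodified code.)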
