debugging custom models #107
I've managed to fine-tune an existing model with these steps:

```yaml
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 5
    num_workers: 8
    train:
      target: taming.data.custom.CustomTrain
      params:
        training_images_list_file: some/training.txt
        size: 256
    validation:
      target: taming.data.custom.CustomTest
      params:
        test_images_list_file: some/test.txt
        size: 256
```
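For context, `training_images_list_file` and `test_images_list_file` in the config above are plain text files containing one image path per line. A minimal sketch for generating them from a directory of images (the directory layout, split fraction, and function name are illustrative assumptions, not part of the repo):

```python
import random
from pathlib import Path

def write_image_lists(image_dir, train_file, test_file, test_fraction=0.1, seed=0):
    """Split the images found in image_dir into train/test list files,
    writing one absolute-or-relative path per line, as CustomTrain/CustomTest expect."""
    paths = sorted(str(p) for p in Path(image_dir).iterdir()
                   if p.suffix.lower() in {".png", ".jpg", ".jpeg"})
    random.Random(seed).shuffle(paths)          # deterministic shuffle for a reproducible split
    n_test = max(1, int(len(paths) * test_fraction))
    Path(test_file).write_text("\n".join(paths[:n_test]) + "\n")
    Path(train_file).write_text("\n".join(paths[n_test:]) + "\n")
    return len(paths) - n_test, n_test
```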
Thanks heaps @mrapplexz - this is indeed working well for me. So far I'm surprised how powerful even 100 iterations of fine-tuning is (I'll probably tweak the learning rate down, etc.), but this recipe was hugely helpful in getting me unblocked!
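For anyone else wanting to tweak the learning rate down: in the taming-transformers configs it is set via `base_learning_rate` at the top of the `model` block. A sketch of the relevant fragment (the value shown is just a placeholder, not a recommendation):

```yaml
model:
  base_learning_rate: 4.5e-06   # main.py scales this by batch size, GPU count, and grad accumulation
  target: taming.models.vqgan.VQModel
  params:
    ...
```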
@mrapplexz @dribnet Hi, thank you for your amazing ideas, but some points confuse me. When resuming the model, how should the training steps be set? For example, I have 1M images.
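On the steps question: with PyTorch Lightning, one optimizer step is taken per batch, so steps per epoch is roughly the dataset size divided by the effective batch size. A quick sanity check using the numbers from this thread (the single-GPU assumption is mine):

```python
num_images = 1_000_000   # dataset size mentioned above
batch_size = 5           # from the config in this thread
num_gpus = 1             # assumption for illustration

# One full pass over the data at this batch size:
steps_per_epoch = num_images // (batch_size * num_gpus)
print(steps_per_epoch)   # 200000 optimizer steps per epoch
```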
And I have another question, as shown in issues/93: if a different dataset (e.g., a medical image dataset) is used to fine-tune the method, the parameter
Hello, thank you very much for your answer. It has been very helpful to me. I used `python -m pytorch_lightning.utilities.upgrade_checkpoint --file logs/must_finish/vq_f8_16384/checkpoints/last.ckpt`
TL;DR: custom training is great! Is there a good config or way to debug the quality of results on small-ish datasets?
I've managed to train my own custom models using the excellent additions provided by @rom1504 in #54 and have hooked this up to CLIP + VQGAN backpropagation successfully. However, so far the samples from my models are a bit glitchy. For example, with a custom dataset of images such as the following:
I'm only able to get a sample that looks something like this:
Or similarly when I train on a dataset of sketches and images like these:
My CLIP + VQGAN backpropagation of "spider" with that model turns out like this:
So there is evidence that the model is picking up some gross information such as color distributions, but the results are far from what I would expect using a simpler model such as StyleGAN on the same dataset.
So my questions: