Thank you for sharing your code on GitHub. I have a question regarding the configs.
In the configs for ImageNet and CelebA, there are two lines that seem more relevant for the LSUN datasets:
lsun_categories_train: [bedroom_train]
lsun_categories_test: [bedroom_test]
Are they simply ignored in these cases, since those configs do not use the LSUN dataset?
Also, for the CelebA and LSUN examples, does nlabels=1 indicate that there is only one sample per category?
Lastly, for lsun_bridges, it looks like training and testing are done on the same data (_train). Is that intentional?
Yes, lsun_categories_train and lsun_categories_test are ignored when not training on LSUN. nlabels=1 simply indicates that there are no categories for CelebA.
Regarding your last question: we currently only have a train set, since we currently only measure the inception score, which does not require a separate test set.
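To make the "ignored keys" behavior concrete, here is a minimal, purely illustrative sketch of how a dataset factory could consume such a config. The function name `active_keys` and the dict layout are assumptions for this example, not the repository's actual code; the point is only that LSUN-specific keys are read on one branch and never touched otherwise.

```python
def active_keys(config):
    """Return the config keys this hypothetical loader would actually read."""
    keys = ["train_dir", "nlabels"]
    if config.get("type") == "lsun":
        # LSUN-specific keys are consumed only on this branch;
        # in ImageNet/CelebA configs they are dead keys and simply ignored.
        keys += ["lsun_categories_train", "lsun_categories_test"]
    return keys

# A CelebA-style config: the lsun_categories_* entries are present but unused.
celeba_cfg = {
    "type": "image",
    "train_dir": "data/celebA",
    "nlabels": 1,  # a single label class, i.e. no categories
    "lsun_categories_train": ["bedroom_train"],  # ignored for non-LSUN
    "lsun_categories_test": ["bedroom_test"],    # ignored for non-LSUN
}
print(active_keys(celeba_cfg))  # lsun_categories_* do not appear
```

The same pattern explains nlabels: with nlabels=1 every sample falls into one label class, so the setting encodes "no categories" rather than "one sample per category".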