
General questions about parameters in the pre-trained model #43

Open
FlorianeSchreiber opened this issue Feb 8, 2022 · 1 comment

Hello Rémi,

Hope you are doing well. I have several general questions, notably about the parameters you used for your pre-trained model.
If I understood correctly, you used 15k patches, with patch sizes of LR: 128×128 and HR: 512×512, and a batch size of 4. Is that right?
Approximately how many epochs did you use? (~10, ~50, ~100, ~200, more?)
Which weights did you use for the VGG net? In the literature, "imagenet" weights are often used, and I would like to know whether you used those or different ones.

Thank you 😊

Floriane


remicres commented Feb 8, 2022

Hi,

Here is what we used:

  • LR patch size: 64
  • HR patch size: 256
  • Batch size: 4
  • 150k patches
  • We don't remember the number of epochs or the learning rate. The best approach is to experiment with your own data: what "looks good" is very subjective, so there is no universal recipe here :) Stop the training when the network's output no longer changes.
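For quick reference, the settings above can be collected into a small configuration sketch. This is only an illustration: the key names are made up for this example and do not correspond to actual options of this repository.

```python
# Hypothetical training configuration summarizing the settings listed above.
# Key names are illustrative only; they are not options of this repository.
config = {
    "lr_patch_size": 64,    # low-resolution patch size (pixels per side)
    "hr_patch_size": 256,   # high-resolution patch size (pixels per side)
    "batch_size": 4,
    "n_patches": 150_000,   # number of training patches
}

# The ratio of HR to LR patch size gives the upsampling factor.
scale = config["hr_patch_size"] // config["lr_patch_size"]
print(scale)  # → 4
```

Note that the 256/64 patch sizes imply the same 4× upsampling factor as the 512/128 sizes mentioned in the question; only the patch extent differs.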
