hyper-parameters for training keypoint stage && training result of finetune stage #14

Open
zht022 opened this issue Aug 25, 2019 · 0 comments


zht022 commented Aug 25, 2019

@nashory Hi, I noticed that when training the keypoint stage you set use_l2_normalized_feature = True. What's the reason for setting this parameter? Also, I noticed that you set target_layer = layer3 by default. Have you tried target_layer = layer4? If so, which one works better?
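For context on the first question: my understanding is that a flag like use_l2_normalized_feature rescales each descriptor to unit L2 norm, so that a dot product between two descriptors becomes their cosine similarity (this is a common choice in retrieval pipelines; the function name below is my own, not from the repo). A minimal plain-Python sketch:

```python
import math

def l2_normalize(features, eps=1e-12):
    """Scale each feature vector to unit L2 norm.

    After normalization, the dot product of two descriptors equals
    their cosine similarity, which makes matching scores comparable
    across images regardless of raw activation magnitude.
    """
    out = []
    for vec in features:
        norm = math.sqrt(sum(x * x for x in vec))
        # eps guards against division by zero for all-zero vectors.
        out.append([x / max(norm, eps) for x in vec])
    return out

descs = l2_normalize([[3.0, 4.0], [1.0, 0.0]])
print(descs[0])  # [0.6, 0.8]
```

This is just my guess at the motivation; please correct me if the flag does something else.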

Another question: when I train the finetune stage directly on google-landmark-dataset-top1k, I get acc@1 above 97.5. What result did you get at this stage?

Thank you; I look forward to your answer.
