
continue training with .caffemodel #950

Open
whansk50 opened this issue Jan 18, 2022 · 1 comment
Comments

@whansk50

Hi,
I'm using this code with Python 3.8 (after modifying parts of it so it runs on Python 3, since it was written for 2.7). Everything works, but some errors appear during training. Fixing the code doesn't take much time, but training does, probably because I'm using a single GPU. Moreover, the only command I knew from the README starts training from the beginning (./experiments/scripts/faster_rcnn_end2end(or alt_opt).sh ~~~), so I kept rerunning it from scratch, even though a previous run had already reached 70,000 iterations.

I can see the .caffemodel file (I think it is saved every 10,000 iterations) in the directory described in the README, so what I want to know is:

Can I continue training from where the previous run stopped, using the .caffemodel file or other files? If I can, what is the command?

Thanks for reading despite my poor English.

@karandeepdps

Yes, you can resume training from a previous checkpoint. Note that in Caffe the .caffemodel file holds only the learned weights (the network architecture is defined in the .prototxt), while the solver state — optimizer momentum, current iteration, and learning-rate schedule — is saved separately in a matching .solverstate file. To truly resume where you stopped, point the solver at the .solverstate snapshot; the .caffemodel alone only lets you warm-start a fresh run from those weights.
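As a rough sketch (the file paths below are examples and depend on your model and output directory, so adjust them accordingly), resuming with plain Caffe versus warm-starting via py-faster-rcnn's train_net.py looks like this:

```shell
# Resume from the solver state: continues the iteration count,
# learning-rate schedule, and optimizer momentum where it left off.
caffe train \
    --solver=models/pascal_voc/VGG16/faster_rcnn_end2end/solver.prototxt \
    --snapshot=output/faster_rcnn_end2end/voc_2007_trainval/vgg16_faster_rcnn_iter_10000.solverstate

# If you only have the .caffemodel (weights, no solver state), you can
# instead start a new run initialized from those weights, e.g. with
# py-faster-rcnn's training script:
python tools/train_net.py \
    --solver models/pascal_voc/VGG16/faster_rcnn_end2end/solver.prototxt \
    --weights output/faster_rcnn_end2end/voc_2007_trainval/vgg16_faster_rcnn_iter_10000.caffemodel \
    --iters 70000
```

The second form does not restore the iteration counter or learning rate, so the schedule restarts from the beginning; only the weights carry over.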
