
About multi-scale training #62

Open
suzhenghang opened this issue Dec 31, 2017 · 5 comments

Comments

@suzhenghang

Hi @soeaver ,
I tried to add multi-scale training, but convergence seems difficult; without multi-scale training, it converges quickly. Have you run into this situation? Thanks in advance.

@suzhenghang
Author

During training, some loss values get larger, such as 0.99, 0.73, etc. I tried to imshow the preprocessed image and mask, but I did not find anything wrong.

@soeaver
Owner

soeaver commented Dec 31, 2017

Hi, do you mean multi-scale training for semantic segmentation?
Usually, multi-scale training leads to a slightly larger and less stable loss. I think you should look at the final result of the training.

@suzhenghang
Author

@soeaver, thanks, multi-scale training does lead to an unstable loss. By the way, does multi-scale training increase the IoU in your experiments?

@soeaver
Owner

soeaver commented Jan 2, 2018

Actually, I didn't run a single-scale training experiment.
But as many papers report, multi-scale training and random flipping improve mIoU by 1-3% on the PASCAL VOC dataset.
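To illustrate the random flipping mentioned above, here is a minimal framework-independent numpy sketch (the function name and default probability are assumptions, not taken from this repo); the key point is that the image and its mask must be flipped together so pixels and labels stay aligned:

```python
import numpy as np

def random_flip(image, label, p=0.5, rng=None):
    """Horizontally flip an (H, W[, C]) image and its (H, W) mask together
    with probability p, so the augmentation keeps them aligned."""
    rng = rng or np.random.default_rng()
    if rng.random() < p:
        # [:, ::-1] reverses the width axis; .copy() returns contiguous arrays
        return image[:, ::-1].copy(), label[:, ::-1].copy()
    return image, label
```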

@shiyuangogogo

Hi @suzhenghang @soeaver, how do you implement multi-scale training in Caffe?
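One common approach (not confirmed by this repo) is to do the random rescaling in a Python data layer before feeding the network. A hedged numpy sketch of just the rescale step, with an assumed scale set and nearest-neighbor resizing so mask labels stay discrete:

```python
import numpy as np

def random_scale(image, label, scales=(0.5, 0.75, 1.0, 1.25, 1.5), rng=None):
    """Rescale an (H, W[, C]) image and its (H, W) mask by one factor drawn
    from `scales`, using nearest-neighbor indexing so label values are never
    interpolated. The scale set here is a common choice, not from this repo."""
    rng = rng or np.random.default_rng()
    s = rng.choice(scales)
    h, w = image.shape[0], image.shape[1]
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    # Map each output row/column back to its nearest source row/column.
    rows = np.minimum((np.arange(nh) * h / nh).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) * w / nw).astype(int), w - 1)
    return image[np.ix_(rows, cols)], label[np.ix_(rows, cols)]
```

A Caffe Python layer could call this in its `forward` pass and reshape the top blobs per batch; cropping or padding to a fixed size afterwards is a separate design choice.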
