
results on HRSC2016 #14

Closed
jingweirobot opened this issue Jul 24, 2020 · 9 comments

Labels
good first issue (Good for newcomers)

Comments

@jingweirobot

Hi Ming

We are interested in your good work based on YOLOv3. We have tested this repo on the HRSC2016 dataset (all 1680 images), but unfortunately we cannot reach the expected mAP of around 80%. Our results are: P: 0.158, R: 0.413, mAP@0.5: 0.189, F1-score: 0.229.

The configuration is the same as the file provided in the repo: hyp.py, 1800 epochs, and yolov3-416 with the anchors 792, 2061, 3870, 6353, 9623, 15803 / 4.18, 6.48, 8.71 / -75, -60, -45, -30, -15, 0, 15, 30, 45, 60, 75, 90 (the same values as in that file). Although anchors differ across datasets, they alone should not cause such a large drop in the results.
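For clarity, here is a minimal sketch of how we interpreted those numbers when checking our setup. Treating the three groups as anchor areas, aspect ratios, and rotation angles is our assumption; the repo's own anchor code may consume them differently:

```python
import itertools
import math

# Our assumed reading of the config (not necessarily the repo's exact semantics):
scales = [792, 2061, 3870, 6353, 9623, 15803]   # anchor areas in pixels^2
ratios = [4.18, 6.48, 8.71]                      # aspect ratios w / h
angles = list(range(-75, 91, 15))                # -75 .. 90 degrees in 15-degree steps

def make_rotated_anchors(scales, ratios, angles):
    """Expand the scale/ratio/angle grid into rotated anchors (w, h, theta)."""
    anchors = []
    for area, ratio, angle in itertools.product(scales, ratios, angles):
        # For area A and ratio r = w / h:  w = sqrt(A * r), h = sqrt(A / r)
        w = math.sqrt(area * ratio)
        h = math.sqrt(area / ratio)
        anchors.append((w, h, angle))
    return anchors

print(len(make_rotated_anchors(scales, ratios, angles)))  # 6 * 3 * 12 = 216 anchor shapes
```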

The model fails to detect the target in most of the test images; we show two cases that do contain detected targets below.
We hope we can reach the expected performance after consulting with you.

Thanks in advance.

[Attached: a screenshot of the results (2020-07-24) and two detection examples, images 100001266 and 100000911]

@WJ1214

WJ1214 commented Jul 24, 2020

I also met a similar problem. I trained yolov3_608_dcn on the Cornell Grasp dataset, which has 885 images, and the network cannot detect anything in most images when I set conf-thres=0.5. When I set conf-thres=0.001, the detections seem to have no relationship with the objects in the images; it seems like the network doesn't learn anything from these images.
[Attached: two Cornell Grasp examples, pcd0210r and pcd0405r]

@ming71
Owner

ming71 commented Jul 24, 2020

> We are interested in your good work based on YOLOv3. We have tested this repo on the HRSC2016 dataset (all 1680 images), but unfortunately we cannot reach the expected mAP of around 80%. Our results are: P: 0.158, R: 0.413, mAP@0.5: 0.189, F1-score: 0.229. [...]

Hello, I think you'd better understand how each component and parameter works before testing on a given dataset, because as I've said, this repo is just a backup of my source code. For example, the lr is set to 0.000018 in this repo for the HRSC dataset, which is obviously problematic, right? That value is left over from my parameter sensitivity experiments and was committed here directly; obviously it is not the parameter meant to be used when running the program.
I can only guarantee that the general framework is correct, not that running it directly will work. If you are not in a hurry, I will upload the latest executable source code as soon as I return to school in September.

@ming71
Owner

ming71 commented Jul 24, 2020

> ... the detections of this network seem to have no relationship with the objects in the images; it seems like this network doesn't learn anything from these images.

Hello, refer to this reply above.

@jingweirobot
Author

Thanks for your reply, I see. We imagined the released default settings could achieve relatively acceptable performance: maybe not the best result of 80%, but perhaps 70% or so. That is why we simply re-ran the experiments with the released repo (default settings).

So could you upload your settings file with the specific parameters? We want to try again.

@ming71
Owner

ming71 commented Jul 24, 2020

> We imagined the released default settings could achieve relatively acceptable performance ... could you upload your settings file with the specific parameters? [...]

Sorry, the code was left on the computer in my school laboratory. I haven't touched this program for half a year because of the COVID-19 epidemic, so I can't recall many details. From what I remember, the following points need attention:

  1. hyp settings: iou thres, angle thres, lr (= lr0 * multiplier); see the sketch below.
  2. sampler: NoSampler is stable.
  3. loss: GHM_Loss is unstable!
  4. The receptive field module and the global attention module are sensitive to hyper-parameters, but even without these modules you can still achieve decent results (at least 70 mAP on HRSC).
  5. Use data augmentation!!!! It will greatly improve the results for this model (refer to augment.py).

Besides, multi-category detection is not supported for now; it's not hard to implement (modify the code from line 486 to 521).
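As a rough illustration of point 1, something like the snippet below is what I mean by the effective lr; the key names and numbers here are only placeholders, the real values live in hyp.py:

```python
# Placeholder hyp dict -- names and values are illustrative, not the actual hyp.py contents.
hyp = {
    'lr0': 1e-3,          # base learning rate (placeholder)
    'multiplier': 0.1,    # scaling factor applied to lr0
    'iou_thres': 0.5,     # IoU threshold for assigning anchors to ground truth (placeholder)
    'angle_thres': 15.0,  # max angle difference in degrees for rotated matching (placeholder)
}

effective_lr = hyp['lr0'] * hyp['multiplier']   # the lr actually used, i.e. lr = lr0 * multiplier
print(effective_lr)  # 0.0001
```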

ming71 added the good first issue label on Jul 24, 2020
@WJ1214

WJ1214 commented Jul 24, 2020

> Sorry, the code was left on the computer in my school laboratory ... the following points need attention: [...]

Thank you, I think this is helpful for me.

@stevenXS

@WJ1214 Hello, have you implemented the multi-category code?

@WJ1214

WJ1214 commented Oct 14, 2020

> @WJ1214 Hello, have you implemented the multi-category code?

I'm not sure what you mean by "multi-category code". In my dataset there are only two categories, object and background; I just changed the config file the same way as for YOLOv3, and then this project worked.

@stevenXS

stevenXS commented Oct 14, 2020 via email
