Some issues on reproducing the result of Figure 7 #373

Open
jacksonsc007 opened this issue Feb 18, 2022 · 3 comments

Comments

@jacksonsc007

@tianzhi0549
Hi, I am sorry to bother you years after your excellent work was published. The idea of centerness is quite impressive and has fueled my passion for further research on it. Here is the problem I ran into.
The idea I want to verify concerns the IoU between a prediction and its corresponding GT bbox. However, my experiments gave poor results, which made me question the correctness of my code for computing that IoU. I then noticed that you also validate the effectiveness of centerness in Figure 7, so to test my code I decided to reproduce that figure. Despite many attempts, my results differ from Figure 7 by a large margin. Here is the result I got.
Following your explanation of Figure 7, each point (x, y) denotes a detected bbox before NMS, with x being its classification score and y being its IoU with the corresponding GT bbox. To compute that IoU, for a specific prediction I calculate the IoU against all GT bboxes with the same class as the prediction and take the maximum. Results were gathered on a subset of the COCO val2017 dataset containing 100 images.
(attached image: scatter plot of IoU with GT bbox vs. classification score)
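For clarity, here is a minimal sketch of the matching described above (my own illustrative code, not taken from this repo; boxes are assumed to be in [x1, y1, x2, y2] format and the helper names are made up):

```python
import torch

def pairwise_iou(boxes_a, boxes_b):
    """IoU between every box in boxes_a (N, 4) and every box in boxes_b (M, 4)."""
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    lt = torch.max(boxes_a[:, None, :2], boxes_b[None, :, :2])  # (N, M, 2) top-left of intersection
    rb = torch.min(boxes_a[:, None, 2:], boxes_b[None, :, 2:])  # (N, M, 2) bottom-right of intersection
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, :, 0] * wh[:, :, 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def max_iou_per_prediction(pred_boxes, pred_labels, gt_boxes, gt_labels):
    """For each prediction, the highest IoU over GT boxes of the same class."""
    if gt_boxes.numel() == 0:
        return torch.zeros(len(pred_boxes))
    ious = pairwise_iou(pred_boxes, gt_boxes)                 # (N, M)
    same_class = pred_labels[:, None] == gt_labels[None, :]   # (N, M)
    ious = ious * same_class.float()                          # zero out cross-class pairs
    return ious.max(dim=1).values                             # (N,)
```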

As you can see:
1. The number of samples in my results is far larger than in Figure 7.
2. The distribution of classification scores at high IoU is quite different from yours: my dense regions lie at low classification scores, while yours are the opposite.
3. There are many samples with zero IoU, which intuitively seems right to me, yet zero-IoU samples are rare in Figure 7.
Here are my questions:
Is there anything wrong with the way I calculate the IoU between predictions and their corresponding GT bboxes? Why is the number of samples in Figure 7 so much smaller than in my results? Or did I make some other mistake?
I really appreciate your patience in reading through my issue.
Thanks a lot.

@tianzhi0549
Owner

Thank you for your questions.

for a specific prediction I calculate the IoU against all GT bboxes with the same class as the prediction and take the maximum.

We did not match predicted boxes to the GT boxes with the highest IoU. Instead, we used the label assignment from training to match them, and only the positive points on the feature maps are shown in the figure.
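In case it helps, here is a simplified sketch of that assignment (illustrative code, not the repo's implementation; center sampling and the minimal-area tie-breaking for ambiguous locations are omitted):

```python
import torch

def positive_mask(locations, gt_boxes, size_range):
    """A feature-map location is a positive sample for a GT box if it falls inside
    that box and its regression targets (l, t, r, b) fit the level's distance range.
    locations: (N, 2) xy points on one level; gt_boxes: (M, 4) as x1y1x2y2;
    size_range: (lo, hi) regression-distance range of this level."""
    xs = locations[:, 0, None]                        # (N, 1)
    ys = locations[:, 1, None]                        # (N, 1)
    l = xs - gt_boxes[None, :, 0]                     # (N, M)
    t = ys - gt_boxes[None, :, 1]
    r = gt_boxes[None, :, 2] - xs
    b = gt_boxes[None, :, 3] - ys
    reg_targets = torch.stack([l, t, r, b], dim=2)    # (N, M, 4)
    inside_box = reg_targets.min(dim=2).values > 0
    max_dist = reg_targets.max(dim=2).values
    in_level_range = (max_dist >= size_range[0]) & (max_dist <= size_range[1])
    return inside_box & in_level_range                # (N, M) positive location/GT pairs
```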

@jacksonsc007
Author

Thank you for your reply. It helped me a lot.

@jacksonsc007
Author

@tianzhi0549
Hi, please accept my heartfelt thanks for your instructions. After switching to the training label assignment, I got the following results.
(attached image Figure7_Ink: scatter plots of IoU vs. classification score, raw scores on the left and square-rooted scores on the right)
As shown in the figure, the left plot uses the model's original classification scores (already passed through the sigmoid function), while the right plot uses the square root of those scores. I applied the square root because the left plot still differs from yours: most samples in the high-IoU region tend to have low classification scores. This bothers me because, intuitively, predictions with high IoU against the GT boxes should carry high classification scores.
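For reference, the only difference between the two panels is the square root applied to the score axis; here is a rough sketch with placeholder data standing in for my collected scores and IoUs:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data; in my experiments these are the sigmoid classification
# scores of the positive samples and their IoUs with the assigned GT boxes.
scores = np.random.rand(5000)
ious = np.random.rand(5000)

fig, (ax_raw, ax_sqrt) = plt.subplots(1, 2, figsize=(10, 4))
ax_raw.scatter(scores, ious, s=2)
ax_raw.set_xlabel("classification score")
ax_raw.set_ylabel("IoU with GT box")

ax_sqrt.scatter(np.sqrt(scores), ious, s=2)   # square root of the raw sigmoid score
ax_sqrt.set_xlabel("sqrt(classification score)")
ax_sqrt.set_ylabel("IoU with GT box")
plt.show()
```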
So my questions are:

  1. Did you apply a square root to the scores in Figure 7?
  2. If not, is my assumption that "predictions with high IoU against GT boxes tend to have higher classification scores" correct?

I am looking forward to your reply when you are available.
Thanks.
