
tobj bug #1799

Closed
Gaondong opened this issue Jun 28, 2021 · 8 comments
Labels: duplicate (This issue or pull request already exists), Stale

Comments

@Gaondong

Objectness

Hi,
In this repo, an anchor can be matched to more than one ground truth in crowded scenes, so the same [b, a, gj, gi] indices can repeat and be assigned over and over in tobj, because the duplicate [b, a, gj, gi] entries are not removed in build_targets:

tobj[b, a, gj, gi] = (1.0 - model.gr) + model.gr * iou.detach().clamp(0).type(tobj.dtype)  # iou ratio
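
For illustration, here is a minimal sketch of that situation (this is not the repo's build_targets; the tensor shapes, the gr value and the duplicate indices are assumed for the example): two ground truths matched to the same anchor and grid cell produce duplicate [b, a, gj, gi] tuples, and only one of their IoU values survives the assignment.

import torch

# Hypothetical duplicate matches: two ground truths in the same grid cell of
# the same image, matched to the same anchor (crowded-scene case).
b  = torch.tensor([0, 0])        # image index
a  = torch.tensor([1, 1])        # anchor index
gj = torch.tensor([7, 7])        # grid row
gi = torch.tensor([7, 7])        # grid column
iou = torch.tensor([0.9, 0.3])   # IoUs of the two matched targets

tobj = torch.zeros(1, 3, 16, 16)  # (batch, anchors, ny, nx), sizes assumed
gr = 1.0                          # stands in for model.gr
tobj[b, a, gj, gi] = (1.0 - gr) + gr * iou.clamp(0)

# Only one value ends up in the cell; which one is not guaranteed when the
# index tuples repeat, since the assignment overwrites rather than accumulates.
print(tobj[0, 1, 7, 7])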

Gaondong added the bug (Something isn't working) label on Jun 28, 2021
glenn-jocher added the duplicate (This issue or pull request already exists) label and removed the bug (Something isn't working) label on Jun 28, 2021
@glenn-jocher
Member

glenn-jocher commented Jun 28, 2021

@Gaondong yes that is correct. Duplicate of ultralytics/yolov5#3605

@Gaondong
Author

Gaondong commented Jun 28, 2021

@Gaondong yes that is correct. Duplicate of ultralytics/yolov5#3605

OK, but I have a question: should we remove the repeated indices in build_targets, or should we add a loss term for the repeated indices to the objectness loss so that all matched targets are counted?

Thanks!
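
For reference, a hedged sketch of the first option (removing the repeated indices) follows. dedup_matches is a hypothetical helper, not code from this repo, and keeping the highest-IoU match per cell is just one possible policy.

import torch

def dedup_matches(b, a, gj, gi, iou):
    # Group matches that share the same (b, a, gj, gi) cell and keep, per cell,
    # the match with the highest IoU. Returns the indices of the retained matches.
    idx = torch.stack((b, a, gj, gi), dim=1)                   # (n, 4)
    uniq, inv = torch.unique(idx, dim=0, return_inverse=True)  # unique cells + mapping
    best_iou = torch.full((uniq.shape[0],), -1.0)
    keep = torch.zeros(uniq.shape[0], dtype=torch.long)
    for k in range(idx.shape[0]):                              # plain loop for clarity
        if iou[k] > best_iou[inv[k]]:
            best_iou[inv[k]] = iou[k]
            keep[inv[k]] = k
    return keep

# Usage sketch: keep = dedup_matches(b, a, gj, gi, iou), then index
# b[keep], a[keep], gj[keep], gi[keep], iou[keep] before building tobj.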

@glenn-jocher
Member

@Gaondong it's an open question really.

The current implementation is obviously working well; the real question is whether an alternative implementation might work better.

The cls and box losses are applied for every anchor-target match, whereas the obj loss operates differently: it currently applies only a single loss when multiple matches fall on the same cell.

We could write code to treat obj the same way as cls and box, though I don't know what effect that might have (i.e. it could lead to a higher rate of FPs unless the obj hyperparameter gain is lowered), since one target typically has a high multiplicity (especially in YOLOv5), with perhaps 3-6 anchors matching it.
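
To make that alternative concrete, here is a hedged sketch of a per-match objectness term in which every duplicate (b, a, gj, gi) contributes its own loss. This is not the repo's compute_loss; the names pi, b, a, gj, gi, iou and gr follow the usual loss-function conventions, and the background (no-object) BCE term over unmatched cells is deliberately omitted here and would still be needed.

import torch
import torch.nn as nn

BCEobj = nn.BCEWithLogitsLoss()  # same criterion family the obj loss uses

def obj_loss_per_match(pi, b, a, gj, gi, iou, gr=1.0):
    # pi: layer predictions of shape (bs, na, ny, nx, no); channel 4 is the obj logit.
    # Every match, including duplicates at the same cell, contributes one term,
    # mirroring how the cls and box losses count all anchor-target matches.
    pobj = pi[b, a, gj, gi, 4]                        # one obj logit per match
    tobj = (1.0 - gr) + gr * iou.detach().clamp(0)    # iou-ratio target per match
    return BCEobj(pobj, tobj.type(pobj.dtype))

With each target typically matched by several anchors, this multiplies the positive objectness signal, which is why the obj gain might need to be lowered, as noted above.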

@glenn-jocher
Member

glenn-jocher commented Jun 28, 2021

@Gaondong but to be clear, the multiple indices do no harm; they simply assign multiple positives to the same location, i.e. it's the same as if I write:

i = [0, 0, 0]
x[i] = [1.0, 1.0, 1.0]

the result will just be that x[0] = 1.0

@glenn-jocher
Member

@XHBrain see the above comments also regarding objectness. An alternative idea would be to apply all losses the same way cls and box are treated.

@Gaondong
Author

@Gaondong but to be clear, the multiple indices do no harm; they simply assign multiple positives to the same location, i.e. it's the same as if I write:

i = [0, 0, 0]
x[i] = [1.0, 1.0, 1.0]

the result will just be that x[0] = 1.0

Yes! This repo's performance is very impressive.
I know that the iou value assigned to tobj increases as training progresses, so it ends up close to 1.0 and the effect is small.
By the way, is the mAP@0.5 calculated by test.py during training equal to the mAP@0.5 calculated by pycocotools with the same threshold and test image size?

Thanks for your reply.

@github-actions

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv3 🚀 and Vision AI ⭐!

@glenn-jocher
Member

@Gaondong Thank you for the kind words! Regarding your question, the mAP@0.5 calculated by test.py should be very close to the mAP@0.5 calculated by pycocotools given the same IoU threshold and test image size. However, slight differences may occur due to implementation details. It's always a good idea to cross-validate if exact precision is required for your application. Keep up the great work! 🚀
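
For completeness, a minimal pycocotools sketch for that cross-check (the JSON paths are placeholders; it assumes the detections have already been exported in COCO result format, which test.py's --save-json option can produce):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

anno_json = 'instances_val2017.json'   # COCO ground-truth annotations (placeholder path)
pred_json = 'predictions.json'         # exported detections in COCO result format (placeholder path)

coco_gt = COCO(anno_json)
coco_dt = coco_gt.loadRes(pred_json)

coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                  # prints AP@[0.5:0.95], AP@0.5, etc.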
