
The project should not be named yolov5! #2

Closed
amusi opened this issue May 30, 2020 · 28 comments
Labels: enhancement (New feature or request), Stale

Comments

@amusi

amusi commented May 30, 2020

  1. I think the project should not be named yolov5.
  2. Please add a link to YOLOv4: https://github.com/AlexeyAB/darknet
@amusi amusi added the enhancement New feature or request label May 30, 2020
@WDNMD0-0

What you said is awesome, but "you can you up, no can no bb" (Chinese internet slang: if you think you can do better, do it yourself; if not, don't criticize).

@wangtianlong1994

I suggest you write a YOLOv6 to beat YOLOv5.

@Mechanic-Hwang

Maybe yolov999999999 will be great
or yolov666666666, Thumbs up, Thumbs up, Thumbs up

@ethanyanjiali

I'm not sure if the author is mocking YOLOv4, but why not ultralytics-net, or ultralytics-yolo? It would make it much easier for us to organize our model zoo.

Btw, I noticed the new Chinese translation "你只看一次" ("you only look once") on the cover page, but that sounds a bit awkward in Chinese. Personally I like to call it "一眼万年" (roughly "one glance, ten thousand years"), meaning YOLO needs just one glance while other, two-stage detectors run for something like 10k years.

Just kidding.

@Mechanic-Hwang

I also think the translation "你只看一次" is not good; the translation is not necessary. YOLO is great.

@IgorSusmelj

I also think yolov5 is misleading. A new version should require a publication with proper benchmarks against previous versions and significant changes/improvements to the architecture (not just parameter tuning and minor changes). I don't see any of this.

Why not name it yolov4-ultralytics?

@huynhbach197

Where is your paper or publication?

@JonnoFTW

This repository simply implements the existing published yolov4 architecture in a new framework. There is no substantial novel contribution made here to the architecture that warrants a new name.

@glenn-jocher
Member

glenn-jocher commented Jun 11, 2020

@amusi and all others regarding publication and naming:

Thank you for your feedback! Regarding publication, we would very much like to be able to publish a paper detailing the various modifications employed to achieve these results (among all 3 major components: architecture, loss function and training methodology), however we are extremely limited on resources, and thus must smartly select how best to deploy these in order to keep our business viable as a going concern, while also continuing our work of pushing the boundaries of what's possible in this and other fields.

Importantly these models are neither static nor complete at this time. Our recent open-sourcing of this work is simply part of our ongoing research, and is not any sort of final product, and for this reason it is not accompanied by any publication. Our current goal is to continue internal R&D throughout the remainder of 2020, and hopefully open source and publish at least a short synopsis of this to Arxiv by the end of the year.

We have published several peer reviewed works prior as a company, and myself personally as well (Google Scholar), mostly in particle physics and antineutrino detection. Our most notable publication was the world's first Antineutrino Global Map, published in Nature, Scientific Reports as part of ongoing work Ultralytics performed for the U.S. National Geospatial-Intelligence Agency.

It was our work reconstructing neutrino events which led us to develop an ancillary interest in AI, and in this field we are pursuing research that is both transparent and reproducible, which builds on years of hard work (by myself and many community contributors) porting and perfecting YOLOv3 to PyTorch from Darknet, which pjreddie amazingly began and @AlexeyAB so impressively continued and advanced. Most importantly in following the spirit of YOLO, we are targeting results that we hope may be in reach of every individual who would express an interest in the field, not merely available to those with unlimited resources at their disposal. Our smallest model, to cite an example, trains on COCO in only 3 days on one 2080Ti, and runs inference faster and more accurately than EfficientDet D0, which was trained on 32 TPUv3 cores by the Google Brain team. By extension we aim to comparably exceed D1, D2 etc. with the rest of the YOLOv5 family.

Regarding naming, we do take note of the comments surrounding the issue. YOLOv5 is an internal designation assigned to this work, which is now open-sourced. The exact name employed here is not a concern for us (we are open to alternatives!), we are instead focused on producing, improving, and delivering results to our clients, and to the open-source community by extension when our contract terms allow for it, which we push for often.

We appreciate all feedback, and we will try to be as responsive as we can going forward!

@ultralytics ultralytics deleted a comment from github-actions bot Jun 11, 2020
@glenn-jocher
Member

glenn-jocher commented Jun 11, 2020

Ah, yes, many people were wondering about the Chinese translation. The story behind this is that we noticed that more than half of all visitors came from China, so we thought it would be nice to display a Chinese translation as a welcome for them.

[Screenshot: Chinese translation on the repository cover page]

We did this by using Google Translate.
[Screenshot: Google Translate result]

It is very interesting that China represents such a large fraction of visits. It is a trend I noticed also in physics, with China investing heavily in numerous advanced research projects, including some worldwide SOTA examples: the Daya Bay and JUNO neutrino oscillation experiments, and the 500 meter FAST radio telescope. As a scientist I'm very happy to see this, as I believe knowledge discovered by any people will benefit all people.

@PuneetKohli

The owners of this repo should seriously consider renaming it. Congratulations folks, you got the hype you wanted by misusing the YOLO name, now please change it to something more meaningful and appropriate. For what it's worth, your implementation seems to have runtime improvements, so call it something like yolov4_pytorch_faster or whatever else floats your boat.

@Zhangxiaof001

A lot of people are paying attention to YOLO.

@lucasjinreal

I think yolov5 is OK; no matter how fancy the model it uses, the final AP tells the whole story.

@clw5180

clw5180 commented Jun 12, 2020

Has anyone here tested how well it performs?

@theblackcat102

theblackcat102 commented Jun 12, 2020

@clw5180 you can refer to this comparison of YOLOv3, YOLOv4, "YOLOv5"
WongKinYiu/CrossStagePartialNetworks#32 (comment)

@WongKinYiu

@theblackcat102

The comparison is a little bit out of date. At the time, it used the May 27 version of YOLOv5; YOLOv5 has since been updated to the June 9 version.
[Comparison chart]
It is better to see the comparison in #6.

@huynhbach197

[Comparison chart image]

@glenn-jocher
Member

@huynhbach197 to be clear, all 3 of these models are trained with ultralytics repositories. Darknet YOLOv4 AP is much lower than these at 43.5 (which is not possible to see, since the poster strangely removed the y-axis).

@glenn-jocher
Member

Independent analysis by Roboflow:
https://blog.roboflow.ai/yolov4-versus-yolov5/

@sunny0315

Can the comparison of the following two pictures show that YOLOv5 has higher accuracy? (Detectron2 Faster R-CNN vs. YOLOv5)
[Detection result images]

@Franklin-Yao

What you said is awesome, but you can you up, no can no bb.

Dude, don't be rude.

@berserkr

Yolo9000 is taken!

I think the project name should not be called yolov5
Please add a link to yolov4: https://github.com/AlexeyAB/darknet

Call it YoloPlusUltra! Since Yolo9000 is taken :)

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@chonyy

chonyy commented Nov 1, 2020

Sorry, but I still disagree with the idea of calling this repo YOLOv5; it's just so confusing.

@dmvictor

I took some time to figure out the relationship between YOLOv4 & v5. This article might be helpful for someone who has the same question.

lebedevdes pushed a commit to lebedevdes/yolov5 that referenced this issue Feb 26, 2022
Two changes provided:
1. Added a limit on the maximum number of detections per image, matching pycocotools
2. Reworked the process_batch function

Change ultralytics#2 solves issue ultralytics#4251. I also independently encountered the problem described in issue ultralytics#4251: the values for the same thresholds do not match when the limits in the torch.linspace function are changed. These changes solve that problem.

Currently during validation yolov5x.pt model the following results were obtained:
from yolov5 validation
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|██████████| 157/157 [01:07<00:00,  2.33it/s]
                 all       5000      36335      0.743      0.626      0.682      0.506
from pycocotools
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.505
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.685

These results are very close, although they do not fully close the gap tracked in issue ultralytics#2258. I think the remaining difference is due to false-positive bboxes that match the ignore criteria, but this does not apply to custom datasets and does not require an additional fix.
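The two changes above can be sketched in plain Python. This is an illustration only, not YOLOv5's actual code (which lives in val.py); the helper name `cap_detections` and the per-threshold AP values are hypothetical.

```python
def cap_detections(detections, max_dets=100):
    """Keep at most max_dets detections per image, highest confidence first,
    mirroring pycocotools' maxDets=100 limit (change 1 in the commit)."""
    ordered = sorted(detections, key=lambda d: d["conf"], reverse=True)
    return ordered[:max_dets]


# mAP@0.5:0.95 averages AP over ten IoU thresholds from 0.50 to 0.95 in
# steps of 0.05 -- the torch.linspace(0.5, 0.95, 10) vector the commit
# message refers to.
iou_thresholds = [round(0.5 + 0.05 * i, 2) for i in range(10)]

# Hypothetical per-threshold AP values, for illustration only: AP falls
# as the IoU threshold becomes stricter.
ap_per_threshold = [0.68, 0.66, 0.63, 0.60, 0.56, 0.51, 0.45, 0.37, 0.26, 0.11]
map_50_95 = sum(ap_per_threshold) / len(ap_per_threshold)

if __name__ == "__main__":
    dets = [{"conf": c / 200} for c in range(150)]  # 150 fake detections
    print(len(cap_detections(dets)))               # 100
    print(iou_thresholds[0], iou_thresholds[-1])   # 0.5 0.95
```

Without the cap, images with more than 100 raw detections contribute extra true or false positives that pycocotools would never see, which is one source of the small mAP discrepancy discussed above.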
LynnL4 referenced this issue in Seeed-Studio/yolov5-swift Apr 20, 2022
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 2 to 3.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](actions/setup-python@v2...v3)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
glenn-jocher added a commit that referenced this issue May 20, 2022
* Improve mAP0.5-0.95


* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Remove line to retain pycocotools results

* Update val.py

* Update val.py

* Remove to device op

* Higher precision int conversion

* Update val.py

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
tdhooghe pushed a commit to tdhooghe/yolov5 that referenced this issue Jun 10, 2022
johnng0805 added a commit to johnng0805/yolov5 that referenced this issue Jun 26, 2022
@Curt-Park

Curt-Park commented Jun 27, 2022

YOLOv5
YOLOv6
YOLOv7
...
Eventually,
https://github.com/juunini/YOLOv1972

manole-alexandru added a commit to manole-alexandru/yolov5-uolo that referenced this issue Mar 20, 2023
manole-alexandru added a commit to manole-alexandru/yolov5-uolo that referenced this issue Mar 25, 2023
manole-alexandru added a commit to manole-alexandru/yolov5-uolo that referenced this issue Mar 25, 2023
@glenn-jocher
Member

@Curt-Park thanks for the suggestion! The YOLO naming convention has indeed gained widespread recognition. However, it's important to acknowledge the achievements of the YOLO community and the active contributions from each iteration. If you have any more feedback or questions related to YOLOv5, feel free to share!
