From f6633ca7db5bbf14f43226d3879a56eada44de76 Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Tue, 8 Jun 2021 16:07:01 +0200 Subject: [PATCH 01/34] Update README.md --- README.md | 290 ++++++++++++++++++++++++++++++++---------------------- 1 file changed, 170 insertions(+), 120 deletions(-) diff --git a/README.md b/README.md index 3a785cc85003..77e46f1a2ea9 100755 --- a/README.md +++ b/README.md @@ -1,102 +1,60 @@ - - -  - +# [YOLOv5](https://ultralytics.com/yolov5) by [Ultralytics](https://ultralytics.com) + +
+

+ +

+
+
CI CPU testing +Open In Kaggle +
+Open In Colab +Open In Kaggle +Docker Pulls +
-This repository represents Ultralytics open-source research into future object detection methods, and incorporates lessons learned and best practices evolved over thousands of hours of training and evolution on anonymized client datasets. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk. +
+

+YOLOv5 is a family of object detection architectures and models pretrained on the COCO dataset. This repository represents Ultralytics open-source research into future object detection methods, and incorporates lessons learned and best practices evolved over thousands of hours of training and evolution on anonymized client datasets. +

+
-

-
- YOLOv5-P5 640 Figure (click to expand) - -

-
-
- Figure Notes (click to expand) - - * GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS. - * EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8. - * **Reproduce** by `python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt` -
+###
[See YOLOv5 in Action with Our Interactive Demo Here](https://ultralytics.com/yolov5)
-- **April 11, 2021**: [v5.0 release](https://github.com/ultralytics/yolov5/releases/tag/v5.0): YOLOv5-P6 1280 models, [AWS](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart), [Supervise.ly](https://github.com/ultralytics/yolov5/issues/2518) and [YouTube](https://github.com/ultralytics/yolov5/pull/2752) integrations. -- **January 5, 2021**: [v4.0 release](https://github.com/ultralytics/yolov5/releases/tag/v4.0): nn.SiLU() activations, [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) logging, [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/) integration. -- **August 13, 2020**: [v3.0 release](https://github.com/ultralytics/yolov5/releases/tag/v3.0): nn.Hardswish() activations, data autodownload, native AMP. -- **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP. +_Note : YOLOv5 is current **under active development**, all code, models, and documentation are subject to modification or deletion without notice. **Use at your own risk.**_ +##
Documentation
-## Pretrained Checkpoints +Check out our [Full Documentation](https://docs.ultralytics.com) or use our Quick Start Tutorials. -[assets]: https://github.com/ultralytics/yolov5/releases +##
Quick Start Tutorials
-|Model |size
(pixels) |mAPval
0.5:0.95 |mAPtest
0.5:0.95 |mAPval
0.5 |Speed
V100 (ms) | |params
(M) |FLOPs
640 (B) -|--- |--- |--- |--- |--- |--- |---|--- |--- -|[YOLOv5s][assets] |640 |36.7 |36.7 |55.4 |**2.0** | |7.3 |17.0 -|[YOLOv5m][assets] |640 |44.5 |44.5 |63.1 |2.7 | |21.4 |51.3 -|[YOLOv5l][assets] |640 |48.2 |48.2 |66.9 |3.8 | |47.0 |115.4 -|[YOLOv5x][assets] |640 |**50.4** |**50.4** |**68.8** |6.1 | |87.7 |218.8 -| | | | | | || | -|[YOLOv5s6][assets] |1280 |43.3 |43.3 |61.9 |**4.3** | |12.7 |17.4 -|[YOLOv5m6][assets] |1280 |50.5 |50.5 |68.7 |8.4 | |35.9 |52.4 -|[YOLOv5l6][assets] |1280 |53.4 |53.4 |71.1 |12.3 | |77.2 |117.7 -|[YOLOv5x6][assets] |1280 |**54.4** |**54.4** |**72.0** |22.4 | |141.8 |222.9 -| | | | | | || | -|[YOLOv5x6][assets] TTA |1280 |**55.0** |**55.0** |**72.0** |70.8 | |- |- +These tutorials are intended to get you started using YOLOv5 quickly for demonstration purposes. +Head to the [Full Documentation](https://docs.ultralytics.com) for more in-depth tutorials.
- Table Notes (click to expand) - - * APtest denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results denote val2017 accuracy. - * AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65` - * SpeedGPU averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45` - * All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation). - * Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python test.py --data coco.yaml --img 1536 --iou 0.7 --augment` -
- + +Install Locally + -## Requirements - -Python 3.8 or later with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) dependencies installed, including `torch>=1.7`. To install run: - ```bash +# Clone into current directory +$ git clone git@github.com:ultralytics/yolov5.git . +# Install requirements $ pip install -r requirements.txt ``` + +
+Inference Using Repository Clone -## Tutorials - -* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)  🚀 RECOMMENDED -* [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results)  ☘️ RECOMMENDED -* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)  🌟 NEW -* [Supervisely Ecosystem](https://github.com/ultralytics/yolov5/issues/2518)  🌟 NEW -* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475) -* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)  ⭐ NEW -* [TorchScript, ONNX, CoreML Export](https://github.com/ultralytics/yolov5/issues/251) 🚀 -* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303) -* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318) -* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304) -* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607) -* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)  ⭐ NEW -* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx) - - -## Environments - -YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled): - -- **Google Colab and Kaggle** notebooks with free GPU: Open In Colab Open In Kaggle -- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart) -- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart) -- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) Docker Pulls - - -## Inference +_NOTE : In order to follow this tutorial please ensure you have installed YOLOv5 locally._ -`detect.py` runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`. ```bash +# Run inference based on selected input $ python detect.py --source 0 # webcam - file.jpg # image + file.jpg # image file.mp4 # video path/ # directory path/*.jpg # glob @@ -104,67 +62,159 @@ $ python detect.py --source 0 # webcam 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream ``` -To run inference on example images in `data/images`: -```bash -$ python detect.py --source data/images --weights yolov5s.pt --conf 0.25 - -Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=False, source='data/images/', update=False, view_img=False, weights=['yolov5s.pt']) -YOLOv5 v4.0-96-g83dc1b4 torch 1.7.0+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB) - -Fusing layers... -Model Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPs -image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, Done. (0.010s) -image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 1 tie, Done. (0.011s) -Results saved to runs/detect/exp2 -Done. (0.103s) -``` - +
+
+Inference Using PyTorch Hub -### PyTorch Hub +This tutorial will automatically download YOLOv5 to your local system before running inference on the supplied image. -Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36): ```python import torch -# Model +# Define your model, options include yolov5s, yolov5m, yolov5l, yolov5x model = torch.hub.load('ultralytics/yolov5', 'yolov5s') -# Image +# Define your image img = 'https://ultralytics.com/images/zidane.jpg' -# Inference +# Run inference results = model(img) -results.print() # or .show(), .save() + +# Handle your results, options include .print(), .show(), .save(), .panadas().xyz() +results.print() ``` +
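
For reference alongside the options listed in the comment above (`.print()`, `.show()`, `.save()`, `.pandas()`), here is a minimal sketch of inspecting the returned detections; the `xyxy` and `pandas()` accessors are assumed to be available on the hub model's results object and may vary between YOLOv5 versions:

```python
import torch

# Load the small pretrained model from PyTorch Hub, as in the snippet above
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Run inference on a sample image
results = model('https://ultralytics.com/images/zidane.jpg')

# Boxes as a tensor, one row per detection: [xmin, ymin, xmax, ymax, confidence, class]
# (accessor names assumed; check your installed YOLOv5 version)
print(results.xyxy[0])

# The same detections as a pandas DataFrame via the .pandas() accessor mentioned above
print(results.pandas().xyxy[0])
```
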
+ +
+Training -## Training +_NOTE : In order to follow this tutorial please ensure you have installed YOLOv5 locally._ -Run commands below to reproduce results on [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices). ```bash $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64 yolov5m 40 yolov5l 24 yolov5x 16 -``` - - - -## Citation -[![DOI](https://zenodo.org/badge/264818686.svg)](https://zenodo.org/badge/latestdoi/264818686) - - -## About Us - -Ultralytics is a U.S.-based particle physics and AI startup with over 6 years of expertise supporting government, academic and business clients. We offer a wide range of vision AI services, spanning from simple expert advice up to delivery of fully customized, end-to-end production solutions, including: -- **Cloud-based AI** systems operating on **hundreds of HD video streams in realtime.** -- **Edge AI** integrated into custom iOS and Android apps for realtime **30 FPS video inference.** -- **Custom data training**, hyperparameter evolution, and model exportation to any destination. +``` -For business inquiries and professional support requests please visit us at https://ultralytics.com. +
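
Once a training run like the one above completes, the resulting weights can usually be loaded back through the same PyTorch Hub interface. A minimal sketch, assuming the `custom` entrypoint in YOLOv5's hubconf and the default `runs/train/exp/weights/best.pt` output path; both are assumptions that depend on the installed version and your run settings:

```python
import torch

# Load weights produced by train.py; the path below is the default location for a first run
# and is shown as an assumption here. The weights path is passed positionally because the
# keyword name of the 'custom' entrypoint has changed between YOLOv5 versions.
model = torch.hub.load('ultralytics/yolov5', 'custom', 'runs/train/exp/weights/best.pt')

# Inference then works exactly as with the pretrained checkpoints
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()
```
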
+##
Environments and Integrations
-## Contact +Get started with YOLOv5 in less than a few minutes using our integrations. -**Issues should be raised directly in the repository.** For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com. + + + + +Add these your toolkit to ensure you get the most out of your training experience: + +* [Weight and Biasis](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) - Debug, compare and reproduce models. Easily visualize performance with powerful custom charts. +* [Supervisely](https://app.supervise.ly/signup) - Data labeling for images, videos, 3D point cloud, and volumetric medical images + +##
Contribue and Win
+ +We are super excited to announce our first-ever Ultralytics YOLOv5 rocket EXPORT Competition with **$10,000** in cash prizes! + +
+ + + +
+ +##
Why YOLOv5
+ +
+ +**Its Fast!** +**Its Accurate!** +**But above all YOLOv5 is super easy to get up and running due to its PyTorch integration.** + +
+ +
+
+ +
+ +### Pretrained Checkpoints + +| Model | size
(pixels) | mAPval
0.5:0.95 | mAPtest
0.5:0.95 | mAPval
0.5 | Speed
V100 (ms) | | params
(M) | FLOPS
640 (B) | +| ---------------------- | --------------------- | ----------------------- | ------------------------ | ------------------ | ----------------------- | --- | ------------------ | --------------------- | +| [YOLOv5s][assets] | 640 | 36.7 | 36.7 | 55.4 | **2.0** | | 7.3 | 17.0 | +| [YOLOv5m][assets] | 640 | 44.5 | 44.5 | 63.1 | 2.7 | | 21.4 | 51.3 | +| [YOLOv5l][assets] | 640 | 48.2 | 48.2 | 66.9 | 3.8 | | 47.0 | 115.4 | +| [YOLOv5x][assets] | 640 | **50.4** | **50.4** | **68.8** | 6.1 | | 87.7 | 218.8 | +| | | | | | | | | +| [YOLOv5s6][assets] | 1280 | 43.3 | 43.3 | 61.9 | **4.3** | | 12.7 | 17.4 | +| [YOLOv5m6][assets] | 1280 | 50.5 | 50.5 | 68.7 | 8.4 | | 35.9 | 52.4 | +| [YOLOv5l6][assets] | 1280 | 53.4 | 53.4 | 71.1 | 12.3 | | 77.2 | 117.7 | +| [YOLOv5x6][assets] | 1280 | **54.4** | **54.4** | **72.0** | 22.4 | | 141.8 | 222.9 | +| | | | | | | | | +| [YOLOv5x6][assets] TTA | 1280 | **55.0** | **55.0** | **72.0** | 70.8 | | - | - | + +
+ +##
Getting Involved and Contributing
+ +Please make sure to read the [Contributing Guide](CONTRIBUTING.md) before making a pull request. + +**Thank you to all the people who already contributed to YOLOv5!** + +Issues should be raised in [GitHub Issues](https://github.com/ultralytics/yolov5/issues) provided yours does not already exist. + +##
Get in Touch
+ +**For issues or trouble running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues) and create a new issue provided yours does not already exist.** +
+For business or professional support requests please visit: +[https://ultralytics.com/contact](https://ultralytics.com/contact) + +
+ + From 4b1deb3f6467e00811421bc14bae578a4c68757b Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Tue, 8 Jun 2021 16:25:20 +0200 Subject: [PATCH 02/34] added hosted images --- README.md | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/README.md b/README.md index 77e46f1a2ea9..52e91c1eb7af 100755 --- a/README.md +++ b/README.md @@ -107,22 +107,22 @@ Get started with YOLOv5 in less than a few minutes using our integrations. @@ -139,7 +139,7 @@ We are super excited to announce our first-ever Ultralytics YOLOv5 rocket EXPORT @@ -195,26 +195,26 @@ For business or professional support requests please visit: From d13652e6a9dab003f595e3ec2fe4e795b6098aa7 Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Tue, 8 Jun 2021 16:28:20 +0200 Subject: [PATCH 03/34] added new logo --- README.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 52e91c1eb7af..035c98bccff2 100755 --- a/README.md +++ b/README.md @@ -1,5 +1,7 @@ -# [YOLOv5](https://ultralytics.com/yolov5) by [Ultralytics](https://ultralytics.com) - +
+ + +

From 48b1afff3e8e4e6d13a53acca2f42f37f0cecd32 Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Tue, 8 Jun 2021 16:30:21 +0200 Subject: [PATCH 04/34] testing image hosting --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index 035c98bccff2..1d10fc7b21d2 100755 --- a/README.md +++ b/README.md @@ -110,6 +110,7 @@ Get started with YOLOv5 in less than a few minutes using our integrations.

+ From 7d14359ad39406172bb349a8e6cff6f43601b6ad Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Tue, 8 Jun 2021 17:09:40 +0200 Subject: [PATCH 05/34] changed svgs to pngs --- README.md | 25 ++++++++++++------------- 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/README.md b/README.md index 1d10fc7b21d2..beebaea136dc 100755 --- a/README.md +++ b/README.md @@ -109,23 +109,22 @@ Get started with YOLOv5 in less than a few minutes using our integrations. @@ -198,26 +197,26 @@ For business or professional support requests please visit: From c3eedd5f0cfa320b2b681ab4dd8cd22e3c9d9f46 Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Tue, 8 Jun 2021 17:13:47 +0200 Subject: [PATCH 06/34] removed old header --- README.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/README.md b/README.md index beebaea136dc..54349fb2b695 100755 --- a/README.md +++ b/README.md @@ -1,10 +1,7 @@
+

-

-
-

-


From 958171a2caac4bf4f0cf086cbd8b856e7d7b4fae Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Tue, 8 Jun 2021 17:19:41 +0200 Subject: [PATCH 07/34] Update README.md --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 54349fb2b695..aeb9a0927ee0 100755 --- a/README.md +++ b/README.md @@ -159,6 +159,8 @@ We are super excited to announce our first-ever Ultralytics YOLOv5 rocket EXPORT ### Pretrained Checkpoints +[assets]: https://github.com/ultralytics/yolov5/releases + | Model | size
(pixels) | mAPval
0.5:0.95 | mAPtest
0.5:0.95 | mAPval
0.5 | Speed
V100 (ms) | | params
(M) | FLOPS
640 (B) | | ---------------------- | --------------------- | ----------------------- | ------------------------ | ------------------ | ----------------------- | --- | ------------------ | --------------------- | | [YOLOv5s][assets] | 640 | 36.7 | 36.7 | 55.4 | **2.0** | | 7.3 | 17.0 | From 649205011a2292b33d8b9e9d501122158e83aee1 Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Tue, 8 Jun 2021 17:25:51 +0200 Subject: [PATCH 08/34] correct colab image source --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index aeb9a0927ee0..a908d88a3e7f 100755 --- a/README.md +++ b/README.md @@ -106,7 +106,7 @@ Get started with YOLOv5 in less than a few minutes using our integrations.
- + From fdc715424d4e348427ec82ca802c6f83d4880852 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Wed, 9 Jun 2021 12:27:00 +0200 Subject: [PATCH 09/34] splash.jpg --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index a908d88a3e7f..b7360ed222db 100755 --- a/README.md +++ b/README.md @@ -1,7 +1,7 @@

- +


From bfd5578b1d0a132edc9be57a050cafeedeae5480 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Thu, 10 Jun 2021 20:34:20 +0200 Subject: [PATCH 10/34] rocket and W&B fix --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index b7360ed222db..f24a3e5d8bb5 100755 --- a/README.md +++ b/README.md @@ -129,12 +129,12 @@ Get started with YOLOv5 in less than a few minutes using our integrations. Add these your toolkit to ensure you get the most out of your training experience: -* [Weight and Biasis](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) - Debug, compare and reproduce models. Easily visualize performance with powerful custom charts. +* [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) - Debug, compare and reproduce models. Easily visualize performance with powerful custom charts. * [Supervisely](https://app.supervise.ly/signup) - Data labeling for images, videos, 3D point cloud, and volumetric medical images ##
Contribue and Win
-We are super excited to announce our first-ever Ultralytics YOLOv5 rocket EXPORT Competition with **$10,000** in cash prizes! +We are super excited to announce our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10000.00** in cash prizes!
From 1e699d0b81df75bb8750e4f5a29b572bf45a7d19 Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Fri, 11 Jun 2021 10:09:09 +0200 Subject: [PATCH 11/34] added contributing template --- CONTRIBUTING.md | 54 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 54 insertions(+) create mode 100644 CONTRIBUTING.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 000000000000..4ca3500da4a5 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,54 @@ +# Contributing to Transcriptase +We love your input! We want to make contributing to this project as easy and transparent as possible, whether it's: + +- Reporting a bug +- Discussing the current state of the code +- Submitting a fix +- Proposing new features +- Becoming a maintainer + +## We Develop with Github +We use github to host code, to track issues and feature requests, as well as accept pull requests. + +## We Use [Github Flow](https://guides.github.com/introduction/flow/index.html), So All Code Changes Happen Through Pull Requests +Pull requests are the best way to propose changes to the codebase (we use [Github Flow](https://guides.github.com/introduction/flow/index.html)). We actively welcome your pull requests: + +1. Fork the repo and create your branch from `master`. +2. If you've added code that should be tested, add tests. +3. If you've changed APIs, update the documentation. +4. Ensure the test suite passes. +5. Make sure your code lints. +6. Issue that pull request! + +## Any contributions you make will be under the MIT Software License +In short, when you submit code changes, your submissions are understood to be under the same [MIT License](http://choosealicense.com/licenses/mit/) that covers the project. Feel free to contact the maintainers if that's a concern. + +## Report bugs using Github's [issues](https://github.com/briandk/transcriptase-atom/issues) +We use GitHub issues to track public bugs. Report a bug by [opening a new issue](); it's that easy! + +## Write bug reports with detail, background, and sample code +[This is an example](http://stackoverflow.com/q/12488905/180626) of a bug report I wrote, and I think it's not a bad model. Here's [another example from Craig Hockenberry](http://www.openradar.me/11905408), an app developer whom I greatly respect. + +**Great Bug Reports** tend to have: + +- A quick summary and/or background +- Steps to reproduce + - Be specific! + - Give sample code if you can. [My stackoverflow question](http://stackoverflow.com/q/12488905/180626) includes sample code that *anyone* with a base R setup can run to reproduce what I was seeing +- What you expected would happen +- What actually happens +- Notes (possibly including why you think this might be happening, or stuff you tried that didn't work) + +People *love* thorough bug reports. I'm not even kidding. + +## Use a Consistent Coding Style +I'm again borrowing these from [Facebook's Guidelines](https://github.com/facebook/draft-js/blob/a9316a723f9e918afde44dea68b5f9f39b7d9b00/CONTRIBUTING.md) + +* 2 spaces for indentation rather than tabs +* You can try running `npm run lint` for style unification + +## License +By contributing, you agree that your contributions will be licensed under its MIT License. 
+ +## References +This document was adapted from the open-source contribution guidelines for [Facebook's Draft](https://github.com/facebook/draft-js/blob/a9316a723f9e918afde44dea68b5f9f39b7d9b00/CONTRIBUTING.md) From a3164daec519b8c1ed08179399a10934c89e2e40 Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Fri, 11 Jun 2021 10:12:27 +0200 Subject: [PATCH 12/34] added social media to top section --- README.md | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/README.md b/README.md index f24a3e5d8bb5..db21ff0cb4fc 100755 --- a/README.md +++ b/README.md @@ -11,6 +11,31 @@ Open In Colab Open In Kaggle Docker Pulls +
+
From c321a9905f47eb40db7a6bf42c7060f78d3df6c7 Mon Sep 17 00:00:00 2001 From: Kalen Michael Date: Fri, 11 Jun 2021 10:13:33 +0200 Subject: [PATCH 13/34] increased size of top social media --- README.md | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/README.md b/README.md index db21ff0cb4fc..6573945b4406 100755 --- a/README.md +++ b/README.md @@ -12,29 +12,30 @@ Open In Kaggle Docker Pulls
+
From f35bda3b6786784f9266f98b377443509251643a Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 14:11:36 +0200 Subject: [PATCH 14/34] cleanup and updates --- README.md | 136 +++++++++++++++++++++++++++++------------------------- 1 file changed, 73 insertions(+), 63 deletions(-) diff --git a/README.md b/README.md index 6573945b4406..927990f2ef80 100755 --- a/README.md +++ b/README.md @@ -15,48 +15,53 @@

-YOLOv5 is a family of object detection architectures and models pretrained on the COCO dataset. This repository represents Ultralytics open-source research into future object detection methods, and incorporates lessons learned and best practices evolved over thousands of hours of training and evolution on anonymized client datasets. +YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development. YOLOv5 is current under active development, and all code, models, and documentation are subject to modification or deletion without notice. Use at your own risk.

-###
[See YOLOv5 in Action with Our Interactive Demo Here](https://ultralytics.com/yolov5)
+###
Try the API
+
Instantly run YOLOv5 models using our JSON API.
https://ultralytics.com/yolov5
+ + + + + -_Note : YOLOv5 is current **under active development**, all code, models, and documentation are subject to modification or deletion without notice. **Use at your own risk.**_ ##
Documentation
-Check out our [Full Documentation](https://docs.ultralytics.com) or use our Quick Start Tutorials. +See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment. ##
Quick Start Tutorials
These tutorials are intended to get you started using YOLOv5 quickly for demonstration purposes. -Head to the [Full Documentation](https://docs.ultralytics.com) for more in-depth tutorials. +Head to the [YOLOv5 Docs](https://docs.ultralytics.com) for more in-depth details.
@@ -64,9 +69,7 @@ Install Locally ```bash -# Clone into current directory -$ git clone git@github.com:ultralytics/yolov5.git . -# Install requirements +$ git clone https://github.com/ultralytics/yolov5 $ pip install -r requirements.txt ``` @@ -91,22 +94,22 @@ $ python detect.py --source 0 # webcam
Inference Using PyTorch Hub -This tutorial will automatically download YOLOv5 to your local system before running inference on the supplied image. +This tutorial will automatically download YOLOv5 models before running inference on the supplied image. ```python import torch -# Define your model, options include yolov5s, yolov5m, yolov5l, yolov5x -model = torch.hub.load('ultralytics/yolov5', 'yolov5s') +# Load a model +model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5m, yolov5l, yolov5x -# Define your image +# Define images img = 'https://ultralytics.com/images/zidane.jpg' # Run inference results = model(img) -# Handle your results, options include .print(), .show(), .save(), .panadas().xyz() -results.print() +# Handle results +results.print() # or .show(), .save(), .pandas().xyz() ```
@@ -155,68 +158,75 @@ Get started with YOLOv5 in less than a few minutes using our integrations. Add these your toolkit to ensure you get the most out of your training experience: -* [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) - Debug, compare and reproduce models. Easily visualize performance with powerful custom charts. -* [Supervisely](https://app.supervise.ly/signup) - Data labeling for images, videos, 3D point cloud, and volumetric medical images +* [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) - Build better models faster with experiment tracking, dataset versioning, and model management. +* [Supervisely](https://app.supervise.ly/signup) - Training and deployment platform for YOLOv5 models. -##
Contribue and Win
+##
Contribute and Win
-We are super excited to announce our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10000.00** in cash prizes! +We are super excited to announce our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes! ##
Why YOLOv5
-
- -**Its Fast!** -**Its Accurate!** -**But above all YOLOv5 is super easy to get up and running due to its PyTorch integration.** - -
- -
-
- -
+

+
+ YOLOv5-P5 640 Figure (click to expand) + +

+
+
+ Figure Notes (click to expand) + + * GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS. + * EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8. + * **Reproduce** by `python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt` +
### Pretrained Checkpoints [assets]: https://github.com/ultralytics/yolov5/releases -| Model | size
(pixels) | mAPval
0.5:0.95 | mAPtest
0.5:0.95 | mAPval
0.5 | Speed
V100 (ms) | | params
(M) | FLOPS
640 (B) | -| ---------------------- | --------------------- | ----------------------- | ------------------------ | ------------------ | ----------------------- | --- | ------------------ | --------------------- | -| [YOLOv5s][assets] | 640 | 36.7 | 36.7 | 55.4 | **2.0** | | 7.3 | 17.0 | -| [YOLOv5m][assets] | 640 | 44.5 | 44.5 | 63.1 | 2.7 | | 21.4 | 51.3 | -| [YOLOv5l][assets] | 640 | 48.2 | 48.2 | 66.9 | 3.8 | | 47.0 | 115.4 | -| [YOLOv5x][assets] | 640 | **50.4** | **50.4** | **68.8** | 6.1 | | 87.7 | 218.8 | -| | | | | | | | | -| [YOLOv5s6][assets] | 1280 | 43.3 | 43.3 | 61.9 | **4.3** | | 12.7 | 17.4 | -| [YOLOv5m6][assets] | 1280 | 50.5 | 50.5 | 68.7 | 8.4 | | 35.9 | 52.4 | -| [YOLOv5l6][assets] | 1280 | 53.4 | 53.4 | 71.1 | 12.3 | | 77.2 | 117.7 | -| [YOLOv5x6][assets] | 1280 | **54.4** | **54.4** | **72.0** | 22.4 | | 141.8 | 222.9 | -| | | | | | | | | -| [YOLOv5x6][assets] TTA | 1280 | **55.0** | **55.0** | **72.0** | 70.8 | | - | - | +|Model |size
(pixels) |mAPval
0.5:0.95 |mAPtest
0.5:0.95 |mAPval
0.5 |Speed
V100 (ms) | |params
(M) |FLOPs
640 (B) +|--- |--- |--- |--- |--- |--- |---|--- |--- +|[YOLOv5s][assets] |640 |36.7 |36.7 |55.4 |**2.0** | |7.3 |17.0 +|[YOLOv5m][assets] |640 |44.5 |44.5 |63.1 |2.7 | |21.4 |51.3 +|[YOLOv5l][assets] |640 |48.2 |48.2 |66.9 |3.8 | |47.0 |115.4 +|[YOLOv5x][assets] |640 |**50.4** |**50.4** |**68.8** |6.1 | |87.7 |218.8 +| | | | | | | | | +|[YOLOv5s6][assets] |1280 |43.3 |43.3 |61.9 |**4.3** | |12.7 |17.4 +|[YOLOv5m6][assets] |1280 |50.5 |50.5 |68.7 |8.4 | |35.9 |52.4 +|[YOLOv5l6][assets] |1280 |53.4 |53.4 |71.1 |12.3 | |77.2 |117.7 +|[YOLOv5x6][assets] |1280 |**54.4** |**54.4** |**72.0** |22.4 | |141.8 |222.9 +| | | | | | | | | +|[YOLOv5x6][assets] TTA |1280 |**55.0** |**55.0** |**72.0** |70.8 | |- |- + +
+ Table Notes (click to expand) + + * APtest denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results denote val2017 accuracy. + * AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65` + * SpeedGPU averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45` + * All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation). + * Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python test.py --data coco.yaml --img 1536 --iou 0.7 --augment` +

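
The TTA note above reproduces results with `test.py --augment`; the same idea can be sketched through PyTorch Hub, assuming the hub model's forward pass accepts `size` and `augment` keyword arguments (true for recent YOLOv5 versions, but worth verifying against your install):

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
img = 'https://ultralytics.com/images/zidane.jpg'

# Test-Time Augmentation: larger inference size plus flip/scale augmentation.
# 'size' and 'augment' are assumed keyword arguments of the hub model's forward pass.
results = model(img, size=1536, augment=True)
results.print()
```
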
##
Getting Involved and Contributing
-Please make sure to read the [Contributing Guide](CONTRIBUTING.md) before making a pull request. - -**Thank you to all the people who already contributed to YOLOv5!** +We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started. -Issues should be raised in [GitHub Issues](https://github.com/ultralytics/yolov5/issues) provided yours does not already exist. ##
Get in Touch
-**For issues or trouble running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues) and create a new issue provided yours does not already exist.** -
-For business or professional support requests please visit: -[https://ultralytics.com/contact](https://ultralytics.com/contact) +- For issues running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). +- For business or professional support requests please visit +[https://ultralytics.com/contact](https://ultralytics.com/contact).
From 9ef5ee92290a5c34d93b76e23e655a3fdc37403f Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 14:26:21 +0200 Subject: [PATCH 15/34] rearrange quickstarts --- README.md | 60 +++++++++++++++++++++++++++---------------------------- 1 file changed, 30 insertions(+), 30 deletions(-) diff --git a/README.md b/README.md index 927990f2ef80..5f192eebff42 100755 --- a/README.md +++ b/README.md @@ -63,53 +63,54 @@ See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on tr These tutorials are intended to get you started using YOLOv5 quickly for demonstration purposes. Head to the [YOLOv5 Docs](https://docs.ultralytics.com) for more in-depth details. -
+
-Install Locally +Install ```bash $ git clone https://github.com/ultralytics/yolov5 $ pip install -r requirements.txt ``` -
-
-Inference Using Repository Clone - -_NOTE : In order to follow this tutorial please ensure you have installed YOLOv5 locally._ - -```bash -# Run inference based on selected input -$ python detect.py --source 0 # webcam - file.jpg # image - file.mp4 # video - path/ # directory - path/*.jpg # glob - 'https://youtu.be/NUsoVlDFqZg' # YouTube video - 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream -``` -
-Inference Using PyTorch Hub +Inference This tutorial will automatically download YOLOv5 models before running inference on the supplied image. ```python import torch -# Load a model -model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5m, yolov5l, yolov5x +# Model +model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5m, yolov5x, custom -# Define images -img = 'https://ultralytics.com/images/zidane.jpg' +# Images +img = 'https://ultralytics.com/images/zidane.jpg' # or file, PIL, OpenCV, numpy, multiple -# Run inference +# Inference results = model(img) -# Handle results -results.print() # or .show(), .save(), .pandas().xyz() +# Results +results.print() # or .show(), .save(), .crop(), .pandas(), etc. +``` + +
+ + + +
+Inference with detect.py + +`detect.py` runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`. +```bash +$ python detect.py --source 0 # webcam + file.jpg # image + file.mp4 # video + path/ # directory + path/*.jpg # glob + 'https://youtu.be/NUsoVlDFqZg' # YouTube video + 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream ```
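
Alongside `detect.py`'s multi-source handling, the PyTorch Hub snippet above notes that `img` may be a file, PIL image, OpenCV/NumPy array, or several of these at once. A minimal sketch of batched inference under that assumption; the sample images below are the ones shipped in the repository's `data/images` folder:

```python
import torch
from PIL import Image

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# A batch mixing input types (list support is assumed from the comment in the snippet above)
imgs = [
    'data/images/bus.jpg',                 # file path
    Image.open('data/images/zidane.jpg'),  # PIL image
]

results = model(imgs)  # batched inference
results.print()        # per-image summaries
results.save()         # saves annotated copies; the output directory depends on the version
```
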
@@ -117,15 +118,14 @@ results.print() # or .show(), .save(), .pandas().xyz()
Training -_NOTE : In order to follow this tutorial please ensure you have installed YOLOv5 locally._ - +Run commands below to reproduce results on [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices). ```bash $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64 yolov5m 40 yolov5l 24 yolov5x 16 - ``` +
From 724f82776826fe08afa78284900d6932983de0b5 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 14:30:14 +0200 Subject: [PATCH 16/34] API cleanup --- README.md | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index 5f192eebff42..b3dcff6d25c4 100755 --- a/README.md +++ b/README.md @@ -43,22 +43,18 @@

YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development. YOLOv5 is current under active development, and all code, models, and documentation are subject to modification or deletion without notice. Use at your own risk.

-
- -###
Try the API
-
Instantly run YOLOv5 models using our JSON API.
https://ultralytics.com/yolov5
- +
##
Documentation
See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment. -##
Quick Start Tutorials
+##
Quick Start Examples
These tutorials are intended to get you started using YOLOv5 quickly for demonstration purposes. Head to the [YOLOv5 Docs](https://docs.ultralytics.com) for more in-depth details. @@ -77,7 +73,7 @@ $ pip install -r requirements.txt
Inference -This tutorial will automatically download YOLOv5 models before running inference on the supplied image. +Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36): ```python import torch From f6a71441bdbceac9aca1ec3cfe327c241840539f Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 14:35:26 +0200 Subject: [PATCH 17/34] PyTorch Hub cleanup --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index b3dcff6d25c4..73fd6b3f38f1 100755 --- a/README.md +++ b/README.md @@ -56,14 +56,14 @@ See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on tr ##
Quick Start Examples
-These tutorials are intended to get you started using YOLOv5 quickly for demonstration purposes. -Head to the [YOLOv5 Docs](https://docs.ultralytics.com) for more in-depth details.
Install +Python >= 3.6.0 required with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) dependencies installed: + ```bash $ git clone https://github.com/ultralytics/yolov5 $ pip install -r requirements.txt @@ -73,7 +73,7 @@ $ pip install -r requirements.txt
Inference -Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36): +Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). ```python import torch From e63b574aa7352b17b7f54a1c12dbdbb88cbdbff5 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 14:41:32 +0200 Subject: [PATCH 18/34] Add tutorials --- README.md | 26 +++++++++++++++++++++++--- 1 file changed, 23 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 73fd6b3f38f1..05d709124813 100755 --- a/README.md +++ b/README.md @@ -125,6 +125,27 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size
+
+Tutorials + +## Tutorials + +* [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)  🚀 RECOMMENDED +* [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results)  ☘️ RECOMMENDED +* [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)  🌟 NEW +* [Supervisely Ecosystem](https://github.com/ultralytics/yolov5/issues/2518)  🌟 NEW +* [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475) +* [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)  ⭐ NEW +* [TorchScript, ONNX, CoreML Export](https://github.com/ultralytics/yolov5/issues/251) 🚀 +* [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303) +* [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318) +* [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304) +* [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607) +* [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)  ⭐ NEW +* [TensorRT Deployment](https://github.com/wang-xinyu/tensorrtx) + +
+ ##
Environments and Integrations
Get started with YOLOv5 in less than a few minutes using our integrations. @@ -213,15 +234,14 @@ We are super excited to announce our first-ever Ultralytics YOLOv5 🚀 EXPORT C
-##
Getting Involved and Contributing
+##
Contribute
We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started. ##
Get in Touch
-- For issues running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). -- For business or professional support requests please visit +For issues running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business or professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).
From ec6ddbee25f00d1fdb59b8b6aefda7cd5056c185 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 14:48:54 +0200 Subject: [PATCH 19/34] cleanup --- README.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/README.md b/README.md index 05d709124813..7f0c327f502c 100755 --- a/README.md +++ b/README.md @@ -41,7 +41,7 @@

-YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development. YOLOv5 is current under active development, and all code, models, and documentation are subject to modification or deletion without notice. Use at your own risk. +YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.

@@ -148,7 +148,7 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size ##
Environments and Integrations
-Get started with YOLOv5 in less than a few minutes using our integrations. +Get started with YOLOv5 in seconds with our verified environments and integrations.
@@ -173,14 +173,14 @@ Get started with YOLOv5 in less than a few minutes using our integrations. -Add these your toolkit to ensure you get the most out of your training experience: +Add these to your toolkit to ensure you get the most out of your training experience: -* [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) - Build better models faster with experiment tracking, dataset versioning, and model management. -* [Supervisely](https://app.supervise.ly/signup) - Training and deployment platform for YOLOv5 models. +* [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) - Experiment tracking, dataset versioning, and model management integration. +* [Supervisely](https://app.supervise.ly/signup) - YOLOv5 training and deployment integration. -##
Contribute and Win
+##
Compete and Win
-We are super excited to announce our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes! +We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!
@@ -239,7 +239,7 @@ We are super excited to announce our first-ever Ultralytics YOLOv5 🚀 EXPORT C We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started. -##
Get in Touch
+##
Contact
For issues running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business or professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact). From 0f27fbe9d71b6f94675e7ee6f6eed2d5e5e94181 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 15:05:22 +0200 Subject: [PATCH 20/34] update CONTRIBUTING.md --- CONTRIBUTING.md | 65 +++++++++++++++++++++++-------------------------- 1 file changed, 30 insertions(+), 35 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 4ca3500da4a5..acf74448c1fd 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,54 +1,49 @@ -# Contributing to Transcriptase -We love your input! We want to make contributing to this project as easy and transparent as possible, whether it's: +## Contributing to YOLOv5 🚀 + +We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible, whether it's: - Reporting a bug - Discussing the current state of the code - Submitting a fix -- Proposing new features +- Proposing a new feature - Becoming a maintainer -## We Develop with Github -We use github to host code, to track issues and feature requests, as well as accept pull requests. +YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute you will be helping push the frontiers of what's possible in AI 😃! + + +## Submitting a Pull Request (PR) 🛠️ -## We Use [Github Flow](https://guides.github.com/introduction/flow/index.html), So All Code Changes Happen Through Pull Requests -Pull requests are the best way to propose changes to the codebase (we use [Github Flow](https://guides.github.com/introduction/flow/index.html)). We actively welcome your pull requests: +To allow your work to be integrated as seamlessly as possible, we advise you to: +- ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master an automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch: +```bash +git remote add upstream https://github.com/ultralytics/yolov5.git +git fetch upstream +git checkout feature # <----- replace 'feature' with local branch name +git merge upstream/master +git push -u origin -f +``` +- ✅ Verify all Continuous Integration (CI) **checks are passing**. +- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ -Bruce Lee -1. Fork the repo and create your branch from `master`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. Issue that pull request! -## Any contributions you make will be under the MIT Software License -In short, when you submit code changes, your submissions are understood to be under the same [MIT License](http://choosealicense.com/licenses/mit/) that covers the project. Feel free to contact the maintainers if that's a concern. +## Submitting a Bug Report 🐛 -## Report bugs using Github's [issues](https://github.com/briandk/transcriptase-atom/issues) -We use GitHub issues to track public bugs. Report a bug by [opening a new issue](); it's that easy! 
+For us to investigate an issue we would need to be able to reproduce it ourselves first. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem. -## Write bug reports with detail, background, and sample code -[This is an example](http://stackoverflow.com/q/12488905/180626) of a bug report I wrote, and I think it's not a bad model. Here's [another example from Craig Hockenberry](http://www.openradar.me/11905408), an app developer whom I greatly respect. +When asking a question, people will be better able to provide help if you provide **code** that they can easily understand and use to **reproduce** the problem. This is referred to by community members as creating a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces the problem should be: -**Great Bug Reports** tend to have: +* ✅ **Minimal** – Use as little code as possible that still produces the same problem +* ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself +* ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem -- A quick summary and/or background -- Steps to reproduce - - Be specific! - - Give sample code if you can. [My stackoverflow question](http://stackoverflow.com/q/12488905/180626) includes sample code that *anyone* with a base R setup can run to reproduce what I was seeing -- What you expected would happen -- What actually happens -- Notes (possibly including why you think this might be happening, or stuff you tried that didn't work) +In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code should be: -People *love* thorough bug reports. I'm not even kidding. +* ✅ **Current** – Verify that your code is up-to-date with current GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new copy to ensure your problem has not already been resolved by previous commits. +* ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️. -## Use a Consistent Coding Style -I'm again borrowing these from [Facebook's Guidelines](https://github.com/facebook/draft-js/blob/a9316a723f9e918afde44dea68b5f9f39b7d9b00/CONTRIBUTING.md) +If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better understand and diagnose your problem. -* 2 spaces for indentation rather than tabs -* You can try running `npm run lint` for style unification ## License -By contributing, you agree that your contributions will be licensed under its MIT License. 
-## References -This document was adapted from the open-source contribution guidelines for [Facebook's Draft](https://github.com/facebook/draft-js/blob/a9316a723f9e918afde44dea68b5f9f39b7d9b00/CONTRIBUTING.md) +By contributing, you agree that your contributions will be licensed under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/) From a270f7d497335dff1a74640de67215d38385e140 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 15:10:18 +0200 Subject: [PATCH 21/34] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 7f0c327f502c..2431bee6b4bb 100755 --- a/README.md +++ b/README.md @@ -45,7 +45,7 @@ YOLOv5 🚀 is a family of object detection architectures and models pretrained

- +
From 3cfa58f9fa94e7643d1d305c3ccd2f192875f2dd Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 15:20:33 +0200 Subject: [PATCH 22/34] update wandb link --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 2431bee6b4bb..fc4fe861cafc 100755 --- a/README.md +++ b/README.md @@ -175,7 +175,7 @@ Get started with YOLOv5 in seconds with our verified environments and integratio Add these to your toolkit to ensure you get the most out of your training experience: -* [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) - Experiment tracking, dataset versioning, and model management integration. +* [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) - Experiment tracking, dataset versioning, and model management integration. * [Supervisely](https://app.supervise.ly/signup) - YOLOv5 training and deployment integration. ##
Compete and Win
From 8ee146155bc166eb7b8e7c65394a499afa411d35 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 15:20:55 +0200 Subject: [PATCH 23/34] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index fc4fe861cafc..bc2a292b1f43 100755 --- a/README.md +++ b/README.md @@ -166,7 +166,7 @@ Get started with YOLOv5 in seconds with our verified environments and integratio - +
From 2cb66351b8ca7b1e517ec26ab96a67ed63408904 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 15:23:38 +0200 Subject: [PATCH 24/34] remove tutorials header --- README.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/README.md b/README.md index bc2a292b1f43..b468ebbe3661 100755 --- a/README.md +++ b/README.md @@ -128,8 +128,6 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size
Tutorials -## Tutorials - * [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)  🚀 RECOMMENDED * [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results)  ☘️ RECOMMENDED * [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)  🌟 NEW From 32055e75eaf7fa526cacb434f95343acd650015a Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 15:29:09 +0200 Subject: [PATCH 25/34] update environments and integrations --- README.md | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/README.md b/README.md index b468ebbe3661..ca3be960cc97 100755 --- a/README.md +++ b/README.md @@ -146,7 +146,7 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size ##
Environments and Integrations
-Get started with YOLOv5 in seconds with our verified environments and integrations. +Get started in seconds with our verified environments and integrations, including [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) for automatic YOLOv5 experiment logging. Click on each icon below for details. - -Add these to your toolkit to ensure you get the most out of your training experience: - -* [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) - Experiment tracking, dataset versioning, and model management integration. -* [Supervisely](https://app.supervise.ly/signup) - YOLOv5 training and deployment integration. - ##
Compete and Win
We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes! From dc323b5e785978d8c050f6dacace4d44129296b5 Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 15:35:37 +0200 Subject: [PATCH 26/34] Comment API image --- README.md | 24 +++++++++++++----------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/README.md b/README.md index ca3be960cc97..cde624dfd182 100755 --- a/README.md +++ b/README.md @@ -15,27 +15,27 @@
@@ -44,8 +44,10 @@ YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.

+
From 42f898d5dc5926e8a1ea1058616ed111068b392c Mon Sep 17 00:00:00 2001 From: Glenn Jocher Date: Sat, 12 Jun 2021 15:41:00 +0200 Subject: [PATCH 27/34] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index cde624dfd182..e8f60e61ab4d 100755 --- a/README.md +++ b/README.md @@ -41,7 +41,7 @@

-YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development. +YOLOv5 🚀 is a family of object detection models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.