
Error while running while training with the command train.py #3710

Closed
jaskiratsingh2000 opened this issue Jun 21, 2021 · 19 comments
Labels: question (Further information is requested), Stale

Comments

jaskiratsingh2000 commented Jun 21, 2021

Question:

Hi @glenn-jocher, I am trying to train YOLOv5 on my own custom dataset with the command below and am getting the following error. Can you please help me rectify it?

Command run:

python3 train.py --img 416 --batch 80 --epochs 100 --data './data.yaml' --cfg ./models/yolov5s.yaml --weights 'yolov5s.pt'

Error:

train: weights=yolov5s.pt, cfg=./models/yolov5s.yaml, data=./data.yaml, hyp=data/hyp.scratch.yaml, epochs=100, batch_size=80, img_size=[416], rect=False, resume=False, nosave=False, notest=False, noautoanchor=False, evolve=False, bucket=, cache_images=False, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, entity=None, name=exp, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1
github: up to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 v5.0-220-ge8810a5 torch 1.7.0a0+e85d494 CPU

hyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0
tensorboard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
wandb: Install Weights & Biases for YOLOv5 logging with 'pip install wandb' (recommended)

                 from  n    params  module                                  arguments                     
  0                -1  1       880  models.common.Focus                     [3, 8, 3]                     
  1                -1  1       592  models.common.Conv                      [8, 8, 3, 2]                  
  2                -1  1       336  models.common.C3                        [8, 8, 1]                     
  3                -1  1       592  models.common.Conv                      [8, 8, 3, 2]                  
  4                -1  1       336  models.common.C3                        [8, 8, 1]                     
  5                -1  1      1184  models.common.Conv                      [8, 16, 3, 2]                 
  6                -1  1      1248  models.common.C3                        [16, 16, 1]                   
  7                -1  1      3504  models.common.Conv                      [16, 24, 3, 2]                
  8                -1  1      1512  models.common.SPP                       [24, 24, [5, 9, 13]]          
  9                -1  1      2736  models.common.C3                        [24, 24, 1, False]            
 10                -1  1       416  models.common.Conv                      [24, 16, 1, 1]                
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
 12           [-1, 6]  1         0  models.common.Concat                    [1]                           
 13                -1  1      1504  models.common.C3                        [32, 16, 1, False]            
 14                -1  1       144  models.common.Conv                      [16, 8, 1, 1]                 
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
 16           [-1, 4]  1         0  models.common.Concat                    [1]                           
 17                -1  1       400  models.common.C3                        [16, 8, 1, False]             
 18                -1  1       592  models.common.Conv                      [8, 8, 3, 2]                  
 19          [-1, 14]  1         0  models.common.Concat                    [1]                           
 20                -1  1      1248  models.common.C3                        [16, 16, 1, False]            
 21                -1  1      2336  models.common.Conv                      [16, 16, 3, 2]                
 22          [-1, 10]  1         0  models.common.Concat                    [1]                           
 23                -1  1      2928  models.common.C3                        [32, 24, 1, False]            
 24      [17, 20, 23]  1      1071  models.yolo.Detect                      [2, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [8, 16, 24]]
Model Summary: 247 layers, 23559 parameters, 23559 gradients

Transferred 51/314 items from yolov5s.pt
Scaled weight_decay = 0.000625
Optimizer groups: 54 .bias, 54 conv.weight, 51 other
val: Scanning 'valid/labels.cache' images and labels... 29 found,
Plotting labels... 
train: Scanning 'train/labels.cache' images and labels... 315 fou

autoanchor: Analyzing anchors... anchors/target = 5.44, Best Possible Recall (BPR) = 0.9984
Image sizes 416 train, 416 test
Using 4 dataloader workers
Logging results to runs/train/exp3
Starting training for 100 epochs...

     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
      0/99        0G    0.1086   0.04881   0.02939    0.1868     
Traceback (most recent call last):
  File "train.py", line 647, in <module>
    main(opt)
  File "train.py", line 548, in main
    train(opt.hyp, opt, device)
  File "train.py", line 359, in train
    loggers['tb'].add_graph(torch.jit.trace(de_parallel(model), imgs[0:1], strict=False), [])
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 733, in trace
    return trace_module(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 934, in trace_module
    module._c._create_method_from_trace(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/Desktop/yolov5/models/yolo.py", line 123, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/ubuntu/Desktop/yolov5/models/yolo.py", line 154, in forward_once
    x = m(x)  # run
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/Desktop/yolov5/models/common.py", line 171, in forward
    return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/Desktop/yolov5/models/common.py", line 42, in forward
    return self.act(self.bn(self.conv(x)))
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 423, in forward
    return self._conv_forward(input, self.weight)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
Tensor:
(1,1,.,.) = 
 0.01 *
  4.9591 -4.2480 -1.8661
   4.5166 -9.0576  5.7709
  -1.9791  4.8950  1.3374

[... several hundred further lines of the weight-tensor printout omitted; the original dump breaks off mid-output at block (1,8,.,.) ...]
@glenn-jocher please let me know. Your response would be of great help. Looking forward to hearing from you. Thanks and regards
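
For context, the RuntimeError above comes from the torch.jit.trace call that train.py makes for TensorBoard graph logging (train.py line 359 in the traceback), and it can occur on some torch builds (the log shows 1.7.0a0, a pre-release) when tracing captures a tensor that still requires grad. Updating to a recent stable torch release is the first thing to try; failing that, a hedged local workaround, sketched here against train.py's own local names from the traceback (model, imgs, loggers, de_parallel; torch is already imported there), is to trace with gradients disabled and guard the call:

# Sketch only, not an official fix: replaces the add_graph call in train.py.
# Tracing under no_grad() may avoid "Cannot insert a Tensor that requires
# grad as a constant"; the try/except keeps a logging failure from
# aborting training.
try:
    with torch.no_grad():
        traced = torch.jit.trace(de_parallel(model), imgs[0:1], strict=False)
    loggers['tb'].add_graph(traced, [])
except Exception as e:
    print(f'TensorBoard graph logging failed: {e}')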

jaskiratsingh2000 added the question (Further information is requested) label Jun 21, 2021
@jaskiratsingh2000 (Author)

@glenn-jocher For your information, I am running this on my local Ubuntu machine. So, can you please let me know exactly what changes I have to make to rectify these errors?

@jaskiratsingh2000 (Author)

@glenn-jocher Can you please look into it? This seems like a bug to me. Can you please check once and try running this locally instead of on Google Colab?

@glenn-jocher (Member) commented Jun 21, 2021

@jaskiratsingh2000 your error is not reproducible:
[Screenshot 2021-06-21 at 14.12.21: the same training command running without the reported error]

We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • Minimal – Use as little code as possible that still produces the same problem
  • Complete – Provide all parts someone else needs to reproduce your problem in the question itself
  • Reproducible – Test the code you're about to provide to make sure it reproduces the problem

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

  • Current – Verify that your code is up-to-date with current GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits.
  • Unmodified – Your problem must be reproducible without any modifications to the codebase in this repository. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃
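
For instance, a minimal reproduction for an issue like this one can be as small as a fresh clone plus a one-epoch run on the bundled coco128 smoke-test dataset (a sketch of the shape of a repro, not the reporter's actual setup):

git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
python train.py --img 416 --batch 16 --epochs 1 --data coco128.yaml --weights yolov5s.pt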

@jaskiratsingh2000 (Author)

@glenn-jocher Can you please test this on your local machine once?

Or, if you have any idea about this issue, could you please let me know? The issue seems to be with the yolov5 code itself.

@jaskiratsingh2000 (Author)

@glenn-jocher this issue was also referred to here: #3284 (comment)

So please let me know.

@jaskiratsingh2000 (Author)

@glenn-jocher Now, running the same command, I am getting the following error:


     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
      0/99        0G    0.1065   0.05497   0.02875    0.1903     
               Class     Images     Labels          P          R 
Traceback (most recent call last):
  File "train.py", line 647, in <module>
    main(opt)
  File "train.py", line 548, in main
    train(opt.hyp, opt, device)
  File "train.py", line 377, in train
    results, maps, _ = test.test(data_dict,
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/ubuntu/Desktop/yolov5/test.py", line 231, in test
    nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class
  File "<__array_function__ internals>", line 5, in bincount
TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe'

@glenn-jocher please help me with this. 🙏
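
For context, this TypeError is characteristic of 32-bit platforms, where numpy's default index type (np.intp) is int32, so bincount's 'safe' cast from an int64 array fails. A minimal sketch of the failure mode and a possible local cast (assuming the class indices fit in the platform integer):

import numpy as np

# On 32-bit numpy builds np.intp is int32, so np.bincount cannot safely
# accept an int64 input array. Casting explicitly to the platform integer
# sidesteps the 'safe' casting rule.
stats3 = np.array([0, 1, 1, 0], dtype=np.int64)  # stand-in for stats[3]
nt = np.bincount(stats3.astype(np.intp), minlength=2)
print(nt)  # -> [2 2]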

@jaskiratsingh2000 (Author)

@glenn-jocher If you are around, please let me know.

@jaskiratsingh2000 (Author)

Can this be run directly with test.py as well, @glenn-jocher?

@glenn-jocher (Member)

@jaskiratsingh2000 yes, test.py is used to evaluate model accuracy. See the header of test.py for a usage example:

yolov5/test.py, lines 1 to 6 at b83e1a4:

"""Test a trained YOLOv5 model accuracy on a custom dataset
Usage:
$ python path/to/test.py --data coco128.yaml --weights yolov5s.pt --img 640
"""

@jaskiratsingh2000 (Author)

@glenn-jocher But how can I get accuracy for different configurations?
Suppose I want to compute the accuracy with the default parameters and then with changed parameters; how can I do that with this command?

@glenn-jocher (Member)

@jaskiratsingh2000 you can pass any arguments you want to train.py. See the file for a full list of arguments.
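
For example, the command at the top of this thread already does this, passing a model config alongside the data and weights:

python train.py --img 416 --batch 80 --epochs 100 --data ./data.yaml --cfg ./models/yolov5s.yaml --weights yolov5s.pt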

@jaskiratsingh2000 (Author) commented Jun 22, 2021

Okay, @glenn-jocher, let me be clearer about what I want to do; I don't think I have explained myself well so far.

As you can see in the yolov5s.yaml file, there are two parameters named depth_multiple (0.33) and width_multiple (0.50), shown below:

depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple

I want to change these values to depth_multiple = 0.01 and width_multiple = 0.02 and then compute the performance metrics.
The thing is, to compute the performance metrics I have to run python3 test.py --data coco.yaml --weights yolov5s.pt, which only gives me the default metrics for the unchanged model; test.py does not accept a --cfg models/yolov5s.yaml argument for the changed parameter values. This is what I have been trying to ask for a while: how can I get the performance metrics after changing the values in the configuration file?

Can I do this directly within the test.py file? If not, what exact steps do I have to follow, @glenn-jocher?

Your response would be highly appreciated.
Thanks and regards

@glenn-jocher (Member)

@jaskiratsingh2000 the correct workflow is train > test > export. Naturally, you must train a model before you test it.
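
A sketch of that workflow for the modified config (the yolov5s_custom.yaml filename is hypothetical, and at this point in the repo's history the export script lived at models/export.py):

# 1. Train with the edited config; --weights '' starts from random init,
#    since yolov5s.pt layer shapes will not match the shrunken multiples.
python train.py --img 416 --data ./data.yaml --cfg ./models/yolov5s_custom.yaml --weights '' --epochs 100
# 2. Evaluate the resulting checkpoint:
python test.py --data ./data.yaml --weights runs/train/exp/weights/best.pt --img 416
# 3. Export the trained model if needed:
python models/export.py --weights runs/train/exp/weights/best.pt --img-size 416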

@jaskiratsingh2000 (Author)

Okay, let me try, @glenn-jocher. And how do I export it?

@glenn-jocher (Member) commented Jun 22, 2021

@jaskiratsingh2000 (Author)

Okay, thank you very much @glenn-jocher. If I have questions I'll drop them in this issue.

@jaskiratsingh2000 (Author)

@glenn-jocher Can you please look at this issue? yolov5 doesn't work for me, so I had to move to yolov3: ultralytics/yolov3#1795

@jaskiratsingh2000 (Author)

@glenn-jocher Can you please answer this? I really need your help here.

github-actions bot (Contributor) commented Jul 23, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
