xView.yaml label category mapping with YOLO has problem with id=75 #5469
👋 Hello @geobao, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

**Requirements**

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

```shell
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
```

**Environments**

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

**Status**

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@geobao hi, thank you for your suggestion on how to improve YOLOv5 🚀! The fastest and easiest way to incorporate your ideas into the official codebase is to submit a Pull Request (PR) implementing your idea, and if applicable providing before and after profiling/inference/training results to help us understand the improvement your feature provides. This allows us to directly see the changes in the code and to understand how they affect workflows and performance. Please see our ✅ Contributing Guide to get started.

About this specific topic: training works correctly for us on xView; we are not able to reproduce any issues with corrupted images:

```shell
python train.py --data xView.yaml
```

```text
train: weights=yolov5s.pt, cfg=, data=xView.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=300, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, patience=100, freeze=0, save_period=-1, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
github: up to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 v6.0-45-g042f02f torch 1.10.0 CPU

hyperparameters: lr0=0.01, lrf=0.1, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Weights & Biases: run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs (RECOMMENDED)
TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=60

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  2    115712  models.common.C3                        [128, 128, 2]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1    175305  models.yolo.Detect                      [60, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 270 layers, 7181449 parameters, 7181449 gradients, 16.4 GFLOPs

Transferred 343/349 items from yolov5s.pt
Scaled weight_decay = 0.0005
optimizer: SGD with parameter groups 57 weight, 60 weight (no decay), 60 bias
train: Scanning '../datasets/xView/images/autosplit_train' images and labels...760 found, 0 missing, 0 empty, 0 corrupted: 100%|█| 760/760 [00:05<00:00, 139.20it/
train: New cache created: ../datasets/xView/images/autosplit_train.cache
val: Scanning '../datasets/xView/images/autosplit_val' images and labels...86 found, 0 missing, 0 empty, 0 corrupted: 100%|███████| 86/86 [00:06<00:00, 13.73it/s]
val: New cache created: ../datasets/xView/images/autosplit_val.cache
Plotting labels...
etc...
```
@geobao also, if I examine our indexing list I see 61 unique values including -1, which means the indexing list already incorporates 60 unique classes:

```
import numpy as np
np.unique(xview_class2index)
Out[6]:
array([-1,  0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
       16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
       33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
       50, 51, 52, 53, 54, 55, 56, 57, 58, 59])
len(np.unique(xview_class2index))
Out[7]: 61
```
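The conversion step drops any label whose mapped class is -1. A minimal sketch of that filtering (with hypothetical sample values, not the exact code from xView.yaml):

```python
# Sketch: drop labels whose original xView id maps to -1 (no YOLO class).
# Hypothetical subset of the mapping table for illustration only.
class2index_sample = {11: 0, 75: -1, 76: 18}

# Labels as (original_id, x, y, w, h) in normalized coordinates.
raw_labels = [(75, 0.5, 0.5, 0.1, 0.1), (11, 0.2, 0.2, 0.05, 0.05)]

converted = [(class2index_sample[c], *box)
             for c, *box in raw_labels
             if class2index_sample[c] != -1]
# Only the id=11 label survives, remapped to YOLO class 0;
# the id=75 label is silently discarded.
print(converted)
```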
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@glenn-jocher How can I create the autosplit_train.txt and autosplit_val.txt files for xView? I cannot find them anywhere.
@hadi-ghnd you don't need to call autosplit(); this is done automatically when you first train on xView after downloading the data. Directions are in the yaml:

yolov5/data/xView.yaml, Lines 1 to 9 in 19e0208
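For reference, a rough sketch of what that automatic split produces (a hypothetical helper, not YOLOv5's actual `autosplit()` implementation): shuffle the image list and write `autosplit_train.txt` / `autosplit_val.txt` next to the images directory.

```python
import random
from pathlib import Path

def autosplit_sketch(image_dir, train_frac=0.9, seed=0):
    """Write autosplit_train.txt / autosplit_val.txt listing image paths.

    Hypothetical sketch: splits images into train/val by a fixed fraction.
    """
    image_dir = Path(image_dir)
    files = sorted(p for p in image_dir.iterdir()
                   if p.suffix.lower() in {".tif", ".jpg", ".png"})
    random.Random(seed).shuffle(files)  # deterministic shuffle for a fixed seed
    n_train = int(len(files) * train_frac)
    for name, subset in (("autosplit_train.txt", files[:n_train]),
                         ("autosplit_val.txt", files[n_train:])):
        (image_dir.parent / name).write_text(
            "\n".join(str(p) for p in subset) + "\n")
```

With 760 training and 86 validation images found in the log above, the real split evidently uses a roughly 90/10 fraction like this sketch.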
@glenn-jocher ok, thank you.
Then I tried training with
I thought maybe I should create the
@glenn-jocher thank you. I updated the Python version and disabled
@hadi-ghnd there is no pretrained xView model. Train batch images are generated automatically, i.e. runs/train/exp/train_batch*.jpg
@glenn-jocher thanks a lot. I see them.
hi @glenn-jocher @hadi-ghnd, thanks for your discussion on the xView dataset issue. I encountered a similar problem. I followed the instructions to download the dataset as:

but when I run train.py with
@twangnh good news 😃! Your original issue may now be fixed ✅ in PR #9807. To receive this update:
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
Search before asking
YOLOv5 Component
Other
Bug
There is a yaml specific to the xView dataset (xView.yaml). This file contains a Python script that converts the original labels into YOLO-suitable labels.

The original xView labels use class ids ranging from 11 to 94, covering a total of 60 classes. For YOLO we therefore want ids from 0 to 59 without gaps, and a list maps each original class id to its new YOLO id. However, id=75 exists in the original dataset and is not mapped to a valid YOLO id; it is assigned -1 instead. As a result, every image containing this class is marked as corrupted and ignored during training.
The mapping occurs in this line of code:
yolov5/data/xView.yaml
Line 59 in 24bea5e
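To illustrate the mapping mechanism with a small hypothetical id list (not the full xView table): a sparse set of original ids is compressed into contiguous YOLO ids, and any entry left at -1 has no YOLO class, so its labels are dropped.

```python
# Hypothetical sparse ids standing in for xView's 11-94 range.
original_ids = [11, 12, 13, 15, 17]

# Lookup table indexed by original id; -1 marks ids with no YOLO class.
class2index = [-1] * (max(original_ids) + 1)
for yolo_id, orig_id in enumerate(sorted(original_ids)):
    class2index[orig_id] = yolo_id

print(class2index[15])  # -> 3, the fourth contiguous YOLO id
print(class2index[14])  # -> -1, unmapped: labels with this id are discarded
```

The bug reported here is exactly the second case: original id 75 sits at a -1 slot in xview_class2index, so its annotations never reach training.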
Environment
No response
Minimal Reproducible Example
To explore the original class ids from the dataset, the original labels can be loaded into a DataFrame like this:
Download manually from https://challenge.xviewdataset.org
current link for training labels > here
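The original snippet was lost in this capture; a minimal sketch of inspecting the original class ids (assuming the labels follow the GeoJSON structure of the downloaded training-label file, with a tiny inline sample standing in for the real file):

```python
import pandas as pd

# Tiny inline stand-in for the real GeoJSON "features" list; the actual
# file would be parsed with json.load() and has many more properties.
geojson = {"features": [
    {"properties": {"type_id": 11, "image_id": "100.tif"}},
    {"properties": {"type_id": 75, "image_id": "100.tif"}},
]}

df = pd.DataFrame(f["properties"] for f in geojson["features"])
print(sorted(df["type_id"].unique()))  # which original class ids occur
```

Running this over the full label file would reveal whether id=75 actually appears in the annotations.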
Additional
No response
Are you willing to submit a PR?