
ValueError: x1 must be greater than or equal to x0, when use the val.py to val the onnx model #12473

Closed
dengxiongshi opened this issue Dec 6, 2023 · 5 comments
Labels: question (Further information is requested), Stale

Comments


dengxiongshi commented Dec 6, 2023


Question

When I use val.py to validate the ONNX model, I get this error:

(NN) D:\python_work\yolov5>python val.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best_train.onnx --device 0 --name train_mode
val: data=E:\downloads\compress\datasets\train_data\train_data.yaml, weights=['runs\\train\\WI_PRW_SSW_SSM_20231127\\weights\\best_train.onnx'], batch_size=16, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=val, device=0, workers=0, single_cls=False, augment=False, verbose=False, save_txt=Fal
se, save_hybrid=False, save_conf=False, save_json=False, project=runs\val, name=train_mode, exist_ok=False, half=False, dnn=False
YOLOv5  v7.0-240-g84ec8b5 Python-3.8.18 torch-1.9.1+cu111 CUDA:0 (GeForce RTX 2060, 6144MiB)

Loading runs\train\WI_PRW_SSW_SSM_20231127\weights\best_train.onnx for ONNX Runtime inference...
Forcing --batch-size 1 square inference (1,3,640,640) for non-PyTorch models
val: Scanning E:\downloads\compress\datasets\train_data\labels\val.cache... 2575 images, 0 backgrounds, 0 corrupt: 100%|██████████| 2575/2575 [00:00<?, ?it/s]
                 Class     Images  Instances          P          R      mAP50   mAP50-95:   0%|          | 1/2575 [00:00<05:14,  8.18it/s]Exception in thread Thread-3:
Traceback (most recent call last):
  File "D:\Anaconda3\envs\NN\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "D:\Anaconda3\envs\NN\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "D:\python_work\yolov5\utils\plots.py", line 175, in plot_images
    annotator.box_label(box, label, color=color)
  File "D:\Anaconda3\envs\NN\lib\site-packages\ultralytics\utils\plotting.py", line 108, in box_label
    self.draw.rectangle(box, width=self.lw, outline=color)  # box
  File "D:\Anaconda3\envs\NN\lib\site-packages\PIL\ImageDraw.py", line 294, in rectangle
    self.draw.draw_rectangle(xy, ink, 0, width)
ValueError: x1 must be greater than or equal to x0
                 Class     Images  Instances          P          R      mAP50   mAP50-95:   0%|          | 3/2575 [00:00<03:04, 13.96it/s]Exception in thread Thread-7:
Traceback (most recent call last):
  File "D:\Anaconda3\envs\NN\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "D:\Anaconda3\envs\NN\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "D:\python_work\yolov5\utils\plots.py", line 175, in plot_images
    annotator.box_label(box, label, color=color)
  File "D:\Anaconda3\envs\NN\lib\site-packages\ultralytics\utils\plotting.py", line 108, in box_label
    self.draw.rectangle(box, width=self.lw, outline=color)  # box
  File "D:\Anaconda3\envs\NN\lib\site-packages\PIL\ImageDraw.py", line 294, in rectangle
    self.draw.draw_rectangle(xy, ink, 0, width)
ValueError: x1 must be greater than or equal to x0
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 2575/2575 [01:23<00:00, 30.71it/s]
                   all       2575      30443          0          0          0          0
Speed: 0.4ms pre-process, 12.5ms inference, 0.8ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\val\train_mode2

Additional

The environment is Python 3.8 on Windows 10. The installed packages are as follows:

Package                   Version              Editable project location
------------------------- -------------------- -------------------------
about-time                4.2.1
absl-py                   2.0.0
aiohttp                   3.8.6
aiosignal                 1.3.1
alive-progress            3.1.5
antlr4-python3-runtime    4.9.3
anyio                     4.0.0
appdirs                   1.4.4
argon2-cffi               23.1.0
argon2-cffi-bindings      21.2.0
arrow                     1.3.0
astor                     0.8.1
asttokens                 2.4.1
async-lru                 2.0.4
async-timeout             4.0.3
attrs                     23.1.0
Babel                     2.13.0
backcall                  0.2.0
bce-python-sdk            0.8.90
beautifulsoup4            4.12.2
bleach                    6.1.0
blinker                   1.6.3
cachetools                5.3.1
certifi                   2023.7.22
cffi                      1.16.0
charset-normalizer        3.2.0
click                     8.1.7
colorama                  0.4.6
coloredlogs               15.0.1
comm                      0.2.0
contourpy                 1.1.1
cycler                    0.11.0
Cython                    3.0.5
dataclasses               0.6
debugpy                   1.8.0
decorator                 5.1.1
defusedxml                0.7.1
docker-pycreds            0.4.0
easygui                   0.98.3
exceptiongroup            1.1.3
executing                 2.0.1
fastjsonschema            2.18.1
filelock                  3.12.4
Flask                     3.0.0
flask-babel               4.0.0
flatbuffers               23.5.26
fonttools                 4.42.1
fqdn                      1.5.1
frozenlist                1.4.0
fsspec                    2023.10.0
future                    0.18.3
gitdb                     4.0.10
GitPython                 3.1.37
google-auth               2.23.2
google-auth-oauthlib      1.0.0
grapheme                  0.6.0
grpcio                    1.59.0
h11                       0.14.0
httpcore                  0.18.0
httpx                     0.25.0
huggingface-hub           0.19.0
humanfriendly             10.0
idna                      3.4
imageio                   2.33.0
importlib-metadata        6.8.0
importlib-resources       6.0.1
ipykernel                 6.26.0
ipython                   8.12.3
ipywidgets                8.1.1
isoduration               20.11.0
itsdangerous              2.1.2
jedi                      0.19.1
Jinja2                    3.1.2
joblib                    1.3.2
json5                     0.9.14
jsonpointer               2.4
jsonschema                4.19.2
jsonschema-specifications 2023.7.1
jupyter                   1.0.0
jupyter_client            8.6.0
jupyter-console           6.6.3
jupyter_core              5.5.0
jupyter-events            0.9.0
jupyter-lsp               2.2.0
jupyter_server            2.10.0
jupyter_server_terminals  0.4.4
jupyterlab                4.0.8
jupyterlab-pygments       0.2.2
jupyterlab_server         2.25.1
jupyterlab-widgets        3.0.9
kiwisolver                1.4.5
lap                       0.4.0
lazy_loader               0.3
lightning-utilities       0.9.0
lxml                      4.9.3
Markdown                  3.4.4
markdown-it-py            3.0.0
MarkupSafe                2.1.3
matplotlib                3.7.3
matplotlib-inline         0.1.6
mdurl                     0.1.2
mistune                   3.0.2
motmetrics                1.4.0
MouseInfo                 0.1.3
mpmath                    1.3.0
multidict                 6.0.4
nanodet                   1.0.0                d:\python_work\nanodet
natsort                   8.4.0
nbclient                  0.9.0
nbconvert                 7.11.0
nbformat                  5.9.2
nest-asyncio              1.5.8
networkx                  3.1
notebook                  7.0.6
notebook_shim             0.2.3
numpy                     1.23.5
oauthlib                  3.2.2
omegaconf                 2.3.0
onnx                      1.12.0
onnx-simplifier           0.4.33
onnxruntime               1.16.0
onnxruntime-gpu           1.16.0
onnxsim                   0.4.33
opencv-python             4.5.5.62
opt-einsum                3.3.0
overrides                 7.4.0
packaging                 23.1
paddle-bfloat             0.1.7
paddle2onnx               1.0.6
paddlelite                2.13rc0
paddlepaddle              2.5.0
paddlepaddle-gpu          2.5.0
paddleseg                 2.8.0                d:\python_work\paddleseg
paddleslim                2.5.0
pandas                    2.0.3
pandocfilters             1.5.0
parso                     0.8.3
pathtools                 0.1.2
pickleshare               0.7.5
Pillow                    10.0.0
pip                       23.2.1
pkgutil_resolve_name      1.3.10
platformdirs              3.11.0
prettytable               3.9.0
prometheus-client         0.18.0
prompt-toolkit            3.0.39
protobuf                  3.20.1
psutil                    5.9.5
pure-eval                 0.2.2
py-cpuinfo                9.0.0
pyaml                     23.9.7
pyasn1                    0.5.0
pyasn1-modules            0.3.0
PyAutoGUI                 0.9.54
pycocotools               2.0.7
pycparser                 2.21
pycryptodome              3.19.0
PyGetWindow               0.0.9
Pygments                  2.16.1
PyMsgBox                  1.0.9
pyparsing                 3.1.1
pyperclip                 1.8.2
PyQt5                     5.15.10
PyQt5-Qt5                 5.15.2
PyQt5-sip                 12.13.0
pyreadline3               3.4.1
PyRect                    0.2.0
PyScreeze                 0.1.30
python-dateutil           2.8.2
python-json-logger        2.0.7
pytorch-lightning         1.9.0
pytweening                1.0.7
pytz                      2023.3.post1
PyWavelets                1.4.1
pywin32                   306
pywinpty                  2.0.12
PyYAML                    6.0.1
pyzmq                     25.1.1
qtconsole                 5.5.0
QtPy                      2.4.1
rarfile                   4.1
referencing               0.30.2
requests                  2.31.0
requests-oauthlib         1.3.1
rfc3339-validator         0.1.4
rfc3986-validator         0.1.1
rich                      13.5.3
rpds-py                   0.12.0
rsa                       4.9
safetensors               0.4.0
scikit-image              0.21.0
scikit-learn              1.3.1
scipy                     1.10.1
seaborn                   0.13.0
Send2Trash                1.8.2
sentry-sdk                1.31.0
setproctitle              1.3.2
setuptools                68.2.2
six                       1.16.0
smmap                     5.0.1
sniffio                   1.3.0
soupsieve                 2.5
stack-data                0.6.3
strsimpy                  0.2.1
swig                      4.1.1
sympy                     1.12
tabulate                  0.9.0
tensorboard               2.14.0
tensorboard-data-server   0.7.1
termcolor                 2.3.0
terminado                 0.17.1
thop                      0.1.1.post2209072238
threadpoolctl             3.2.0
tifffile                  2023.7.10
timm                      0.9.10
tinycss2                  1.2.1
tomli                     2.0.1
torch                     1.9.1+cu111
torchaudio                0.10.1+cu111
torchmetrics              1.2.0
torchsummary              1.5.1
torchvision               0.10.1+cu111
tornado                   6.3.3
tqdm                      4.66.1
traitlets                 5.13.0
types-python-dateutil     2.8.19.14
typing_extensions         4.8.0
tzdata                    2023.3
ultralytics               8.0.196
uri-template              1.3.0
urllib3                   2.0.5
visualdl                  2.5.3
wandb                     0.16.0
wcwidth                   0.2.8
webcolors                 1.13
webencodings              0.5.1
websocket-client          1.6.4
Werkzeug                  3.0.0
wheel                     0.38.4
widgetsnbextension        4.0.9
x2paddle                  1.3.9
xlwt                      1.3.0
xmltodict                 0.13.0
yarl                      1.9.2
zipp                      3.17.0
@dengxiongshi dengxiongshi added the question Further information is requested label Dec 6, 2023

github-actions bot commented Dec 6, 2023

👋 Hello @dengxiongshi, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

dengxiongshi (Author) commented:

This is the part of the export.py code I changed:

def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
    parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
    parser.add_argument('--train', action='store_true', help='model.train() mode')
    parser.add_argument('--keras', action='store_true', help='TF: use Keras')
    parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
    parser.add_argument('--int8', action='store_true', help='CoreML/TF/OpenVINO INT8 quantization')
    parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes')
    parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
    parser.add_argument('--opset', type=int, default=10, help='ONNX: opset version')
    parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log')
    parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)')
    parser.add_argument('--nms', action='store_true', help='TF: add NMS to model')
    parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model')
    parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')
    parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')
    parser.add_argument(
        '--include',
        nargs='+',
        default=['onnx'],
        help='torchscript, onnx, openvino, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle')
    opt = parser.parse_known_args()[0] if known else parser.parse_args()
    print_args(vars(opt))
    return opt

In run function:

 # Update model
    # model.eval()
    model.train() if train else model.eval()
    for k, m in model.named_modules():
        if isinstance(m, Detect):
            m.inplace = inplace
            m.dynamic = dynamic
            m.export = True

    for _ in range(2):
        y = model(im)  # dry runs
    if half and not coreml:
        im, model = im.half(), model.half()  # to FP16
    # shape = tuple((y[0] if isinstance(y, tuple) else y).shape)  # model output shape
    shape = tuple(y[0].shape)  # model output shape
    metadata = {'stride': int(max(model.stride)), 'names': model.names}  # model metadata
    LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} with output shape {shape} ({file_size(file):.1f} MB)")

In export_onnx function:

torch.onnx.export(
        model.cpu() if dynamic else model,  # --dynamic only compatible with cpu
        im.cpu() if dynamic else im,
        f,
        verbose=False,
        opset_version=opset,
        training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
        do_constant_folding=not train,  # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False
        input_names=['images'],
        output_names=output_names,
        dynamic_axes=dynamic or None)

I am using yolov5-7.0. Before adding --train, that option did not exist in export.py, so I changed the export.py code to match the yolov5-6.2 behavior.
The first ONNX export is:

(NN) D:\python_work\yolov5>python export.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best.pt --train --simplify --opset 10
export: data=D:\python_work\yolov5\data\coco128.yaml, weights=['runs\\train\\WI_PRW_SSW_SSM_20231127\\weights\\best.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=True, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=10, verbose=False, workspa
ce=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx']
YOLOv5  v7.0-240-g84ec8b5 Python-3.8.18 torch-1.9.1+cu111 CPU

Fusing layers... 
YOLOv5s_hs summary: 157 layers, 7351674 parameters, 0 gradients, 17.5 GFLOPs

PyTorch: starting from runs\train\WI_PRW_SSW_SSM_20231127\weights\best.pt with output shape (1, 3, 80, 80, 10) (14.3 MB)

ONNX: starting export with onnx 1.12.0...
ONNX: simplifying with onnx-simplifier 0.4.33...
ONNX: export success  2.2s, saved as runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx (28.1 MB)

Export complete (2.6s)
Results saved to D:\python_work\yolov5\runs\train\WI_PRW_SSW_SSM_20231127\weights
Detect:          python detect.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx
Validate:        python val.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx
PyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', 'runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx')
Visualize:       https://netron.app

I got three outputs:
[image: Netron screenshot showing three output nodes]
The second ONNX export is:

(NN) D:\python_work\yolov5>python export.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best.pt --simplify --opset 10
export: data=D:\python_work\yolov5\data\coco128.yaml, weights=['runs\\train\\WI_PRW_SSW_SSM_20231127\\weights\\best.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=10, verbose=False, worksp
ace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx']
YOLOv5  v7.0-240-g84ec8b5 Python-3.8.18 torch-1.9.1+cu111 CPU

Fusing layers...
YOLOv5s_hs summary: 157 layers, 7351674 parameters, 0 gradients, 17.5 GFLOPs

PyTorch: starting from runs\train\WI_PRW_SSW_SSM_20231127\weights\best.pt with output shape (1, 25200, 10) (14.3 MB)

ONNX: starting export with onnx 1.12.0...
ONNX: simplifying with onnx-simplifier 0.4.33...
ONNX: export success  2.3s, saved as runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx (28.5 MB)

Export complete (2.8s)
Results saved to D:\python_work\yolov5\runs\train\WI_PRW_SSW_SSM_20231127\weights
Detect:          python detect.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx
Validate:        python val.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx
PyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', 'runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx')
Visualize:       https://netron.app

I got one output:
[image: Netron screenshot showing a single output node]


dengxiongshi commented Dec 6, 2023

When I use the ONNX files to test accuracy with val.py, the first ONNX model gives an error:

(NN) D:\python_work\yolov5>python val.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best_train.onnx --device 0 --name train_mode
val: data=E:\downloads\compress\datasets\train_data\train_data.yaml, weights=['runs\\train\\WI_PRW_SSW_SSM_20231127\\weights\\best_train.onnx'], batch_size=16, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=val, device=0, workers=0, single_cls=False, augment=False, verbose=False, save_txt=Fal
se, save_hybrid=False, save_conf=False, save_json=False, project=runs\val, name=train_mode, exist_ok=False, half=False, dnn=False
YOLOv5  v7.0-240-g84ec8b5 Python-3.8.18 torch-1.9.1+cu111 CUDA:0 (GeForce RTX 2060, 6144MiB)

Loading runs\train\WI_PRW_SSW_SSM_20231127\weights\best_train.onnx for ONNX Runtime inference...
Forcing --batch-size 1 square inference (1,3,640,640) for non-PyTorch models
val: Scanning E:\downloads\compress\datasets\train_data\labels\val.cache... 2575 images, 0 backgrounds, 0 corrupt: 100%|██████████| 2575/2575 [00:00<?, ?it/s]
                 Class     Images  Instances          P          R      mAP50   mAP50-95:   0%|          | 1/2575 [00:00<05:14,  8.18it/s]Exception in thread Thread-3:
Traceback (most recent call last):
  File "D:\Anaconda3\envs\NN\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "D:\Anaconda3\envs\NN\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "D:\python_work\yolov5\utils\plots.py", line 175, in plot_images
    annotator.box_label(box, label, color=color)
  File "D:\Anaconda3\envs\NN\lib\site-packages\ultralytics\utils\plotting.py", line 108, in box_label
    self.draw.rectangle(box, width=self.lw, outline=color)  # box
  File "D:\Anaconda3\envs\NN\lib\site-packages\PIL\ImageDraw.py", line 294, in rectangle
    self.draw.draw_rectangle(xy, ink, 0, width)
ValueError: x1 must be greater than or equal to x0
                 Class     Images  Instances          P          R      mAP50   mAP50-95:   0%|          | 3/2575 [00:00<03:04, 13.96it/s]Exception in thread Thread-7:
Traceback (most recent call last):
  File "D:\Anaconda3\envs\NN\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "D:\Anaconda3\envs\NN\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "D:\python_work\yolov5\utils\plots.py", line 175, in plot_images
    annotator.box_label(box, label, color=color)
  File "D:\Anaconda3\envs\NN\lib\site-packages\ultralytics\utils\plotting.py", line 108, in box_label
    self.draw.rectangle(box, width=self.lw, outline=color)  # box
  File "D:\Anaconda3\envs\NN\lib\site-packages\PIL\ImageDraw.py", line 294, in rectangle
    self.draw.draw_rectangle(xy, ink, 0, width)
ValueError: x1 must be greater than or equal to x0
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 2575/2575 [01:23<00:00, 30.71it/s]
                   all       2575      30443          0          0          0          0
Speed: 0.4ms pre-process, 12.5ms inference, 0.8ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\val\train_mode2

The second ONNX model validates successfully:

(NN) D:\python_work\yolov5>python val.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx --device 0 --name train_no
val: data=E:\downloads\compress\datasets\train_data\train_data.yaml, weights=['runs\\train\\WI_PRW_SSW_SSM_20231127\\weights\\best.onnx'], batch_size=16, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=val, device=0, workers=0, single_cls=False, augment=False, verbose=False, save_txt=False, sa
ve_hybrid=False, save_conf=False, save_json=False, project=runs\val, name=train_no, exist_ok=False, half=False, dnn=False
YOLOv5  v7.0-240-g84ec8b5 Python-3.8.18 torch-1.9.1+cu111 CUDA:0 (GeForce RTX 2060, 6144MiB)

Loading runs\train\WI_PRW_SSW_SSM_20231127\weights\best.onnx for ONNX Runtime inference...
Forcing --batch-size 1 square inference (1,3,640,640) for non-PyTorch models
val: Scanning E:\downloads\compress\datasets\train_data\labels\val.cache... 2575 images, 0 backgrounds, 0 corrupt: 100%|██████████| 2575/2575 [00:00<?, ?it/s]
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 2575/2575 [01:32<00:00, 27.76it/s]
                   all       2575      30443      0.807      0.719      0.771       0.51
                  face       2575       6954      0.835      0.687      0.743      0.352
                person       2575      19192      0.814      0.769      0.795      0.471
                   car       2575       4012      0.868      0.833      0.888      0.671
                   bus       2575        187      0.799      0.791      0.835      0.616
                 truck       2575         98      0.717      0.517      0.597      0.439
Speed: 0.4ms pre-process, 12.6ms inference, 1.0ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\val\train_no2

The .pt file also gives the right result:

(NN) D:\python_work\yolov5>python val.py --weights runs\train\WI_PRW_SSW_SSM_20231127\weights\best.pt --device 0 --name best_pt
val: data=E:\downloads\compress\datasets\train_data\train_data.yaml, weights=['runs\\train\\WI_PRW_SSW_SSM_20231127\\weights\\best.pt'], batch_size=16, imgsz=640, conf_thres=0.001, iou_thres=0.6, max_det=300, task=val, device=0, workers=0, single_cls=False, augment=False, verbose=False, save_txt=False, save
_hybrid=False, save_conf=False, save_json=False, project=runs\val, name=best_pt, exist_ok=False, half=False, dnn=False
YOLOv5  v7.0-240-g84ec8b5 Python-3.8.18 torch-1.9.1+cu111 CUDA:0 (GeForce RTX 2060, 6144MiB)

Fusing layers...
YOLOv5s_hs summary: 157 layers, 7351674 parameters, 0 gradients, 17.5 GFLOPs
val: Scanning E:\downloads\compress\datasets\train_data\labels\val.cache... 2575 images, 0 backgrounds, 0 corrupt: 100%|██████████| 2575/2575 [00:00<?, ?it/s]
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 161/161 [01:12<00:00,  2.21it/s]
                   all       2575      30443      0.826      0.717      0.774      0.513
                  face       2575       6954      0.836      0.683      0.741      0.352
                person       2575      19192      0.826      0.762      0.796      0.473
                   car       2575       4012      0.869      0.832      0.889      0.678
                   bus       2575        187      0.835      0.783      0.831      0.623
                 truck       2575         98      0.762      0.524      0.614      0.441
Speed: 0.1ms pre-process, 4.2ms inference, 0.7ms NMS per image at shape (16, 3, 640, 640)
Results saved to runs\val\best_pt

I also get the same problem when using yolov5-6.2.
Additionally, how can I get a single output by reshaping and concatenating the three outputs of the first exported ONNX model? My .pt file is here. The first ONNX file is best_train.zip, the second ONNX file is best.zip.

glenn-jocher (Member) commented:

@dengxiongshi it looks like you encountered an error while validating your ONNX model with val.py. The problem occurs only with your first ONNX model (exported with --train), while the second ONNX model and the PyTorch (.pt) model both validate successfully. A likely cause is that a training-mode export leaves the Detect head undecoded, so the exported model emits raw grid outputs that val.py's post-processing cannot interpret as valid boxes, which is consistent with the all-zero metrics and the x1 < x0 drawing errors.
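For context, PIL's ImageDraw.rectangle raises exactly this ValueError whenever a box arrives with reversed corners. A minimal defensive sketch (a hypothetical helper, not part of YOLOv5 or Pillow) that reorders coordinates before drawing:

```python
def sanitize_box(box):
    """Return (x0, y0, x1, y1) with corners ordered so that x0 <= x1 and
    y0 <= y1, which is what PIL's ImageDraw.rectangle requires."""
    x0, y0, x1, y1 = box
    return (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))

print(sanitize_box((100, 40, 20, 90)))  # (20, 40, 100, 90)
```

Note that reordering only hides the symptom in the plotting thread; the boxes are invalid in the first place because the model output was never decoded.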

Regarding your question about reshaping and concatenating the three outputs in the first exported ONNX file, you might find it helpful to refer to the Ultralytics YOLOv5 documentation for guidance on working with ONNX models and managing model outputs.
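As a rough illustration of what that decoding involves, here is a minimal NumPy sketch of the transform that Detect() normally performs in eval mode, assuming the default YOLOv5s strides (8, 16, 32) and anchor sets; treat it as a sketch under those assumptions, not a drop-in utility. Feeding three raw (1, 3, h, w, no) training-mode outputs through it and concatenating yields the familiar (1, 25200, no) tensor:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_level(raw, anchors, stride):
    """Decode one raw head output (bs, na, h, w, no) to (bs, na*h*w, no)."""
    bs, na, h, w, no = raw.shape
    # Cell grid: x/y index of every cell, shaped for broadcasting
    yv, xv = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack((xv, yv), axis=-1).reshape(1, 1, h, w, 2)
    y = sigmoid(raw)
    xy = (y[..., 0:2] * 2 - 0.5 + grid) * stride                      # centers in pixels
    wh = (y[..., 2:4] * 2) ** 2 * np.asarray(anchors).reshape(1, na, 1, 1, 2)
    out = np.concatenate((xy, wh, y[..., 4:]), axis=-1)               # obj + class scores
    return out.reshape(bs, na * h * w, no)

# Assumed yolov5s default anchors per level, and dummy raw outputs
anchors = [[(10, 13), (16, 30), (33, 23)],
           [(30, 61), (62, 45), (59, 119)],
           [(116, 90), (156, 198), (373, 326)]]
strides = (8, 16, 32)
raw_outputs = [np.random.randn(1, 3, 640 // s, 640 // s, 10).astype(np.float32)
               for s in strides]
pred = np.concatenate([decode_level(r, a, s)
                       for r, a, s in zip(raw_outputs, anchors, strides)], axis=1)
print(pred.shape)  # (1, 25200, 10)
```

The same reshape-and-concat structure could in principle be appended to the ONNX graph, but exporting without --train already bakes it in.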

It's great to see you've successfully obtained results with the second ONNX model and the PyTorch model! If you need further assistance in troubleshooting the issue with the first ONNX model, feel free to provide additional details, and the community will be happy to help.
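For reference, the 25200 rows in the single eval-mode output follow directly from the three detection levels at 640x640 input (strides 8, 16 and 32, three anchors per level):

```python
strides = (8, 16, 32)
na = 3  # anchors per detection level
rows = sum(na * (640 // s) ** 2 for s in strides)
print(rows)  # 25200 -> matches the (1, 25200, 10) eval-mode export shape
```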


github-actions bot commented Jan 6, 2024

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Jan 6, 2024
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Jan 16, 2024