
Apple MPS inference with detect.py #9596

Closed
1 of 2 tasks
lightonthefloor opened this issue Sep 26, 2022 · 9 comments · Fixed by #9600
Labels
bug Something isn't working

Comments

@lightonthefloor

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

Detection

Bug

When I run the command python detect.py --source 0 --device mps, there are no detections. But when I run on the CPU, there are.

Environment

  • YOLOv5 v6.2
  • macOS 12.6
  • conda 4.14
  • python 3.10.4

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@lightonthefloor added the bug (Something isn't working) label Sep 26, 2022
@github-actions
Contributor

github-actions bot commented Sep 26, 2022

👋 Hello @lightonthefloor, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@lightonthefloor MPS inference is waiting on a few remaining operations to be integrated into torch nightly, so you should receive an error similar to this:

python detect.py --source 0 --device mps

detect: weights=yolov5s.pt, source=0, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=mps, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v6.2-154-g5c34c4c Python-3.9.0 torch-1.12.1 MPS

Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
1/1: 0...  Success (inf frames 1920x1080 at 15.00 FPS)

Traceback (most recent call last):
  File "/Users/glennjocher/PycharmProjects/yolov5/detect.py", line 255, in <module>
    main(opt)
  File "/Users/glennjocher/PycharmProjects/yolov5/detect.py", line 250, in main
    run(**vars(opt))
  File "/Users/glennjocher/PycharmProjects/yolov5/venv39/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/Users/glennjocher/PycharmProjects/yolov5/detect.py", line 126, in run
    pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
  File "/Users/glennjocher/PycharmProjects/yolov5/utils/general.py", line 843, in non_max_suppression
    x = x[xc[xi]]  # confidence
NotImplementedError: The operator 'aten::index.Tensor' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
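The device check and the CPU-fallback flag quoted in the error above can be sketched as follows. This is a minimal, hedged sketch, not YOLOv5 code: it guards for older torch builds where `torch.backends.mps` may be absent, and note that `PYTORCH_ENABLE_MPS_FALLBACK` generally must be set before torch is imported for it to take effect.

```python
import os

# Workaround quoted in the PyTorch error above: enable CPU fallback for
# ops not yet implemented on MPS. Must be set BEFORE torch is imported.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

def pick_device():
    """Return 'mps' when the backend is built and available, else 'cpu'."""
    try:
        import torch
    except ImportError:
        return "cpu"  # torch not installed in this environment
    mps = getattr(torch.backends, "mps", None)  # absent on older torch builds
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

device = pick_device()
```

Setting the variable from the shell (`PYTORCH_ENABLE_MPS_FALLBACK=1 python detect.py --device mps`) is equivalent, and is what the comments below try.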

@lightonthefloor
Author

Nope. I have already set this environment variable.

@lightonthefloor
Author

(yolo-v5_test) trisoil@Trisoils-MacBook-Pro yolov5-master % python detect.py --source 0 --device mps
detect: weights=yolov5s.pt, source=0, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=mps, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False
YOLOv5 🚀 2022-9-17 Python-3.10.4 torch-1.12.1 MPS

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
[ WARN:0@1.789] global /private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_c6qq9eqk9d/croots/recipe/opencv-suite_1663872527491/work/modules/videoio/src/cap_gstreamer.cpp (862) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
1/1: 0... Success (inf frames 1920x1080 at 15.00 FPS)

/Users/trisoil/OneDrive - bupt.edu.cn/College Learning/Robocon/yolov5-master/utils/general.py:838: UserWarning: The operator 'aten::index.Tensor' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1659484612588/work/aten/src/ATen/mps/MPSFallback.mm:11.)
x = x[xc[xi]] # confidence

(:84503): GStreamer-CRITICAL **: 19:13:30.121: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
[ WARN:0@3.850] global /private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_c6qq9eqk9d/croots/recipe/opencv-suite_1663872527491/work/modules/videoio/src/cap_gstreamer.cpp (2180) open OpenCV | GStreamer warning: cannot link elements
0: 384x640 (no detections), 167.8ms
0: 384x640 (no detections), 17.8ms
0: 384x640 (no detections), 14.6ms
0: 384x640 (no detections), 15.4ms

@glenn-jocher
Member

glenn-jocher commented Sep 26, 2022

@lightonthefloor I'm getting a different error message if I use the fallback. Do you see the same thing? I'm using latest nightly.

PYTORCH_ENABLE_MPS_FALLBACK=1 python detect.py --device mps

[Screenshot 2022-09-26 at 13:42:29 — error message omitted]

@lightonthefloor
Author

No. I am not using the latest nightly; my torch version is 1.12.1.

@glenn-jocher
Member

glenn-jocher commented Sep 26, 2022

@lightonthefloor if you really want to use MPS today you can add this fix (insert predictions = predictions.cpu() in NMS where shown). This eliminates the need to pass the fallback flag, but it passes all predictions to CPU before NMS, which also takes time. My metrics on an M2 MacBook with this change are here. Note also that MPS is slow for the first image and then very fast on subsequent images of the same size.
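The one-line fix described above can be illustrated with a minimal stand-in. This is not real torch code — the `FakeTensor` class below is hypothetical, used only to show where the device transfer happens relative to NMS:

```python
# Hypothetical stand-in for a torch tensor, illustrating the workaround:
# move predictions from MPS to CPU before NMS so boolean indexing
# (the x = x[xc[xi]] line in utils/general.py) runs on a supported device.
class FakeTensor:
    def __init__(self, data, device="mps"):
        self.data = data
        self.device = device

    def cpu(self):
        """Mimic torch.Tensor.cpu(): return a copy on the CPU device."""
        return FakeTensor(self.data, device="cpu")

def non_max_suppression(pred, conf_thres=0.25):
    # Real NMS also does IoU suppression; here we only check that the
    # predictions reached the CPU before confidence filtering.
    assert pred.device == "cpu", "move predictions to CPU before NMS"
    return [p for p in pred.data if p[0] >= conf_thres]

pred = FakeTensor([[0.9, 10, 20, 110, 220], [0.1, 5, 5, 50, 50]], device="mps")
pred = pred.cpu()  # the fix: predictions = predictions.cpu() before NMS
detections = non_max_suppression(pred)
```

The trade-off is as stated above: the MPS-to-CPU transfer adds time to NMS, but inference itself stays on the GPU.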

[Screenshot 2022-09-26 at 13:51:52 — the NMS code change and M2 MacBook metrics omitted]

detect: weights=yolov5s.pt, source=data/images, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=mps, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v6.2-171-gbd9c0c4 Python-3.10.6 torch-1.13.0.dev20220926 MPS

Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
image 1/3 /Users/glennjocher/PycharmProjects/yolov5/data/images/bus.jpg: 640x480 9 persons, 2 buss, 143.4ms
image 2/3 /Users/glennjocher/PycharmProjects/yolov5/data/images/zidane.jpg: 384x640 5 persons, 5 ties, 189.1ms
video 3/3 (1/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 184.8ms
video 3/3 (2/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 12.8ms
video 3/3 (3/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 11.2ms
video 3/3 (4/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 13.9ms
video 3/3 (5/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 16.7ms
video 3/3 (6/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 8.4ms
video 3/3 (7/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 8.5ms
video 3/3 (8/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 7.8ms
video 3/3 (9/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 7.6ms
video 3/3 (10/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 8.1ms
video 3/3 (11/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 7.6ms
video 3/3 (12/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 8.1ms
video 3/3 (13/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 3 persons, 7.7ms
video 3/3 (14/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 3 persons, 8.0ms
video 3/3 (15/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 3 persons, 11.3ms
video 3/3 (16/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 3 persons, 7.8ms
video 3/3 (17/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 7.5ms
video 3/3 (18/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 8.1ms
video 3/3 (19/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 2 trucks, 8.0ms
video 3/3 (20/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 11 persons, 8.0ms
video 3/3 (21/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 9 persons, 7.8ms
video 3/3 (22/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 9 persons, 7.9ms
video 3/3 (23/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 10 persons, 7.7ms
video 3/3 (24/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 7.9ms
video 3/3 (25/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 7.8ms
video 3/3 (26/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 3 persons, 8.0ms
Speed: 1.2ms pre-process, 26.5ms inference, 9.7ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/exp27

@glenn-jocher
Member

Equivalent CPU times look like this: faster NMS but much slower inference.

detect: weights=yolov5s.pt, source=data/images, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v6.2-171-gbd9c0c4 Python-3.10.6 torch-1.13.0.dev20220926 CPU

Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
image 1/3 /Users/glennjocher/PycharmProjects/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 161.2ms
image 2/3 /Users/glennjocher/PycharmProjects/yolov5/data/images/zidane.jpg: 384x640 2 persons, 2 ties, 130.0ms
video 3/3 (1/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 139.6ms
video 3/3 (2/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 151.8ms
video 3/3 (3/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 136.5ms
video 3/3 (4/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 134.6ms
video 3/3 (5/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 145.0ms
video 3/3 (6/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 131.3ms
video 3/3 (7/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 136.9ms
video 3/3 (8/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 166.2ms
video 3/3 (9/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 131.6ms
video 3/3 (10/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 135.5ms
video 3/3 (11/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 160.2ms
video 3/3 (12/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 145.7ms
video 3/3 (13/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 143.7ms
video 3/3 (14/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 143.1ms
video 3/3 (15/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 176.4ms
video 3/3 (16/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 179.0ms
video 3/3 (17/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 154.8ms
video 3/3 (18/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 176.2ms
video 3/3 (19/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 truck, 164.1ms
video 3/3 (20/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 5 persons, 152.0ms
video 3/3 (21/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 3 persons, 147.0ms
video 3/3 (22/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 3 persons, 209.0ms
video 3/3 (23/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 4 persons, 173.6ms
video 3/3 (24/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 109.8ms
video 3/3 (25/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 (no detections), 104.3ms
video 3/3 (26/26) /Users/glennjocher/PycharmProjects/yolov5/data/images/IMG_2590_rotate180.MOV: 640x384 1 person, 167.0ms
Speed: 0.6ms pre-process, 150.2ms inference, 0.8ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/exp40

glenn-jocher added a commit that referenced this issue Sep 26, 2022
Until more ops are fully supported this update will allow for seamless MPS inference (but slower MPS to CPU transfer before NMS, so slower NMS times).

Partially resolves #9596

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
@glenn-jocher changed the title from "Not detected" to "Apple MPS inference with detect.py" Sep 26, 2022
@glenn-jocher
Member

@lightonthefloor good news 😃! Your original issue may now be fixed ✅ in PR #9600. To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks on Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
