
AttributeError on an Apple M1 #6975

Closed
DP1701 opened this issue Mar 14, 2022 · 17 comments
Labels
question Further information is requested Stale

Comments


DP1701 commented Mar 14, 2022

Search before asking

Question

Hello @glenn-jocher,

Do you also get this message when you run PyTorch 1.11 (CPU) on an Apple M1?

# This command was executed
python val.py --weights /Users/work/Desktop/best.pt --data /Users/work/Desktop/dataset.yaml --task train --iou 0.65 --img 1280 --name test
 File "/Users/work/.conda/envs/YOLOv5_env/lib/python3.9/site-packages/torch/storage.py", line 520, in _free_weak_ref
AttributeError: 'NoneType' object has no attribute '_free_weak_ref'
Exception ignored in: <function StorageWeakRef.__del__ at 0x113d13820>
Traceback (most recent call last):
  File "/Users/work/.conda/envs/YOLOv5_env/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 36, in __del__
  File "/Users/work/.conda/envs/YOLOv5_env/lib/python3.9/site-packages/torch/storage.py", line 520, in _free_weak_ref
AttributeError: 'NoneType' object has no attribute '_free_weak_ref'
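For context, this class of error comes from a `__del__` finalizer running after the attribute (or module) it needs has already been cleared to `None`, typically during interpreter shutdown. A minimal sketch of the pattern (this mirrors the shape of the failure, not PyTorch's actual code; all names here are illustrative):

```python
# Minimal sketch of the failure pattern (not PyTorch's actual code): a
# __del__ finalizer dereferences an attribute that has already been
# cleared to None, e.g. during interpreter shutdown.
class StorageWeakRefSketch:
    def __init__(self, storage):
        self._storage = storage

    def __del__(self):
        # Raises AttributeError once `_storage` has been set to None;
        # Python reports this as "Exception ignored in: __del__".
        self._storage._free_weak_ref()

class FakeStorage:
    def _free_weak_ref(self):
        pass

ref = StorageWeakRefSketch(FakeStorage())
ref._storage = None  # simulate shutdown clearing the reference
try:
    ref.__del__()
except AttributeError as exc:
    message = str(exc)
ref._storage = FakeStorage()  # restore so the real finalizer stays quiet

print(message)  # 'NoneType' object has no attribute '_free_weak_ref'
```

Because the exception is raised inside a finalizer, Python cannot propagate it and only prints the "Exception ignored" notice, which is why the run otherwise completes normally.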

The message appears only after the validation is completed.

Additional

No response

DP1701 added the question (Further information is requested) label Mar 14, 2022
@glenn-jocher
Member

@DP1701 I think this is a PyTorch issue unrelated to YOLOv5, i.e. pytorch/pytorch#74016


DP1701 commented Mar 16, 2022

@glenn-jocher Thanks for reporting it.

Nevertheless, PyTorch 1.11 has brought a significant performance increase on the M1 chip.

@zhiqwang
Contributor

I guess you're using Anaconda. You could try Miniconda instead; Anaconda does not support the M1 very well.


DP1701 commented Mar 16, 2022

@zhiqwang I use Miniforge, which I installed via Homebrew. But I installed PyTorch using pip3 inside a conda environment.

@glenn-jocher
Member

@DP1701 I haven't seen any performance increases with PyTorch on M1 using pip install. What install method are you using to see performance improvements?

I do see that M1 can run CoreML models (but not PyTorch models) extremely fast. Results show 13X speedup vs CPU on base 2020 M1 Macbook Air:

Results

YOLOv5 🚀 v6.1-25-gcaf7ad0 torch 1.11.0 CPU

YOLOv5s inference time (640x640 image):
PyTorch 1.11.0 CPU    344 ms
CoreML 5.2.0           27 ms

Reproduce

git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt

python export.py --weights yolov5s.pt --include coreml

python detect.py --weights yolov5s.pt
python detect.py --weights yolov5s.mlmodel
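If you want to average latency yourself rather than read it off the detect.py log, a minimal timing harness looks like this (a hedged sketch; `fn` is a stand-in for whatever inference callable you benchmark, not a YOLOv5 API):

```python
# Minimal latency harness: average wall-clock time over several runs
# after a short warmup, reported in milliseconds.
import time

def mean_latency_ms(fn, warmup=3, runs=10):
    for _ in range(warmup):  # warm up caches before timing
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1e3

# Example with a dummy workload standing in for model inference:
latency = mean_latency_ms(lambda: sum(i * i for i in range(10_000)))
print(f"{latency:.2f} ms")
```

Warmup runs matter particularly for the first CoreML call, which includes model compilation overhead you usually don't want in the average.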


DP1701 commented Mar 18, 2022

@glenn-jocher I jumped from version 1.9 directly to 1.11. At the beginning I had installed PyTorch via conda directly, because it was recommended by some others (e.g. on towardsdatascience.com).
A quick comparison:
1050 images, 3 classes, YOLOv5m6 -> val.py
PyTorch 1.9.1 -> ~52 min for execution
PyTorch 1.11  -> ~6 min for execution

@glenn-jocher
Member

@DP1701 ah ok got it, thanks! I will try a conda install.


glenn-jocher commented Mar 19, 2022

@DP1701 I'm not sure I understand. From your above comment you say you pip installed PyTorch 1.11 in a Conda environment and observed performance increases? This is not my case on M1 with 1.11. Can you post a screenshot of your python detect.py output on M1?


DP1701 commented Mar 22, 2022

@glenn-jocher I guess I did not explain this correctly. At the beginning I installed PyTorch 1.9.1 with conda, because the tutorial page recommended this.

I used this command from the PyTorch page:

conda install pytorch==1.9.1 torchvision==0.10.1 -c pytorch

Until recently, I stuck with version 1.9.1 because it just worked fine. After I found this entry on the PyTorch page:

# MacOS Conda binaries are for x86_64 only, for M1 please use wheels
conda install pytorch torchvision torchaudio -c pytorch

I switched to version 1.11. I just created a new conda environment but installed PyTorch with pip in it. So no conda install used.

I have attached two screenshots.

With PyTorch 1.9.1:
[screenshot: with_torch_1_9_1]

With PyTorch 1.11:
[screenshot: with_torch_1_11]

I have therefore noticed a performance improvement.
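When comparing numbers like these, it is worth ruling out an x86_64 interpreter running under Rosetta, since pip then pulls x86_64 wheels for torch that run much slower. A quick check (a hedged sketch, not part of the thread's commands):

```python
# Check whether the running Python build is native arm64 (Apple silicon)
# or x86_64 (e.g. under Rosetta translation on an M1).
import platform
import struct

arch = platform.machine()          # 'arm64' on a native M1 interpreter
bits = struct.calcsize("P") * 8    # pointer size -> 32- or 64-bit build
print(f"arch={arch}, {bits}-bit")
```

On an M1 Mac, a native interpreter reports `arm64`; seeing `x86_64` there would mean every wheel pip installs is also x86_64.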

@glenn-jocher
Member

@DP1701 ah ok got it! I installed torch with pip also. On MacBook M1 I see 344 ms as the average for python detect.py. Could you run the default command python detect.py and report your average? Also, which type of M1 chip do you have? (There are 4 now: M1, M1 Pro, M1 Max, M1 Ultra.)


DP1701 commented Mar 22, 2022

@glenn-jocher When I run the default command, I get the following result:

python detect.py --weights yolov5s.pt

[screenshot: results]

I have the M1 Max.


DP1701 commented Mar 22, 2022

@glenn-jocher I have exported my model to CoreML. It is much faster, but unfortunately something is wrong: the class names are not the right ones, and I can't change the image size.

python export.py --weights best.pt --include coreml

best.pt:
[screenshot: torch]

best.mlmodel:
[screenshot: coreML]

When I want to change the image size, I get this error:

python detect.py --weights best.mlmodel --source test_video.mp4 --imgsz 1280 --conf 0.5 

[screenshot of the error]

@glenn-jocher
Member

@DP1701 wow!!! M1 Max is superfast. This is really interesting, we should establish some benchmarks on the full M1 range.

CoreML models are only capable of fixed inference sizes, i.e. if you export at --img 640 then you can only run inference at --img 640.


DP1701 commented Mar 22, 2022

@glenn-jocher If you want, I can run some benchmarks on the M1 Max.

Ahh, I understand. Can I export the model so that it runs the inference test on an image size of 1280x720?

Do you have any idea why the object classes are wrong in the CoreML model?

@glenn-jocher
Member

@DP1701 for CoreML models you should also pass your --data to detect.py so it can read the names from it (they aren't attached to the model like with PyTorch models).

I think you should be able to export.py --img 1280 720 and then try detect.py --img 1280 720, but I haven't tried myself.


DP1701 commented Mar 23, 2022

@glenn-jocher The object classes are now displayed correctly. Thanks for the hint!
Regarding the image size, unfortunately it did not work. Here is what I did:

python export.py --weights best.pt --include coreml --imgsz 720 1280
python detect.py --weights best.mlmodel --source test_video.mp4 --imgsz 720 1280 --conf 0.5 --data names.yaml

[screenshot]

If I export directly with --imgsz 768 1280 (the ArgumentParser help says 'inference size h,w'), then it works. This is probably related to rectangular inference, or am I wrong?
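The pattern of 720 failing while 768 works matches YOLOv5's stride rounding rather than rectangular inference itself: image dimensions are rounded up to a multiple of the model's maximum stride, and P6 models such as YOLOv5m6 have a max stride of 64. A minimal sketch (assuming this mirrors make_divisible/check_img_size in YOLOv5's utils/general.py):

```python
# Round each image dimension up to the nearest multiple of the model's
# max stride, as YOLOv5's size check does. 720 is not divisible by 64,
# so it gets bumped to 768; 1280 is already a multiple and is kept.
import math

def make_divisible(x, divisor):
    return math.ceil(x / divisor) * divisor

def check_img_size(imgsz, stride=32):
    return [make_divisible(x, stride) for x in imgsz]

print(check_img_size([720, 1280], stride=64))  # [768, 1280]
```

So exporting at a stride-aligned size like 768x1280, as done above, is the expected workaround.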

BTW: I think we might get a PyTorch version with GPU support at the next WWDC. Then it could be even better.


github-actions bot commented Apr 23, 2022

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
