
Using YOLOv5 with Neural Compute Stick2 #552

Closed
hghari opened this issue Jul 29, 2020 · 33 comments · Fixed by #6057
Labels
question Further information is requested Stale

Comments


hghari commented Jul 29, 2020

❔Question

Hello, I have successfully converted the trained YOLOv5 model to Intermediate Representation to use it with the NCS2. However, when I load the model on the NCS2 it gives wrong results: all the values are negative. Loading the same model on the CPU runs without any problem and gives correct values. The question is: can YOLOv5 be used on the NCS2, and if yes, what are the right steps to make it work correctly?
Thanks in advance

Additional context

@hghari hghari added the question Further information is requested label Jul 29, 2020
@glenn-jocher (Member)

@hghari I'm not qualified to answer this as I have no experience with the cited hardware, but I'll leave this open for community support! Good luck.

@Jacobsolawetz (Contributor)

@hghari I'm working on this direction as well - though I don't yet have a solution.

Let's stay in touch throughout the process :D

Did you start by converting ONNX to OpenVINO?


hghari commented Jul 30, 2020

@glenn-jocher thanks
@Jacobsolawetz I would like to. thanks. Exactly I use export.py to convert the model to ONNX (with opset =10) and then use openvino to convert this model to IR (bin and xml).

@Jacobsolawetz (Contributor)

@hghari, very nice, @jimsu2012 and I did a similar conversion.

We just received the NCS in the mail today so will be trying to deploy in the next few days.

We will keep you posted on any success there!


hghari commented Jul 30, 2020

@Jacobsolawetz Looking forward to hearing from you.


hghari commented Aug 2, 2020

@Jacobsolawetz hi, I gave up on using the YOLOv5 model because of the inconsistencies between CPU and NCS2 results. Please let me know if you have any success. Thanks.

@Jacobsolawetz (Contributor)

@hghari makes sense. None yet; I'll post here if I find some success.

@alrzmshy

I am working on this issue as well. There are two problems:

  1. There is a bug in the code: the interaction between self.export and self.training does not work as it should. Setting self.export = True does not set self.training to False, so you only get the bounding boxes, i.e. three outputs of size 1x3x80x80x9, 1x3x40x40x9 and 1x3x20x20x9. I have checked, and they match (to a good approximation) the outputs of the PyTorch model.

  2. If you resolve the self.export/self.training issue, you can convert the model successfully with opset=11, but the ONNX conversion fails with opset=10.
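The flag coupling in point 1 can be sketched with a hypothetical toy head (this is not the actual yolov5 Detect module, just an illustration of the reported behaviour): flipping export alone does not leave training mode, so only the raw per-scale maps come out, and you also have to clear training (the model.eval() analogue) to reach the decoded-output path.

```python
class ToyDetectHead:
    """Toy stand-in for a detection head whose output depends on two flags."""

    def __init__(self):
        self.training = True   # PyTorch-style mode flag
        self.export = False    # export-mode flag

    def forward(self, raw_maps):
        if self.training or self.export:
            # raw per-scale maps only: what the reporter observed after
            # setting export=True without also leaving training mode
            return raw_maps
        # inference path: decoded detections plus raw maps
        return ("decoded", raw_maps)


head = ToyDetectHead()
head.export = True                               # flipping export alone...
assert head.forward([1, 2, 3]) == [1, 2, 3]      # ...still yields raw maps only

head.export = False
head.training = False                            # model.eval() analogue
assert head.forward([1, 2, 3])[0] == "decoded"   # now the decoded path runs
```

The sketch suggests why the ONNX graph only contained the three raw tensors: the export flag never switched the head onto the decoded branch.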

@usamahjundia

@hghari hi, which model did you use to convert to ONNX and eventually to the OpenVINO IR? I'm using OpenVINO 2020.1 and PyTorch 1.5, and I seem to be stuck converting the ONNX model of yolov5s (I edited the export script to use opset 10) to OpenVINO.


hghari commented Aug 15, 2020

I used the model provided on GitHub.


yurikleb commented Sep 8, 2020

The same struggle here, please post any progress you might have!

@usamahjundia

> The same struggle here, please post any progress you might have!

Using the latest OpenVINO, I managed to convert to IR, although with the weird behavior mentioned in this response:

> I am working on this issue as well. There are two problems:
>
>   1. there's a bug in the code, the part on self.export and self.training somehow don't work as they should. When you put self.export = True it does not set the self.training value as False. Therefore you only get the bounding boxes, i.e. three outputs of size 1x3x80x80x9, 1x3x40x40x9 and 1x3x20x20x9 and I have checked and they match (with a good approximation) with the outputs of the PyTorch model.
>   2. If you resolve the part about self.export and self.training then you can convert the model successfully with opset =11 but the ONNX conversion fails with opset = 10.

I decided not to use YOLOv5 and went for v4 instead, but I think you will have to play with the export script to make it functional.


github-actions bot commented Oct 9, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the Stale label Oct 9, 2020
@yurikleb

Didn't try it, but this article seems to deal with the same problem:
https://medium.com/analytics-vidhya/the-battle-to-run-my-custom-network-on-a-movidius-myriad-compute-stick-c7c01fb64126

@violet17

@hghari Hi, how did you convert YOLOv5 to OpenVINO? Could you share the method? Thanks.

@Sanoronas

I may be late for the party, but I managed to run a yolov5 network on the NCS2.
These were the steps that worked for me:

  • export model to ONNX with export script (arguments: --img-size 640 --batch-size 1)
  • convert to openvino IR with mo.py --input_model my_model.onnx -s 255 --data_type FP16 --output_dir ir_dir

The generated IR should run on the NCS2 and return the same output as CPU inference.

@Rainbowman0

> I may be late for the party, but I managed to run a yolov5 network on the NCS2.
> These were the steps that worked for me:
>
>   • export model to ONNX with export script (arguments: --img-size 640 --batch-size 1)
>   • convert to openvino IR with mo.py --input_model my_model.onnx -s 255 --data_type FP16 --output_dir ir_dir
>
> The generated IR should run on the NCS2 and return the same output as CPU inference

Brother, I can't get the correct result using the method you described. When using mo.py to convert to an IR model, if I don't add "-s 255", the resulting model detects correctly on the CPU but not on the NCS2. After adding "-s 255", I can't get correct results on either the CPU or the NCS2.

@Sanoronas

> I may be late for the party, but I managed to run a yolov5 network on the NCS2.
> These were the steps that worked for me:
>
>   • export model to ONNX with export script (arguments: --img-size 640 --batch-size 1)
>   • convert to openvino IR with mo.py --input_model my_model.onnx -s 255 --data_type FP16 --output_dir ir_dir
>
> The generated IR should run on the NCS2 and return the same output as CPU inference
>
> Brother, I can't get the correct result using the method you said. When using mo.py to convert to an IR model, if you don't add "-s 255", the model I get can detect the correct result on the CPU, but the correct result cannot appear on the NCS2. But after adding "-s 255", I can't detect the correct result on the CPU and NCS2.

The flag -s 255 sets the expected scale of the input image. I guess you perform a normalization of the image to the range 0-1 before inference (something like img /= 255). Make sure your input is in the range 0-255 by skipping this normalization when using a model converted with -s 255; without -s 255, use the 0-1 range instead.
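As a minimal sketch of the point above (plain Python, no OpenVINO involved): -s 255 bakes the division into the IR itself, so also normalizing in your own preprocessing divides twice and squashes every pixel, which would explain the broken detections.

```python
raw = [0, 64, 128, 255]                    # 8-bit pixel values straight from the image

# Model converted WITHOUT -s 255: normalize to 0-1 in your own code.
normalized = [v / 255.0 for v in raw]

# Model converted WITH -s 255: feed raw 0-255 values; the IR divides internally.
# Simulate what the embedded scale does to the raw input:
ir_scaled = [v / 255.0 for v in raw]
assert normalized == ir_scaled             # both pipelines see the same numbers

# The failure mode: normalizing in Python AND using -s 255 divides twice.
double_scaled = [v / 255.0 for v in normalized]
assert max(double_scaled) == 1 / 255       # every pixel squashed to at most 1/255
```

Pick one place to do the scaling, either in the converted graph or in the Python preprocessing, never both.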


violet17 commented Oct 15, 2021

@hghari
The conversion of YOLOv5 to OpenVINO is described in yolov5_demo.
The results on the NCS may differ from the CPU because of the different optimization methods.
If you want to get correct results on the NCS, please contact me.

@glenn-jocher (Member)

@violet17 @Jacobsolawetz @yurikleb @hghari @usamahjundia good news 😃! Your original issue may now be fixed ✅ in PR #6057. This PR adds native YOLOv5 OpenVINO export:

python export.py --weights yolov5s.pt --include openvino  # export to OpenVINO


To receive this update:

  • Git: git pull from within your yolov5/ directory or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub: force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks: view the updated notebooks (Colab, Kaggle)
  • Docker: sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher glenn-jocher linked a pull request Dec 23, 2021 that will close this issue

ca-schue commented Jan 9, 2022

I am trying to get my own Yolov5L model running on the Raspberry Pi (B, v1.2) with the NCS2. But I get extremely bad results. Is it normal that the NCS2 performs so much worse than a CPU?

Compared to the output from the CPU inference, the decimal places from the NCS2 results are very inaccurate. Does this have something to do with the FP16 conversion?

Can anyone give me tips for the yolov5 inference workflow in Python on the NCS2? I already exported the model as FP16 and followed the structure of detect.py. But the results are so bad...
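On the FP16 question above: half precision keeps only about 3 significant decimal digits, so some drift in the decimal places versus FP32 CPU inference is expected, though that alone should not make detections grossly wrong. A quick stdlib sketch of the rounding (the variable name score is just an illustrative stand-in):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision ('e' format)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

score = 0.123456789                 # e.g. some raw model output value
print(to_fp16(score))               # ~0.1235: only 3-4 significant digits survive
print(abs(to_fp16(score) - score))  # rounding error on the order of 1e-5
```

If the NCS2 outputs differ from the CPU by much more than this order of magnitude, the cause is likely preprocessing or conversion, not FP16 itself.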

@Averen19

@ca-schue Hi, I'm trying to run a YOLO model on a Raspberry Pi with the NCS2 as well. Did you manage to do it? If so, would you mind sharing your code for inference?

@Rainbowman0

Yes, I have done it. This blog records the details and code. But if conditions permit, I strongly recommend against the Raspberry Pi plus NCS2 solution. The speed is really slow even with the NCS2 (about 2 fps). The Jetson Nano may be a better option (about 15 fps without any acceleration).

@Averen19

@Rainbowman0, hi, thanks for the reply. Do you have an English version of this document? I can't fully access this website and I don't speak Chinese. If possible, can you share your email or contact details, as I have a few questions I'd like to ask?

@Averen19

@Rainbowman0 when I try to convert from ONNX to IR I get the following error, do you know how to solve it?

```
C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer>python mo.py --input_model=yolov5s.onnx --model_name yolov5OV --scale=255 --data_type=FP16
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\yolov5s.onnx
	- Path for generated IR: 	C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\.
	- IR output name: 	yolov5OV
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	255.0
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	None
	- Reverse input channels: 	False
ONNX specific parameters:
	- Inference Engine found in: 	C:\Program Files (x86)\Intel\openvino_2021.4.582\python\python3.9\openvino
Inference Engine version: 	2021.4.0-3839-cd81789d294-releases/2021/4
Model Optimizer version: 	2021.4.0-3839-cd81789d294-releases/2021/4
[ WARNING ]  Const node 'Resize_118/Add_input_port_1/value301313666' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ]  Const node 'Resize_140/Add_input_port_1/value304713669' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ]  Changing Const node 'Resize_118/Add_input_port_1/value301313972' data type from float16 to <class 'numpy.float32'> for Elementwise operation
[ WARNING ]  Changing Const node 'Resize_140/Add_input_port_1/value304714149' data type from float16 to <class 'numpy.float32'> for Elementwise operation
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  [Errno 13] Permission denied: 'C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\.\yolov5OV_tmp.bin'
[ ERROR ]  Traceback (most recent call last):
  File "C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\mo\main.py", line 394, in main
    ret_code = driver(argv)
  File "C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\mo\main.py", line 356, in driver
    ret_res = emit_ir(prepare_ir(argv), argv)
  File "C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\mo\main.py", line 268, in emit_ir
    prepare_emit_ir(graph=graph,
  File "C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\mo\pipeline\common.py", line 213, in prepare_emit_ir
    serialize_constants(graph, bin_file)
  File "C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\mo\back\ie_ir_ver_2\emitter.py", line 38, in serialize_constants
    with open(bin_file_name, 'wb') as bin_file:
PermissionError: [Errno 13] Permission denied: 'C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer\.\yolov5OV_tmp.bin'

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
```


Humni commented Mar 29, 2022

Can you just tell us how to fix it?

> @hghari The conversion of yolov5 to OenVINO can be refered to yolov5_demo. The results of NCS may be different from CPU, because the different optimization method. If you want to get correct results on NCS, please contact me.


ca-schue commented Apr 1, 2022

> @Rainbowman0 when I try to convert from ONNX to IR I get the following error, do you know how to solve it?
>
> C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer>python mo.py --input_model=yolov5s.onnx --model_name yolov5OV --scale=255 --data_type=FP16

The command is correct. The reason for my poor results was that I forgot the -s 255 parameter to normalize the color space. As for the PermissionError, it is important that Python (or Anaconda) is run as administrator/root.

> Can you just tell us how to fix it?
>
> @hghari The conversion of yolov5 to OenVINO can be refered to yolov5_demo. The results of NCS may be different from CPU, because the different optimization method. If you want to get correct results on NCS, please contact me.

I think @violet17 is talking about non-maximum suppression (NMS). The NMS code from yolov5/general.py should work. Strangely, inference behaves differently for images over about 1000 px. For example, if inference is run five times in a row on P6 models like yolov5s6 at 1280 px with the same image, only every second inference result is correct. I think there is an overflow or memory leak somewhere in OpenVINO.
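For reference, greedy IoU-based NMS is small enough to sketch standalone. This is a plain-Python illustration, not the yolov5/general.py implementation (which additionally handles confidence thresholds, class offsets, and batched tensors):

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS; returns indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)            # highest remaining score survives
        keep.append(i)
        # drop every remaining box that overlaps it too much
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thres]
    return keep

# Two heavily overlapping boxes plus one far away: the duplicate is suppressed.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

Running the raw NCS2 outputs through a step like this (after decoding and confidence filtering) is what turns the per-anchor predictions into a final detection list.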

@glennford49

I have successfully converted the ONNX model to an OpenVINO model, and running detect.py with the yolov5_openvino_model weights works great. However, when I use the .xml/.bin in the OpenVINO environment with the object-detection code provided by Intel, it just returns a black screen in cv2.imshow(). Any idea on this?

@glenn-jocher (Member)

@glennford49 if you have problems running Intel code, you should probably raise that with Intel.

@Averen19

@glennford49 which OpenVINO environment are you using? The current export.py --include openvino converts to the OpenVINO 2022 format. Are you running OpenVINO on Windows or another system? If you're using OpenVINO 2021 or an earlier version, you need to convert to ONNX and use the Model Optimizer from OpenVINO to convert to the IR format. You may refer to this thread if you're interested in how I managed to solve my problem: https://github.com/openvinotoolkit/openvino/issues/11458

@barney2074

Hi @glennford49 @Averen19 @Humni @ca-schue @Sanoronas @violet17 @hghari

Sorry to resurrect an old thread.
Did anyone ever get inference running with an NCS2?

I've got an NCS2- but the documentation from Intel is absolutely dreadful (in my opinion)
I've never been able to put it to use & I've spent a fair amount of time trying

Maybe I should just give up & use my DepthAI/Luxonis device or Jetson ??

Andrew

@bt5-coder

I had the same problem, and it troubled me for two days. I was using yolov5 tag v4.
Finally, this thread helped a lot:
openvinotoolkit/openvino#11458
Update your yolov5 to tag v6.1 and follow the commands below.

```shell
wget https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt
git clone https://github.com/ultralytics/yolov5
pip install -r yolov5\requirements.txt
pip install onnx
python yolov5\export.py --weights yolov5s.pt --include onnx
python "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\mo.py" --input_model yolov5s.onnx --scale 255 --reverse_input_channels --output Conv_198,Conv_217,Conv_236 --data_type FP16
```

@glenn-jocher (Member)

@bt5-coder OpenVINO models should work with NCS2 by setting L370 here to MYRIAD:

yolov5/models/common.py

Lines 361 to 374 in 27d831b

```python
elif xml:  # OpenVINO
    LOGGER.info(f'Loading {w} for OpenVINO inference...')
    check_requirements(('openvino',))  # requires openvino-dev: https://pypi.org/project/openvino-dev/
    from openvino.runtime import Core
    ie = Core()
    if not Path(w).is_file():  # if not *.xml
        w = next(Path(w).glob('*.xml'))  # get *.xml file from *_openvino_model dir
    network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin'))
    batch_size = network.batch_size
    executable_network = ie.compile_model(network, device_name="CPU")  # device_name="MYRIAD" for Intel NCS2
    output_layer = next(iter(executable_network.outputs))
    meta = Path(w).with_suffix('.yaml')
    if meta.exists():
        stride, names = self._load_metadata(meta)  # load metadata
```
