rknn_toolkit_lite2 1.5.0: error when running YOLOv5s inference #168

Open
momohuangsha opened this issue Jun 1, 2023 · 30 comments

Comments

@momohuangsha

momohuangsha commented Jun 1, 2023

I RKNN: [17:49:33.659] RKNN Runtime Information: librknnrt version: 1.5.0 (e6fe0c678@2023-05-25T08:09:20)
I RKNN: [17:49:33.659] RKNN Driver Information: version: 0.8.2
I RKNN: [17:49:34.641] RKNN Model Information: version: 4, toolkit version: 1.5.0+1fa95b5c(compiler version: 1.5.0 (e6fe0c678@2023-05-25T08:11:09)), target: RKNPU lite, target platform: rk3566, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
W RKNN: [17:49:38.083] Output(output): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [17:49:38.084] Output(949): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [17:49:38.084] Output(950): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.

@momohuangsha
Author

Exporting to ONNX works fine, and inference under 1.4.0 is also normal.

@tylertroy

I'm also seeing this warning after moving from version 1.4 to 1.5. It's possible that it's breaking because I'm using a model exported from the old version. I'll try to export a model with the new version and see what happens. In the meantime, any support from Rockchip on this issue would be appreciated.

@hlacikd

hlacikd commented Jun 13, 2023

@tylertroy did you manage to solve it? I am experiencing the same issue using rknn-toolkit2 v1.5 with yolov5s (even with their example), as well as with other models like YOLO-NAS.

@1194949000

I ran into the same problem. The RKNN model was exported with the latest rknn-toolkit2 1.5.0; it runs fine in the simulator on a PC, but the same warnings appear on the RK3588 board.

@momohuangsha
Author

@Caesar-github May I ask what the cause is?

@knight-L

knight-L commented Jul 4, 2023

I ran into this issue as well. Rolling back to 1.4.0 works fine; 1.5.0 has the problem.

@wycrystal

I ran into this problem too.

@laohu-one

When I installed it I also wanted to use 1.4.0, but I couldn't find it, so I went with 1.5.0...

@tongxiaohua

Same problem here. I can't solve it and can't find version 1.4.0.

@yueyueshine

Then how can these outputs be suppressed in the terminal?

@lucaske21

Same here,

Device: firefly ROC-RK3588S
OS: ubuntu 20.04
Python: 3.8
rknn_toolkit_lite_2: rknn_toolkit_lite2-1.5.0-cp38-cp38-linux_aarch64.whl

W RKNN: [14:12:18.886] Output(boxes): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.886] Output(confs): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.928] Output(boxes): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.928] Output(confs): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.970] Output(boxes): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.970] Output(confs): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.

@hlacikd

hlacikd commented Aug 24, 2023

Yes, I guess we have to wait for a miracle to happen. Bug reporting is not a priority for the folks at Rockchip; they don't even use GitHub, it's just here so those of us outside China can grab the code.

@kcembrey

@laohu-one @tongxiaohua, you can download the wheel for 1.4.0 in their v1.4.0 branch here: https://github.com/rockchip-linux/rknn-toolkit2/tree/v1.4.0

For some reason they have Python 3.6 and 3.8 in the root "packages" directory, but 3.7 and 3.9 in the rknn_toolkit_lite2 directory.

Unfortunately, this didn't resolve the issue for me. Same error on both versions, but it could be my model that's the issue.

@hlacikd

hlacikd commented Aug 24, 2023

> Unfortunately, this didn't resolve the issue for me. Same error on both versions, but it could be my model that's the issue.

Hi, the issue itself is not in the Python wheel package but in the rknpu2 .so library.
You can actually use rknn_toolkit_lite2 v1.5 (which supports Python 3.8 and 3.10), but you have to use rknn-toolkit2 1.4 as well as the rknpu2 .so version 1.4. This is how I do it now, until someone fixes it.

@memo26167

Hi, rknn-toolkit2-1.5.2 is on Baidu, but I can't make an account and can't download it.
Will this version solve this problem? Could someone share a download link with me? Thanks in advance.

@hlacikd

hlacikd commented Aug 26, 2023

> Hi, rknn-toolkit2-1.5.2 is on Baidu, but I can't make an account and can't download it. Will this version solve this problem? Could someone share a download link with me? Thanks in advance.

I would love that too. You are unable to create an account if you are not in China.

@dlavrantonis

Looks like rknn-toolkit2(lite)-1.5.2 was uploaded here one hour ago.

@memo26167

I think this solves it. The warning still appears but my specific problem is solved.

@Simzie

Simzie commented Aug 29, 2023

I installed this yesterday, and RKNPU2 the day before. I have this same issue (terminal polluted with warnings, but otherwise working) with RKNPU2 1.5.2 and toolkit lite 1.5.2, but note that when I init the runtime it tells me:

I RKNN: [21:53:07.875] RKNN Runtime Information: librknnrt version: 1.5.2 (c6b7b351a@2023-08-23T15:28:22)
I RKNN: [21:53:07.875] RKNN Driver Information: version: 0.8.2
**W RKNN: [21:53:07.875] Current driver version: 0.8.2, recommend to upgrade the driver to the new version: >= 0.8.8**
I RKNN: [21:53:07.875] RKNN Model Information: version: 6, toolkit version: 1.5.2-source_code(compiler version: 1.5.2 (71720f3fc@2023-08-21T01:31:57)), target: RKNPU v2, target platform: rk3588, framework name: ONNX, framework layout: NCHW, model inference type: static_shape

Looks like my driver is out of date. If others are seeing this work now, can you confirm what driver version you have? Does anyone know how to update it? Looks like we have to provide our own support, as Rockchip are ghosting us.

@Simzie

Simzie commented Aug 30, 2023

In the interim, if someone knows of a method to suppress this warning, that would be appreciated. I've tried with redirect_stdout and redirect_stderr but they failed for me.

@Simzie

Simzie commented Sep 1, 2023

If anyone is interested I was able to suppress the output by running the inference (and a gstreamer appsink to supply images) as a worker thread, using:

```python
import os
import multiprocessing

from npu_worker import worker  # user module whose worker() runs rknn_lite.inference in a loop

def suppressed_worker(pipe):
    # Redirect stdout/stderr at the file-descriptor level so warnings printed
    # by the native RKNN runtime are silenced as well.
    devnull = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull, 1)
    os.dup2(devnull, 2)
    os.close(devnull)
    worker(pipe)

if __name__ == "__main__":
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=suppressed_worker, args=(child_conn,))
    p.daemon = True
    p.start()  # launch the worker process
```

At least this way I'm able to have some meaningful output on the terminal. It's not a perfect solution, as if there are any errors in the worker process you have to turn off the suppression to see what they are, but for me at least this is better than the continuous stream of output from rknn_lite.inference at nearly 30 fps. Would be great if Rockchip could make this more user friendly, or if others know of better methods.

@memo26167

memo26167 commented Sep 1, 2023

> If anyone is interested I was able to suppress the output by running the inference (and a gstreamer appsink to supply images) as a worker thread, using the snippet above.

I have found another way to hide warnings, using a package specifically for this:

```python
from hide_warnings import hide_warnings

@hide_warnings
def inference(image):
    return rknn_interface.inference(inputs=[image])

result = inference(image)
```


@hlacikd

hlacikd commented Sep 1, 2023 via email

@memo26167

memo26167 commented Sep 1, 2023 via email

@Simzie

Simzie commented Sep 4, 2023

Thanks @memo26167, hide_warnings works a treat! I'd tried warnings and contextlib but couldn't get them working.

@tylertroy

tylertroy commented Sep 14, 2023

> @tylertroy did you manage to solve it? I am experiencing the same issue using rknn-toolkit2 v1.5 with yolov5s (even with their example), as well as with other models like YOLO-NAS.

@hlacikd, I successfully converted and ran this model on the RK3588 using the latest version (1.5.2+b642f30c) of the SDK, as reported by others. Be sure to use the same Python and SDK versions for both the export and inference stages. Let me know how you go.

@Pol22

Pol22 commented Nov 1, 2023

Looking at the function descriptions in the rknn_toolkit_lite2 package, I suddenly noticed that rknn_lite.inference(...) only supports inputs in NHWC format.
It looks strange, since the original ONNX model was in NCHW format and nothing seems to have changed during the conversion to the RKNN model, but I tried running it with the input transposed to NHWC and it works.
The warnings are still present, but the outputs seem correct.
Check it and let me know!
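
A minimal sketch of the transposition described above, assuming the standard rknn_toolkit_lite2 RKNNLite API; the model path and input shape are placeholders:

```python
import numpy as np
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite()
rknn_lite.load_rknn('yolov5s.rknn')   # placeholder path to the converted model
rknn_lite.init_runtime()

# The ONNX model was exported with NCHW inputs, but rknn_lite.inference expects NHWC.
img_nchw = np.random.randint(0, 255, (1, 3, 640, 640), dtype=np.uint8)
img_nhwc = np.transpose(img_nchw, (0, 2, 3, 1))   # (1, 3, 640, 640) -> (1, 640, 640, 3)

outputs = rknn_lite.inference(inputs=[img_nhwc])
```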

@phker

phker commented Jan 3, 2024

> Hi, rknn-toolkit2-1.5.2 is on Baidu, but I can't make an account and can't download it. Will this version solve this problem? Could someone share a download link with me? Thanks in advance.

You can email me at 273082449@qq.com.

@Kusunoki0130

I also ran into this problem. You can try using librknnrt.so 1.6.0 from their new repository.
Just replace the old version of librknnrt.so in /usr/lib.

@tylertroy

@hlacikd To export the model properly you need to remove the tail and replace the final DFL stage with CPU torch code.
I have summarized the process in this comment. For use with Python you should do the first two steps in that comment. Namely, follow the .pt to .onnx export method, then use the code found here to convert .onnx to .rknn using convert.py, and the code in yolov8.py for running and postprocessing on device. Using this method you will get results much closer to the model's output when run with torch.
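
For reference, a generic sketch of the .onnx to .rknn conversion step with the rknn-toolkit2 Python API (this is not the referenced convert.py); the model paths, mean/std values, and calibration dataset below are placeholders:

```python
from rknn.api import RKNN

rknn = RKNN()
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3588')
rknn.load_onnx(model='yolov5s.onnx')                        # placeholder ONNX path
rknn.build(do_quantization=True, dataset='./dataset.txt')   # list of calibration images
rknn.export_rknn('yolov5s.rknn')
rknn.release()
```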
