rknn_toolkit_lite2 1.5.0: errors when running yolov5s inference #168
Comments
Exporting the ONNX model works fine, and inference under 1.4.0 also works fine.
I'm also seeing this warning after moving from version 1.4 to 1.5. It's possible that it's breaking because I'm using a model exported from the old version. I'll try to export a model with the new version and see what happens. In the meantime, any support from Rockchip on this issue would be appreciated.
@tylertroy did you manage to solve it? I am experiencing the same issue using rknn-toolkit2 v1.5 with yolov5s (even with their example), as well as with other models like YOLO-NAS.
I ran into the same problem. Using an rknn model exported with the latest rknn-toolkit2 1.5.0, inference in the PC simulator works fine, but the same warnings appear on the 3588 board.
@Caesar-github what could be the cause?
I also ran into this problem: rolling back to 1.4.0 works fine, but 1.5.0 has the issue.
I ran into this problem too.
When I installed it I also wanted to use 1.4.0, but I couldn't find it, so I used 1.5.0...
Same problem here; I can't solve it, and I can't find the 1.4.0 release.
So how do I suppress these messages in the terminal?
Same here. Device: firefly ROC-RK3588S
W RKNN: [14:12:18.886] Output(boxes): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.886] Output(confs): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.928] Output(boxes): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.928] Output(confs): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.970] Output(boxes): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [14:12:18.970] Output(confs): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
Yes, I guess we have to wait for a miracle to happen. Bug reports are not a priority for the Rockchip guys; they don't even really use GitHub, it's just here so those of us outside of China can grab the code.
@laohu-one @tongxiaohua, you can download the 1.4.0 wheel from their v1.4.0 branch here: https://github.com/rockchip-linux/rknn-toolkit2/tree/v1.4.0 For some reason they have Python 3.6 and 3.8 wheels in the root "packages" directory, but 3.7 and 3.9 in the rknn_toolkit_lite2 directory. Unfortunately, this didn't resolve the issue for me: same error on both versions, but it could be my model that's the issue.
Hi, the issue itself is not in the Python wheel package but in the rknpu2 runtime library (.so).
Hi, rknn-toolkit2 1.5.2 is on Baidu, but I can't create an account and can't download it.
I would love that too; you can't create an account if you are not in China.
looks like rknn-toolkit2(lite)-1.5.2 was uploaded here one hour ago. |
I think this solves it. The warning still appears but my specific problem is solved. |
I installed this yesterday, and RKNPU2 the day before. I have the same issue (terminal polluted with warnings, but otherwise working) with RKNPU2 1.5.2 and toolkit lite 1.5.2, but note that when I init the runtime it tells me:
Looks like my driver is out of date. If others are seeing this work now, can you confirm which driver you have? Does anyone know how to update it? Looks like we have to provide our own support, as Rockchip are ghosting us.
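For checking the driver version without re-running the runtime init: on Rockchip kernels the NPU driver version is typically exposed through debugfs. The exact node used below is an assumption (it can differ between kernels and usually requires root to read), so treat this helper as a sketch:

```python
from pathlib import Path
from typing import Optional

# Assumed debugfs node for the RKNPU kernel driver; adjust for your kernel.
RKNPU_VERSION_PATH = Path("/sys/kernel/debug/rknpu/version")

def rknpu_driver_version(path: Path = RKNPU_VERSION_PATH) -> Optional[str]:
    """Return the driver version string, or None if the node is unreadable."""
    try:
        return path.read_text().strip()
    except OSError:
        return None
```

Returning None instead of raising keeps it safe to call on boards (or kernels) where debugfs isn't mounted.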
In the interim, if someone knows of a method to suppress this warning, that would be appreciated. I've tried redirect_stdout and redirect_stderr, but they didn't work for me.
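A likely reason redirect_stdout/redirect_stderr fail here: they only swap Python's sys.stdout/sys.stderr objects, while the RKNN runtime is a C library that writes straight to file descriptors 1 and 2, bypassing the Python-level redirect. A descriptor-level context manager along these lines should catch that output (a sketch; suppress_fd_output is my own name, not a toolkit API, and I haven't verified it against the runtime itself):

```python
import os
from contextlib import contextmanager

@contextmanager
def suppress_fd_output():
    """Temporarily point file descriptors 1 and 2 at /dev/null.

    Unlike contextlib.redirect_stdout, this silences writes made directly
    to the underlying descriptors by C libraries, not just Python prints.
    """
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved_out, saved_err = os.dup(1), os.dup(2)
    try:
        os.dup2(devnull, 1)
        os.dup2(devnull, 2)
        yield
    finally:
        # Restore the original descriptors and release the duplicates.
        os.dup2(saved_out, 1)
        os.dup2(saved_err, 2)
        for fd in (devnull, saved_out, saved_err):
            os.close(fd)
```

Usage would then be `with suppress_fd_output(): outputs = rknn_lite.inference(inputs=[image])`. The obvious downside is that genuine errors printed during that call are swallowed too.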
If anyone is interested, I was able to suppress the output by running the inference (and a gstreamer appsink to supply images) in a worker process:

```python
import multiprocessing
import os

from npu_worker import worker

def suppressed_worker(pipe):
    # Redirect the worker's stdout/stderr to /dev/null before any inference runs
    devnull = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull, 1)
    os.dup2(devnull, 2)
    os.close(devnull)
    worker(pipe)

if __name__ == "__main__":
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=suppressed_worker, args=(child_conn,))
    p.daemon = True
    p.start()
```

At least this way I'm able to have some meaningful output on the terminal. Not a perfect solution, since if there are any errors in the worker process you have to turn off the suppression to see what they are, but for me at least this is better than the continuous stream of output from rknn_lite.inference at nearly 30 fps. Would be great if Rockchip could make this more user friendly, or if others know better methods.
I have found another way to hide warnings, using a package made specifically for this:

```python
from hide_warnings import hide_warnings

@hide_warnings
def inference(image):
    return rknn_interface.inference(inputs=[image])

result = inference(image)
```
Awesome! Have you tried the older version, 1.4? I am using it as it has no warnings and performance is equal to 1.5.
Yes, but with that version I have problems when trying to export YOLOv8-pose. With 1.5.0 I also faced problems, but those were solved with 1.5.2; I still have to test the model further.
Thanks @memo26167, hide_warnings works a treat! I'd tried warnings and contextlib but couldn't get them working.
@hlacikd, I successfully converted and ran this model on the rk3588 using the latest version (1.5.2+b642f30c) of the SDK, as reported by others. Be sure to use the same Python and SDK versions for both the export and inference stages. Let me know how you go.
Looking at the descriptions of the functions in the rknn_toolkit_lite2 package, I suddenly noticed that it only supports inputs in NHWC format.
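That NHWC requirement matters if your preprocessing produces NCHW tensors, as torch/ONNX pipelines usually do: the channel axis has to be moved last before calling inference. A minimal helper (to_nhwc is a hypothetical name for illustration, not part of the toolkit):

```python
import numpy as np

def to_nhwc(tensor_nchw: np.ndarray) -> np.ndarray:
    """Convert an NCHW batch to the NHWC layout rknn_toolkit_lite2 expects.

    ascontiguousarray forces a real copy, since transpose only returns a
    strided view and downstream C code may require contiguous memory.
    """
    return np.ascontiguousarray(tensor_nchw.transpose(0, 2, 3, 1))
```

Note that images loaded with cv2.imread are already HWC, so they only need a batch dimension, not a transpose.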
You can email me at 273082449@qq.com
I also see this problem. You can try using librknnrt.so 1.6.0 from their new repository.
@hlacikd To export the model properly you need to remove the tail and replace the final DFL stage with CPU torch code. |
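For reference, the DFL stage being moved off the NPU is just a softmax over the reg_max bins followed by an expectation over the bin indices, giving one distance per box edge. A minimal numpy sketch of that decode (the real export would use torch ops; REG_MAX=16 matches standard YOLOv8 heads, and dfl_decode is an illustrative name, not toolkit API):

```python
import numpy as np

REG_MAX = 16  # number of DFL bins in standard YOLOv8 heads

def dfl_decode(logits: np.ndarray) -> np.ndarray:
    """Collapse DFL bin logits into box-edge distances on the CPU.

    logits: (..., 4, REG_MAX) raw scores for the 4 box edges.
    Returns (..., 4): softmax over the bins, then the probability-weighted
    bin index (the expected distance in grid-cell units).
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return (p * np.arange(REG_MAX)).sum(axis=-1)
```

Cutting the graph before this stage means the NPU only has to emit the raw bin logits, which sidesteps the post-processing ops that the converter handles poorly.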
I RKNN: [17:49:33.659] RKNN Runtime Information: librknnrt version: 1.5.0 (e6fe0c678@2023-05-25T08:09:20)
I RKNN: [17:49:33.659] RKNN Driver Information: version: 0.8.2
I RKNN: [17:49:34.641] RKNN Model Information: version: 4, toolkit version: 1.5.0+1fa95b5c(compiler version: 1.5.0 (e6fe0c678@2023-05-25T08:11:09)), target: RKNPU lite, target platform: rk3566, framework name: ONNX, framework layout: NCHW, model inference type: static_shape
W RKNN: [17:49:38.083] Output(output): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [17:49:38.084] Output(949): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.
W RKNN: [17:49:38.084] Output(950): size_with_stride larger than model origin size, if need run OutputOperator in NPU, please call rknn_create_memory using size_with_stride.