
Internal compiler error. Aborting! with edgetpu_compiler #419

Closed
akanksharma opened this issue Jul 14, 2021 · 9 comments

@akanksharma

We are getting the below-mentioned error with one of our quantized models, created with TF 2.4.

# edgetpu_compiler ufs_128_quant.tflite
Edge TPU Compiler version 14.1.317412892

Internal compiler error. Aborting! 

We have tried reducing the model size to around 5 MB, but the issue still exists.
We are, however, able to compile all of our other models; only this one fails.
Unfortunately, there are no logs available.

Any help in solving this issue is appreciated.

There is one more similar issue about this (#189), but we are not sure whether the actual cause is the same.

Thanks in advance.

@hjonnala (Contributor)

Please try using the new compiler version. If that doesn't work, please share the ufs_128_quant.tflite model.

hjonnala self-assigned this Jul 14, 2021

@akanksharma (Author) commented Jul 14, 2021

I tried upgrading, but I am still getting the same error:

ufs_128_quant.tflite.zip

# edgetpu_compiler ufs_128_quant.tflite
Edge TPU Compiler version 15.0.340273435

Internal compiler error. Aborting!

I have attached the quantized model here.

@hjonnala (Contributor)

The compiler is failing because the model contains too many transpose ops, which are not supported.

You can use https://netron.app/ to visualize the models.

Please check all the supported operations here: https://coral.ai/docs/edgetpu/models-intro/#supported-operations
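
For example, here is a quick way to list the ops a .tflite file contains (a minimal sketch, assuming TF 2.7+ where tf.lite.experimental.Analyzer is available; on older releases, Netron works just as well):

```python
import tensorflow as tf

# Prints every operator in the graph, so unsupported ops such as
# TRANSPOSE are easy to spot before running edgetpu_compiler.
tf.lite.experimental.Analyzer.analyze(model_path="ufs_128_quant.tflite")
```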

@letdivedeep commented Jul 16, 2021

Hi @hjonnala,

I am trying to convert a PyTorch model to TFLite using this pipeline:
PyTorch -> ONNX -> saved_model -> TFLite

As we know, PyTorch's layout is NCHW while the TensorFlow PB layout is NHWC, so when we convert with onnx2tf it adds lots of transposes to handle this (see the sketch below).

Do we have an alternative way to avoid them and make the model run on the Edge TPU?
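
A minimal sketch of the first two steps of the pipeline (the torchvision model, the input shape, and the onnx-tensorflow backend are placeholders for illustration, not our actual setup):

```python
import onnx
import torch
import torchvision
from onnx_tf.backend import prepare

model = torchvision.models.mobilenet_v2(pretrained=True).eval()
dummy = torch.randn(1, 3, 128, 128)  # NCHW, the layout PyTorch uses

# PyTorch -> ONNX
torch.onnx.export(model, dummy, "model.onnx", opset_version=11)

# ONNX -> SavedModel. This is the step that inserts the NCHW->NHWC
# TRANSPOSE ops that the Edge TPU compiler later rejects.
prepare(onnx.load("model.onnx")).export_graph("saved_model")
```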

@hjonnala (Contributor)

@letdivedeep Switching the model architecture might resolve the issue.
The Transpose operation could be supported in future releases.
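
For instance, a minimal sketch of a channels-last alternative: define the network natively in Keras (NHWC) and run post-training full-integer quantization, so no layout transposes appear at all (the layers and the random calibration data below are illustrative assumptions, not your model):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(128, 128, 3)),  # NHWC input
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    # Calibration samples for quantization; replace with real inputs.
    for _ in range(100):
        yield [np.random.rand(1, 128, 128, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting model_quant.tflite contains only int8 builtins, which is what edgetpu_compiler expects.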

@hjonnala (Contributor)

@akanksharma Are you able to modify the model so that it can be compiled with the edgetpu_compiler?

@letdivedeep commented Jul 30, 2021

@hjonnala

We were able to resolve the issue with a workaround based on openvino2tensorflow:

PyTorch -> ONNX -> OpenVINO -> edge_tpu

https://github.com/PINTO0309/openvino2tensorflow

blog: https://qiita.com/PINTO/items/ed06e03eb5c007c2e102
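
Roughly, the steps look like this (a sketch only; the mo.py and openvino2tensorflow flag names follow their READMEs and may differ between versions, and the output filename is an assumption):

```python
import subprocess

# ONNX -> OpenVINO IR (mo.py ships with the OpenVINO toolkit).
subprocess.run(["python3", "mo.py", "--input_model", "model.onnx",
                "--output_dir", "openvino_ir"], check=True)

# OpenVINO IR -> NHWC SavedModel plus a full-integer-quantized TFLite.
# openvino2tensorflow rebuilds the graph channels-last, which is what
# makes the extra TRANSPOSE ops disappear.
subprocess.run(["openvino2tensorflow",
                "--model_path", "openvino_ir/model.xml",
                "--output_saved_model",
                "--output_full_integer_quant_tflite"], check=True)

# Compile the quantized model for the Edge TPU (check the saved_model/
# directory for the actual output name).
subprocess.run(["edgetpu_compiler",
                "saved_model/model_full_integer_quant.tflite"], check=True)
```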

@hjonnala (Contributor)

@letdivedeep Thanks for sharing the workaround links.

@taloot commented Aug 2, 2021

@hjonnala

> We were able to resolve the issue with a workaround based on openvino2tensorflow:
> PyTorch -> ONNX -> OpenVINO -> edge_tpu
> https://github.com/PINTO0309/openvino2tensorflow
> blog: https://qiita.com/PINTO/items/ed06e03eb5c007c2e102

I tried it, but it didn't work; it gave me this error:
line 520, in _quantize
    return _mlir_quantize(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 218, in wrapper
    raise error from None  # Re-throws the exception.
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 208, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert.py", line 236, in mlir_quantize
    return wrap_toco.wrapped_experimental_mlir_quantize(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/wrap_toco.py", line 47, in wrapped_experimental_mlir_quantize
    return _pywrap_toco_api.ExperimentalMlirQuantizeModel(
RuntimeError: Failed to quantize:
