Not able to instantiate Interpreter with converted model #74

Closed
zye1996 opened this issue Mar 9, 2020 · 18 comments
Labels: bug (Something isn't working), runtime

@zye1996

zye1996 commented Mar 9, 2020

Hi,

I compiled a tflite model with the Edge TPU compiler and then tried to instantiate the Interpreter for inference, but it fails with:

ValueError: Found too many dimensions in the input array of operation 'reshape'.

Here is my compiled model and compile log:

Edge TPU Compiler version 2.0.291256449

Model compiled successfully in 161 ms.

Input model: retinaface_landmark_320_240_quant.tflite
Input size: 478.66KiB
Output model: retinaface_landmark_320_240_quant_edgetpu.tflite
Output size: 537.74KiB
On-chip memory available for caching model parameters: 7.69MiB
On-chip memory used for caching model parameters: 729.50KiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 90
Operation log: retinaface_landmark_320_240_quant_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 39
Number of operations that will run on CPU: 51

Operator Count Status

CONCATENATION 6 More than one subgraph is not supported
LEAKY_RELU 3 Operation is working on an unsupported data type
QUANTIZE 4 Operation is otherwise supported, but not mapped due to some unspecified limitation
QUANTIZE 3 Mapped to Edge TPU
QUANTIZE 8 More than one subgraph is not supported
PAD 5 Mapped to Edge TPU
RELU 3 More than one subgraph is not supported
CONV_2D 12 More than one subgraph is not supported
CONV_2D 19 Mapped to Edge TPU
DEPTHWISE_CONV_2D 12 Mapped to Edge TPU
RESHAPE 9 More than one subgraph is not supported
DEQUANTIZE 6 Operation is working on an unsupported data type

model.tflite.tar.gz

@Namburger

Hello, I reproduced it with this:

from tflite_runtime.interpreter import Interpreter
from tflite_runtime.interpreter import load_delegate
interpreter = Interpreter(
      model_path="./model.tflite",
      experimental_delegates=[load_delegate('libedgetpu.so.1.0')])

Do you have the model before it was compiled?

@zye1996
Author

zye1996 commented Mar 9, 2020

> Do you have the model before it was compiled?

I do, and I can confirm that it works with the Interpreter. Here it is.
model_before_compile.tflite.zip
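
For reference, a minimal sketch of the CPU-only check described here (assuming the unzipped file is named model_before_compile.tflite; the print calls are only there to confirm the graph loads):

from tflite_runtime.interpreter import Interpreter

# Load the uncompiled model without the Edge TPU delegate, i.e. plain CPU execution.
interpreter = Interpreter(model_path="./model_before_compile.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())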

@Namburger

@zye1996 hi, I discussed this issue with the team and ended up filing an internal bug to get this fixed. I'll keep you updated.

Namburger added the "runtime" and "bug" labels on May 11, 2020
@sheldoncoup

Did this ever end up being resolved? I'm having an identical issue myself.

@Namburger

@sheldoncoup apologies, this is still a wip :(

@Namburger

@sheldoncoup @zye1996
Just pinged the team and it is now being worked on; I'll keep you all updated.

@arshren

arshren commented Jul 13, 2020

I had a similar issue and was able to resolve it.

My problem was that the representative dataset I used for post-training quantization requested more images than I had provided in the images folder.
My test_dir had 99 images, but I had set the range to 100. Once I matched the range to the number of images in the folder, the issue was resolved.

import tensorflow as tf

# test_dir and keras_model are defined elsewhere in the original script.
def representative_data_gen():
    dataset_list = tf.data.Dataset.list_files(test_dir + '/*')
    for i in range(99):
        image = next(iter(dataset_list))
        image = tf.io.read_file(image)
        image = tf.io.decode_jpeg(image, channels=3)
        image = tf.image.resize(image, (360, 640))
        image = tf.cast(image / 255., tf.float32)
        image = tf.expand_dims(image, 0)
        yield [image]

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

# These set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

# And this sets the representative dataset so we can quantize the activations
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
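
A variation that avoids the hard-coded count entirely is to let the dataset drive the loop, so the generator can never request more images than the folder contains (a sketch assuming the same test_dir and 360x640 input size as above):

def representative_data_gen():
    dataset_list = tf.data.Dataset.list_files(test_dir + '/*')
    # Iterate over the files that actually exist instead of a fixed range,
    # so the generator never runs past the end of the folder.
    for image_path in dataset_list:
        image = tf.io.read_file(image_path)
        image = tf.io.decode_jpeg(image, channels=3)
        image = tf.image.resize(image, (360, 640))
        image = tf.cast(image / 255., tf.float32)
        image = tf.expand_dims(image, 0)
        yield [image]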

@Namburger

Namburger commented Jul 16, 2020

@arshren Thanks for the report, I'm surprised that tflite conversion allows that to pass in the first place o_0

Anyhow, we found the issue and fixed it internally, although it didn't quite make the cut for the latest release. If you're having this issue, I can compile the model for you @sheldoncoup

@zye1996 here is your model + log:
model_before_compile_edgetpu.tflite.tar.gz
model_before_compile_edgetpu.log
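
For a quick smoke test of the recompiled model, something along the lines of the reproduction snippet above should work (a sketch; it assumes the extracted file is named model_before_compile_edgetpu.tflite and feeds a zero-filled placeholder input):

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="./model_before_compile_edgetpu.tflite",
    experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
interpreter.allocate_tensors()

# Feed a zero-filled input of the right shape/dtype just to confirm the graph builds and invokes.
input_details = interpreter.get_input_details()[0]
dummy = np.zeros(input_details['shape'], dtype=input_details['dtype'])
interpreter.set_tensor(input_details['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(interpreter.get_output_details()[0]['index']))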

@sheldoncoup

@Namburger Glad to hear that the bug has been tracked down. I have a bunch of large (30 MB+) models to convert and a lot of testing/reconverting to do in the near future, so it wouldn't be a great use of your time to do that for me.
Is there a rough timeline for when the fix might land in a release? Or some type of patch/workaround in the meantime?

@zye1996
Author

zye1996 commented Jul 19, 2020

> @zye1996 here is your model + log: model_before_compile_edgetpu.tflite.tar.gz, model_before_compile_edgetpu.log

Thank you so much!

@vathsan97

vathsan97 commented Jul 21, 2020

Hi @Namburger, I'm having the same issue. Could you please help me compile my tflite model as well? Here are the tflite models from before and after compilation:
model_before_compilation_resnet.zip
model_after_compilation_resnet.zip

@Namburger

@vathsan97 I need the non-edgetpu version from before compilation; this one is already compiled.

@vathsan97

@Namburger Please find the non-edgetpu version attached here:
model_before_compilation_resnet.zip

@Namburger

@vathsan97
Here we go :)
https://drive.google.com/file/d/1fod-rEXjL-ULjbuyE-3vCiXLxzwIKQgH/view?usp=sharing

@Sri-Butlr

Hi @Namburger, it would be great if I could get this model converted as well. Could I know when there will be a new release with this bug fixed?
Thanks again!
resnet50_age_gender_2_quant.tflite.zip

@Namburger

@BernardinD

@Namburger has a fix to this error been released?

@Namburger

@BernardinD we are expecting a release in mid Q4 which should include this fix!
