
mobilenetv3 tflite model input image size 320x320 crash in android #7733

Open
WestbrookZero opened this issue Oct 31, 2019 · 4 comments

@WestbrookZero

I used the TensorFlow Object Detection API to train MobileNetV3 with https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssdlite_mobilenet_v3_large_320x320_coco.config, and ran the model in the Android project at https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android, but the app crashes. My input image size is 320, since ssdlite_mobilenet_v3_large_320x320_coco.config defaults to 320. When I used MobileNetV2 with an input image size of 300, it worked without crashing. I don't know what to modify. Android log error: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x20 in tid 5931 (inference)

@tensorflowbutler tensorflowbutler added the stat:awaiting response Waiting on input from the contributor label Nov 1, 2019
@tensorflowbutler
Member

Thank you for your post. We noticed you have not filled out the following field in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
What is the top-level directory of the model you are using
Have I written custom code
OS Platform and Distribution
TensorFlow installed from
TensorFlow version
Bazel version
CUDA/cuDNN version
GPU model and memory
Exact command to reproduce

@Arylu

Arylu commented Nov 14, 2019

I also ran into this issue.

@Tanmay-Kulkarni101

There are two things you should account for.

  // Configuration values for the prepackaged SSD model.
  private static final int TF_OD_API_INPUT_SIZE = 320;
  private static final boolean TF_OD_API_IS_QUANTIZED = false;
  private static final String TF_OD_API_MODEL_FILE = "detect.tflite";
  private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/labelmap.txt";
  private static final DetectorMode MODE = DetectorMode.TF_OD_API;

First, the input details: TF_OD_API_INPUT_SIZE must match the model, so the input has to be 320x320.
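To see why a size mismatch can crash in native code rather than fail with a clean Java exception, compare the size of the input buffer the app allocates against what the model actually reads. A minimal sketch (the helper name and the float32-RGB assumption are mine, not from the demo app):

```java
// Sketch: the demo allocates its input ByteBuffer from TF_OD_API_INPUT_SIZE.
// If that constant still says 300 while the model expects 320x320, the native
// interpreter reads past the end of the buffer, which can surface as SIGSEGV.
public class InputSizeCheck {
    // Bytes for one float32 RGB image of side x side: 1 * side * side * 3 * 4.
    static int bufferBytes(int side) {
        return side * side * 3 * 4;
    }

    public static void main(String[] args) {
        int allocated = bufferBytes(300); // buffer sized for a 300x300 model
        int expected  = bufferBytes(320); // what the 320x320 model reads
        System.out.println("allocated=" + allocated + " bytes");
        System.out.println("expected=" + expected + " bytes");
        System.out.println("shortfall=" + (expected - allocated) + " bytes");
    }
}
```

So setting TF_OD_API_INPUT_SIZE = 320 makes the allocated buffer match what the model consumes.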

Second, change the code below

      recognitions.add(
          new Recognition(
              "" + i,
              labels.get((int) outputClasses[0][i] + labelOffset),
              outputScores[0][i],
              detection));

to the following code:

        final int classLabel = (int) outputClasses[0][i] + labelOffset;
        if (inRange(classLabel, labels.size(), 0) && inRange(outputScores[0][i], 1, 0)) {
            recognitions.add(
                    new Recognition(
                            "" + i,
                            labels.get(classLabel),
                            outputScores[0][i],
                            detection));
        }
    }
    Trace.endSection(); // "recognizeImage"
    return recognitions;
  }



private boolean inRange(float number, float max, float min) {
    return number < max && number >= min;
}
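The guard above matters because the SSD postprocess op can emit class indices outside the label list, and calling labels.get() with such an index throws (or corrupts the result list). Here is a small self-contained sketch of the same inRange guard applied to some made-up raw outputs, to show which detections it keeps and drops (the sample labels and values are illustrative, not from the demo app):

```java
import java.util.Arrays;
import java.util.List;

public class RangeGuardDemo {
    // Same semantics as the inRange helper in the snippet above:
    // true if min <= number < max.
    static boolean inRange(float number, float max, float min) {
        return number < max && number >= min;
    }

    public static void main(String[] args) {
        List<String> labels = Arrays.asList("person", "car", "dog");
        // Hypothetical raw class outputs; 7 and -1 are out of range.
        float[] rawClasses = {0f, 2f, 7f, -1f};
        int kept = 0;
        for (float c : rawClasses) {
            int classLabel = (int) c;
            if (inRange(classLabel, labels.size(), 0)) {
                System.out.println("keep: " + labels.get(classLabel));
                kept++;
            } else {
                System.out.println("drop out-of-range class " + classLabel);
            }
        }
        System.out.println("kept=" + kept); // prints "kept=2"
    }
}
```

With the guard in place, an out-of-range index is silently dropped instead of crashing the recognizeImage() call.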

I hope this helps.

@WestbrookZero
Author

WestbrookZero commented Nov 18, 2019

@Tanmay-Kulkarni101 Thanks for the help, but it still crashes. The log is below:
/data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/lib/arm/libtensorflowlite_jni.so
/data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/lib/arm/libtensorflowlite_jni.so
/data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/lib/arm/libtensorflowlite_jni.so
/data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/lib/arm/libtensorflowlite_jni.so
/data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/lib/arm/libtensorflowlite_jni.so (Java_org_tensorflow_lite_NativeInterpreterWrapper_run+26)
/data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/oat/arm/base.odex (offset 0x4f000) (org.tensorflow.lite.NativeInterpreterWrapper.run+120)
A/DEBUG: #6 pc 00417d75 /system/lib/libart.so (art_quick_invoke_stub_internal+68)
A/DEBUG: #7 pc 003f14bb /system/lib/libart.so (art_quick_invoke_static_stub+222)
A/DEBUG: #8 pc 000a1043 /system/lib/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+154)
A/DEBUG: #9 pc 001e890d /system/lib/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+232)
A/DEBUG: #10 pc 001e35e9 /system/lib/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+776)
A/DEBUG: #11 pc 003ecfbb /system/lib/libart.so (MterpInvokeStatic+130)
A/DEBUG: #12 pc 0040ac94 /system/lib/libart.so (ExecuteMterpImpl+14612)
A/DEBUG: #13 pc 00798b90 /data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/oat/arm/base.vdex (org.tensorflow.lite.NativeInterpreterWrapper.run+200)
A/DEBUG: #14 pc 001c7f61 /system/lib/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.2696875303+352)
A/DEBUG: #15 pc 001cc82f /system/lib/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+146)
A/DEBUG: #16 pc 001e35d3 /system/lib/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+754)
A/DEBUG: #17 pc 003ebfdf /system/lib/libart.so (MterpInvokeVirtual+442)
A/DEBUG: #18 pc 0040ab14 /system/lib/libart.so (ExecuteMterpImpl+14228)
A/DEBUG: #19 pc 007983a4 /data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/oat/arm/base.vdex (org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs+10)
A/DEBUG: #20 pc 001c7f61 /system/lib/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.2696875303+352)
A/DEBUG: #21 pc 001cc77b /system/lib/libart.so (art::interpreter::EnterInterpreterFromEntryPoint(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*)+82)
A/DEBUG: #22 pc 003df823 /system/lib/libart.so (artQuickToInterpreterBridge+890)
A/DEBUG: #23 pc 0041c2ff /system/lib/libart.so (art_quick_to_interpreter_bridge+30)
A/DEBUG: #24 pc 0007e103 /dev/ashmem/dalvik-jit-code-cache (deleted) (com.tfdetection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage+1426)
A/DEBUG: #25 pc 00417dbb /system/lib/libart.so (art_quick_osr_stub+42)
A/DEBUG: #26 pc 0024eb11 /system/lib/libart.so (art::jit::Jit::MaybeDoOnStackReplacement(art::Thread*, art::ArtMethod*, unsigned int, int, art::JValue*)+1464)
A/DEBUG: #27 pc 003f09b7 /system/lib/libart.so (MterpMaybeDoOnStackReplacement+86)
A/DEBUG: #28 pc 004175f4 /system/lib/libart.so (ExecuteMterpImpl+66164)
A/DEBUG: #29 pc 00436096 /data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/oat/arm/base.vdex (com.tfdetection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage+226)
A/DEBUG: #30 pc 001c7f61 /system/lib/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.2696875303+352)
A/DEBUG: #31 pc 001cc82f /system/lib/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+146)
A/DEBUG: #32 pc 001e35d3 /system/lib/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+754)
A/DEBUG: #33 pc 003ecbad /system/lib/libart.so (MterpInvokeInterface+1020)
A/DEBUG: #34 pc 0040ad14 /system/lib/libart.so (ExecuteMterpImpl+14740)
A/DEBUG: #35 pc 00426690 /data/app/com.pateonavi.naviapp-3W3Q14BCf_a4RzFYXbDtAg==/oat/arm/base.vdex (com.tfdetection.TFDetector.processImage+176)

@tensorflowbutler tensorflowbutler removed the stat:awaiting response Waiting on input from the contributor label Nov 19, 2019
@ravikyram ravikyram added the models:research models that come under research directory label Jun 19, 2020
@jaeyounkim jaeyounkim added models:research:odapi ODAPI and removed models:research models that come under research directory labels Jun 25, 2021