
Deploy your TensorFlow Lite Model in Android


TensorFlow Lite is TensorFlow’s lightweight solution for mobile and embedded devices.

TensorFlow Lite is a good fit for mobile and embedded devices because:

  • TensorFlow Lite enables on-device machine learning inference with low latency, so it is fast.
  • TensorFlow Lite has a small binary size, so it suits mobile devices.
  • TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API.

TensorFlow Lite uses many techniques for achieving low latency such as:

  • Optimizing the kernels for mobile apps.
  • Pre-fused activations.
  • Quantized kernels that allow smaller and faster (fixed-point math) models.

How to use TensorFlow Lite in an Android application?

1. Convert the model to .tflite

To run a model with TensorFlow Lite, you first have to convert it to the .tflite format that TensorFlow Lite accepts. Follow the steps from here. There are also several other ways to convert a model to TFLite.

You can check whether all of your model's operators are supported in TF Lite here.

2. Build the JAR and .so file

git clone --recurse-submodules  https://github.com/tensorflow/tensorflow.git

Note: --recurse-submodules is required to pull in the submodules.

Download the Android NDK from here.

Download the Android SDK, or point to the SDK already installed by Android Studio.

Install Bazel from here. Bazel is the primary build system for TensorFlow.

Now edit the WORKSPACE file, which you can find in the root directory of the TensorFlow repository cloned earlier.

# Uncomment and update the paths in these entries to build the Android demo.
#android_sdk_repository(
#    name = "androidsdk",
#    api_level = 23,
#    build_tools_version = "25.0.1",
#    # Replace with path to Android SDK on your system
#    path = "<PATH_TO_SDK>",
#)
#
#android_ndk_repository(
#    name="androidndk",
#    path="<PATH_TO_NDK>",
#    api_level=14)

Then build the .so file.

bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
   --crosstool_top=//external:android/crosstool \
   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
   --cpu=armeabi-v7a

Replace armeabi-v7a with your desired target architecture.

The library will be located at:

bazel-bin/tensorflow/contrib/android/libtensorflow_inference.so

To build the Java counterpart:

bazel build //tensorflow/contrib/android:android_tensorflow_inference_java

We can find the JAR file at:

bazel-bin/tensorflow/contrib/android/libandroid_tensorflow_inference_java.jar

Now we have both the JAR and the .so file. Prebuilt copies of both are available in the example project linked under the references below, so you can also use them directly.

3. Edit your app’s gradle file

After copying the built libraries from the previous step into a tensorflow directory in your app, edit your gradle file so that it finds and loads these libraries. The change should look something like this:

repositories {
    flatDir {
        dirs "tensorflow"   // directory holding the built JAR
    }
}
dependencies {
    compile(name:'libandroid_tensorflow_inference_java', ext:'jar')
}
android {
    sourceSets {
        // directories holding the built .so files (one subfolder per ABI)
        jniLibs.srcDirs = ['libs', 'tensorflow/prebuiltLibs']
    }
}
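
Optionally, before writing any inference code, you can check at runtime that the packaged native library actually resolves for the device's ABI. A minimal sketch (the class name and log tag are made up for illustration; the library name matches the libtensorflow_inference.so built in step 2):

import android.util.Log;

public class NativeLibCheck {
    private static final String TAG = "NativeLibCheck"; // hypothetical log tag

    // Tries to load the native inference library built in step 2.
    // loadLibrary() is a no-op if the library is already loaded, and throws
    // UnsatisfiedLinkError if no libtensorflow_inference.so was packaged
    // for this device's ABI (e.g. armeabi-v7a).
    public static boolean nativeLibraryAvailable() {
        try {
            System.loadLibrary("tensorflow_inference");
            return true;
        } catch (UnsatisfiedLinkError e) {
            Log.e(TAG, "libtensorflow_inference.so missing for this ABI", e);
            return false;
        }
    }
}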

4. Load and run the model in your Android app code

Now that the TensorFlow libraries are ready to be invoked from your app, you can use them as follows:

import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

/** One time initialization: */
TensorFlowInferenceInterface tensorflow = new TensorFlowInferenceInterface();
tensorflow.initializeTensorFlow(getAssets(), "file:///android_asset/model.pb");

/** Continuous inference (floats used in example, can be any primitive): */

// loading new input
tensorflow.fillNodeFloat("input:0", INPUT_SHAPE, input); // INPUT_SHAPE is an int[] of expected shape, input is a float[] with the input data

// running inference for given input and reading output
String outputNode = "output:0";
String[] outputNodes = {outputNode};
tensorflow.runInference(outputNodes);
tensorflow.readNodeFloat(outputNode, output); // output is a preallocated float[] in the size of the expected output vector
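
Since step 1 produced a .tflite file, you may prefer to load it directly with the TensorFlow Lite runtime (the org.tensorflow.lite package, available through the org.tensorflow:tensorflow-lite Gradle dependency) instead of the inference interface above. A minimal sketch, assuming a model.tflite in assets and made-up input/output shapes:

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import org.tensorflow.lite.Interpreter;

public class TfLiteClassifier {
    private final Interpreter interpreter;

    public TfLiteClassifier(Context context) throws IOException {
        // Memory-map the model from assets; the .tflite asset must be stored
        // uncompressed for the mapping to succeed.
        AssetFileDescriptor fd = context.getAssets().openFd("model.tflite"); // assumed file name
        FileInputStream stream = new FileInputStream(fd.getFileDescriptor());
        FileChannel channel = stream.getChannel();
        MappedByteBuffer model = channel.map(
                FileChannel.MapMode.READ_ONLY, fd.getStartOffset(), fd.getDeclaredLength());
        interpreter = new Interpreter(model);
        // Optional: delegate supported ops to the Android Neural Networks API.
        // interpreter.setUseNNAPI(true);
    }

    // Runs one inference; the shapes here are assumptions for illustration.
    public float[][] classify(float[][] inputData) {
        float[][] output = new float[1][10]; // assumed output shape
        interpreter.run(inputData, output);
        return output;
    }
}

Memory-mapping keeps the model out of the Java heap, which is why the asset has to be stored uncompressed (for example via aaptOptions noCompress "tflite" in the gradle file).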

5. Tell ProGuard not to obfuscate TensorFlow

This is something you may only notice when running a release build.

You’ll want to edit your proguard-project.txt to make sure TensorFlow JNI calls will work well:

-keep class org.tensorflow.** { *; }

That’s it! Now run your app and hope for the best :)


Reference Links:

https://medium.com/joytunes/deploying-a-tensorflow-model-to-android-69d04d1b0cba

https://blog.mindorks.com/android-tensorflow-machine-learning-example-ff0e9b2654cc

https://github.com/amitshekhariitbhu/Android-TensorFlow-Lite-Example