
I am trying to implement object detection on Flutter using the MobileNetV2 model. Since most of the examples and implementations available online for Flutter apps do not use MobileNetV2, I took a long route to reach that phase.

The way I achieved this is as follows:

1) Created a Python script that uses the MobileNetV2 model of Keras (TensorFlow backend), pre-trained on ImageNet for 1000 classes, and tested it with images to confirm that it detects objects correctly and returns the correct labels. [Python script provided below for reference]

2) Converted the same MobileNetV2 Keras model (MobileNetV2.h5) to a TensorFlow Lite model (MobileNetV2.tflite)

3) Followed an existing example of creating a Flutter app that uses TensorFlow Lite (https://itnext.io/working-with-tensorflow-lite-in-flutter-f00d733a09c3). Replaced the TFLite model shown in the example with the MobileNetV2.tflite model, and used the ImageNet classes/labels from https://gist.github.com/aaronpolhamus/964a4411c0906315deb9f4a3723aac57 as the labels.txt. [GitHub project of the Flutter example is provided here: https://github.com/umair13adil/tensorflow_lite_flutter]

When I now run the Flutter app, it runs without any error; however, the labels it predicts during classification are not correct. For example, it classifies an orange (object id: n07747607) as poncho (object id: n03980874), and a pomegranate (object id: n07768694) as banded_gecko (object id: n01675722).

However, if I test the same pictures with my Python script, it returns the correct labels. So I am wondering whether the issue is actually with the labels.txt used in the Flutter app, where the order of the labels does not match the output indices of the model.

Can anyone suggest how I can resolve this issue so that the correct objects are classified? How can I get the ImageNet labels in the order used by the Keras MobileNetV2, so that I can use them in the Flutter app?

My Flutter app to detect objects using MobileNetV2 can be downloaded from here: https://github.com/somdipdey/Tensorflow_Lite_Object_Detection_Flutter_App

My Python script, which converts the MobileNetV2 Keras model to TFLite while also testing it on an image for classification, is as follows:

import tensorflow as tf
from tensorflow import keras

# use tf.keras consistently; mixing standalone `keras` imports with a
# tf.keras model can cause subtle preprocessing/compatibility bugs
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions
import numpy as np

from PIL import Image
import requests
from io import BytesIO


# load the model
model = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=True)

#model = tf.keras.models.load_model('MobileNetV2.h5')

# To save model
model.save('MobileNetV2.h5')

# choose the URL of the image that you want
URL = "https://images.unsplash.com/photo-1557800636-894a64c1696f?ixlib=rb-1.2.1&w=1000&q=80"
# get the image
response = requests.get(URL)
img = Image.open(BytesIO(response.content))
# resize the image to the model's expected input size (224x224 for MobileNetV2)
img = img.resize((224, 224))

##############################################
# if you want to read the image from your PC
#############################################
# img_path = 'myimage.jpg'
# img = image.load_img(img_path, target_size=(224, 224))  # MobileNetV2 expects 224x224 input
#############################################



# convert to numpy array
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

features = model.predict(x)

# decode the top 10 predictions
num_top = 10
labels = decode_predictions(features, top=num_top)
print(labels)

#load keras model
new_model= tf.keras.models.load_model(filepath="MobileNetV2.h5")
# Create a converter # I could also directly use keras model instead of loading it again
converter = tf.lite.TFLiteConverter.from_keras_model(new_model)
# Convert the model
tflite_model = converter.convert()
# Create the tflite model file
tflite_model_name = "MobileNetV2.tflite"
open(tflite_model_name, "wb").write(tflite_model)
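As a sanity check on step 2, the converted file can also be run through `tf.lite.Interpreter` directly in Python before it ever reaches Flutter. A minimal sketch (assuming the script above has already produced MobileNetV2.tflite; `tflite_top1` is just an illustrative helper name):

```python
import numpy as np
import tensorflow as tf

def tflite_top1(model_path, x):
    """Run one preprocessed batch through a .tflite model and
    return the index of the highest-scoring output unit."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], x.astype(np.float32))
    interpreter.invoke()
    return int(np.argmax(interpreter.get_tensor(out["index"])[0]))

# With `x` and `features` from the script above, these two indices
# should match if the conversion preserved the model:
# print(tflite_top1("MobileNetV2.tflite", x), int(np.argmax(features[0])))
```

If the two top-1 indices agree, the .tflite file itself is fine, and the mislabelling must come from the Flutter side (label ordering or image preprocessing).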

1 Answer


Let me start by sharing the ImageNet labels in two formats, JSON and txt. Given that MobileNetV2 is trained on ImageNet, it should be returning results based on these labels.
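If you prefer to regenerate the file yourself: Keras' `decode_predictions` maps output index i to a (WordNet ID, label) pair via `imagenet_class_index.json`, so writing those labels out in index order produces a labels.txt that matches the model's 1000 outputs. A sketch (the URL below is the one Keras itself downloads the index from; network access assumed):

```python
import json
import urllib.request

# Same class-index file that Keras' decode_predictions downloads;
# its keys "0".."999" follow the model's output order.
INDEX_URL = ("https://storage.googleapis.com/download.tensorflow.org/"
             "data/imagenet_class_index.json")

def write_labels(class_index, out_path="labels.txt"):
    # Line i of labels.txt corresponds to output unit i of the model.
    with open(out_path, "w") as f:
        for i in range(len(class_index)):
            wnid, label = class_index[str(i)]
            f.write(label + "\n")

try:
    with urllib.request.urlopen(INDEX_URL) as resp:
        write_labels(json.load(resp))
except OSError:
    pass  # offline: pass a locally saved copy of the index to write_labels
```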

My initial thought is that there must be an error in the 2nd step of your pipeline. I assume you are trying to convert the trained Keras weights to TensorFlow Lite weights (is that format the same as pure TensorFlow's?). A good option would be to find weights already saved in the TensorFlow Lite format, but I guess they might not be available, which is why you are doing the conversion. I had similar problems converting TF weights to Keras, so you must be sure the conversion was done successfully before even going to step 3, the creation of the Flutter app that uses TensorFlow Lite. A good way to achieve this is to print all the available classes of your classifier and compare them with the original ImageNet labels given above.
