Porting Mobilenet_V3 Model for Handwritten Digit Recognition on OKMX8MP Linux 5.4.70

This article provides a detailed guide on successfully porting the Mobilenet_V3 model for handwritten digit recognition on the OKMX8MP development board under the Linux 5.4.70 environment. Through specific steps—including dataset import, model training, validation, and conversion to TensorFlow Lite format—this tutorial will help you efficiently implement image recognition tasks on embedded systems.

Copyright

All depictions of the eIQ® Toolkit, eIQ® Portal interface, and related trademarks appearing herein are the property of NXP B.V.

TensorFlow™ is a trademark owned by Google LLC.

This tutorial is intended solely for technical exchange and to demonstrate the AI inference capabilities of the OKMX8MP development board.


1. Import Dataset

Before model training, the dataset must be prepared. If no ready-to-use dataset is available, one may select *Import dataset* to choose from the tool's built-in example datasets. If a custom dataset is prepared, select *Create blank project* and import it directly, as shown in the figure below.

eIQ Portal startup screen showing options to Import Dataset or Create Blank Project

The dataset used in this article comes from the tool and is loaded via TensorFlow, as shown below.

From the dropdown menu in the top-left corner, you can select from various common TensorFlow datasets. This article uses the MNIST dataset, which includes 60,000 handwritten digits as the training set and 10,000 handwritten digits as the validation set. Furthermore, the tool provides three other optional datasets:

  • cifar10: Color images of 10 classes, with 50,000 images as the training set and 10,000 images as the validation set.
  • horses_or_humans: 2 classes—humans and horses. There are 1,027 human images and 1,027 horse images.
  • tf_flowers: 5 classes, totaling 3,670 images of flowers.

Dataset selection dropdown menu featuring MNIST and CIFAR-10 options

In the top-left corner, there is also a Problem type dropdown menu for selecting the task type. The current version of the tool offers two types: Image Classification and Object Detection. For object detection tasks, only the coco/2017 dataset is currently provided, capable of detecting 80 types of objects, with 118,287 images as the training set, 5,000 images as the validation set, and 20,288 images as the test set.

Problem type selection interface showing Image Classification and Object Detection options

After selecting the dataset, click the IMPORT button, choose a save directory, and wait for the import to complete, as illustrated below:

Importing dataset progress bar in eIQ Portal

Select Save Directory

File explorer window for choosing the project save location

Waiting for Dataset Import

Visual representation of the data loading process

Once imported, you can view the MNIST dataset: the left panel displays category and quantity statistics for each image, while the right panel shows specific images from the dataset. Selecting an image allows you to view its details in the section below.

eIQ Portal dataset viewer showing MNIST handwritten digit samples and class distribution


2. Model Training

After importing the dataset, the next step is to select a model. As shown in the figure below, click the SELECT MODEL button.

Navigation to the Select Model stage in eIQ Portal

The model selection interface is shown below. The left side of this interface presents three different options with the following functions:

  • RESTORE MODEL: Load the model used in the previous session.
  • BASE MODELS: Select from the foundational models provided by the tool.
  • USER MODELS: Choose models created or imported by the user.

The right side of the interface displays models for different task types, such as classification models, image segmentation models, and object detection models.

Model selection dashboard with Restore, Base, and User model categories

This article uses the foundational model provided by eIQ Portal, therefore BASE MODELS is selected, as shown in the figure below.

Selecting the Base Models category for pre-configured architectures

The following figure shows several foundational models provided by the tool. This article employs the lightweight mobilenet_v3 model. The detailed architecture of different models can be viewed using the MODEL TOOL function on the tool's main interface.

List of available base models including MobileNetV3 for image classification

After selecting the model, the process proceeds to the training stage, with the interface shown below. The left side of the interface displays the key hyperparameters adjustable during training, including the learning rate, batch size, and number of epochs. You can adjust these based on task requirements and hardware constraints. The right side of the interface displays relevant information during training, such as curves tracking the model's accuracy and loss values.

Training configuration interface with hyperparameter settings and visualization charts

The parameter configuration used for this training is shown below. After configuration, click "Start Training".

Specific training hyperparameters: learning rate 0.001, batch size 32, epochs 10
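With the 60,000-image MNIST training set, these values determine how many optimizer steps the run will take. A quick sanity check in plain Python (dataset size taken from the article, not read from the tool):

```python
import math

train_size = 60000   # MNIST training images
batch_size = 32
epochs = 10

steps_per_epoch = math.ceil(train_size / batch_size)
total_steps = steps_per_epoch * epochs

print(steps_per_epoch)  # 1875
print(total_steps)      # 18750
```

These step counts match the x-axis range you should see on the accuracy and loss curves during training.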

The training process is as follows. You can intuitively view the model's accuracy and loss value on the right side.

Training progress chart showing increasing accuracy over steps

Training progress chart showing decreasing loss over steps

The completed model training is shown in the figure below. You can adjust the displayed step range to inspect the training curves in more detail.

Final training summary interface indicating successful completion


3. Model Validation

After model training is completed, the model needs to be validated. Select VALIDATE to enter the model validation phase, as shown in the figure below.

Navigating to the Validate tab to evaluate the trained model

The validation interface also requires setting parameters, including Softmax Threshold and some quantization parameters.
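The Softmax Threshold is the minimum softmax confidence a prediction must reach before it is counted for a class. A rough numpy illustration of the idea (the exact handling inside eIQ Portal may differ):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    return e / e.sum()

logits = np.array([0.5, 2.0, 0.1])
probs = softmax(logits)

threshold = 0.5
best = int(np.argmax(probs))
# Accept the prediction only if its softmax score clears the threshold
prediction = best if probs[best] >= threshold else None
print(prediction)  # 1
```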

Validation settings panel in eIQ Portal

The parameters set in this document are as follows. After configuration, click Validate as shown below.

Configuration for validation including quantization and threshold settings

Then, the interface will display the model's confusion matrix and accuracy, as shown below.
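A confusion matrix counts, for each true class, how often each class was predicted; the diagonal holds the correct predictions, so accuracy is the diagonal sum over the total. A minimal numpy sketch with made-up labels (mimicking a 9 misread as 7, as seen later in this article):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=10):
    """m[i, j] counts samples whose true class is i and predicted class is j."""
    m = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

# Made-up labels: every digit once, with the 9 predicted as 7
y_true = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y_pred = [0, 1, 2, 3, 4, 5, 6, 7, 8, 7]

cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()
print(accuracy)  # 0.9
```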

Confusion matrix showing model performance across handwritten digit classes 0-9

Final validation accuracy metrics and per-class precision


4. Model Conversion

After training and validation, to run the model on OKMX8MP, it must be converted into a .tflite format file. Click DEPLOY to enter the conversion interface, as shown in the figure below.

Navigating to the Deploy tab for model export

Deployment dashboard options for exporting models

In the left dropdown menu, select the export type. This document exports in TensorFlow Lite format. For lightweight deployment, both input and output data types are set to int8. The parameters are set as shown below.
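int8 quantization maps each float value x to an integer q = round(x / scale + zero_point), and the model's float value is recovered as scale * (q - zero_point). A numpy sketch with a hypothetical scale of 1/255 (the actual scale and zero point are stored in the exported .tflite file):

```python
import numpy as np

# Hypothetical int8 quantization parameters, of the kind the TFLite
# interpreter reports in input_details[0]['quantization']
scale, zero_point = 1.0 / 255.0, -128

x = np.array([0.0, 0.5, 1.0], dtype=np.float32)     # normalized pixel values

# Quantize: q = round(x / scale + zero_point), clipped to the int8 range
q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)

# Dequantize: the float value the quantized model effectively sees
x_restored = scale * (q.astype(np.float32) - zero_point)
print(q)
```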

Export settings selecting TensorFlow Lite and int8 quantization

Final export summary showing the .tflite file generation details

After setting the parameters, select EXPORT MODEL to export the .tflite model, which can then be deployed to OKMX8MP.


5. Model Prediction

Before performing predictions, the following files need to be prepared:

  • mobilenet_v3.tflite
  • Handwritten digit image files for prediction
  • Python script for loading the model and image preprocessing

The .tflite file can be exported after model validation. Handwritten digit images can be selected from the dataset or manually created, then converted into 28x28 black-background white-digit images. This document uses the following 30 images for prediction, named in the format "group_label", as shown below.

Group 1: Handwritten digit 0

Group 2: Handwritten digit 0

Group 3: Handwritten digit 0
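If your source images are black digits on a white background, they must first be inverted and downscaled to 28x28 to match MNIST. A numpy-only sketch of the idea, assuming a square grayscale array whose side is a multiple of 28 (in practice, PIL's resize handles arbitrary sizes):

```python
import numpy as np

def to_mnist_style(img):
    """Invert a black-on-white grayscale array and block-average it to 28x28."""
    img = 255 - img.astype(np.float32)          # digit becomes white on black
    h, w = img.shape
    bh, bw = h // 28, w // 28                   # block size per output pixel
    img = img[:bh * 28, :bw * 28]               # crop to a multiple of 28
    small = img.reshape(28, bh, 28, bw).mean(axis=(1, 3))
    return small.astype(np.uint8)

# Synthetic example: white 280x280 canvas with a black stroke in the middle
canvas = np.full((280, 280), 255, dtype=np.uint8)
canvas[100:180, 100:180] = 0
digit = to_mnist_style(canvas)
print(digit.shape)  # (28, 28)
```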

Next, write the Python prediction script:

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# ---------------- Configuration ----------------
MODEL_PATH = "/home/root/mobilenet_v3.tflite"
# 30 test images named "group_label": groups 1-3, labels 0-9
IMAGE_PATHS = [
    f"/home/root/{group}_{label}.jpg"
    for group in (1, 2, 3)
    for label in range(10)
]

# ---------------- Load Model ----------------
interpreter = tflite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Model input information
input_shape = input_details[0]['shape']  # [1, H, W, C]
height, width, channels = input_shape[1], input_shape[2], input_shape[3]
input_dtype = input_details[0]['dtype']  # np.float32 or np.int8

# Quantization parameters (used if the input is int8)
scale, zero_point = input_details[0]['quantization']

# ---------------- Prediction ----------------
for img_path in IMAGE_PATHS:
    # Open the image and convert it to RGB (3 channels)
    img = Image.open(img_path).convert('RGB')
    img = img.resize((width, height))
    # Convert to a numpy array
    img_array = np.array(img, dtype=np.float32)
    # If the training data is white background with black digits, invert it
    # img_array = 255 - img_array
    # Normalize to 0~1
    img_array = img_array / 255.0
    # Reshape to [1, H, W, C]
    img_array = img_array.reshape(1, height, width, channels)
    # If the model input is quantized to int8, convert accordingly
    if input_dtype == np.int8:
        img_array = img_array / scale + zero_point
        img_array = np.round(img_array).astype(np.int8)
    # Set input
    interpreter.set_tensor(input_details[0]['index'], img_array)
    # Run inference
    interpreter.invoke()
    # Get output and take the class with the highest score
    output_data = interpreter.get_tensor(output_details[0]['index'])
    predicted_label = np.argmax(output_data)
    print(f"Image {img_path} prediction result: {predicted_label}")
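For an int8-quantized model, the raw output tensor is also int8; np.argmax works on it directly, but the scores can be converted back to approximate probabilities with the output quantization parameters. A sketch with hypothetical values (real values come from output_details[0]['quantization']):

```python
import numpy as np

# Hypothetical output quantization parameters for an int8 model
out_scale, out_zero_point = 1.0 / 256.0, -128

# Hypothetical raw int8 output for a 10-class classifier
raw = np.array([-128, -128, 100, -128, -128, -128, -128, -120, -128, -128],
               dtype=np.int8)

# Dequantize: score = scale * (q - zero_point)
scores = out_scale * (raw.astype(np.float32) - out_zero_point)
predicted_label = int(np.argmax(scores))
print(predicted_label)  # 2
```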

Copy all three files into OKMX8MP, as shown below.

Command terminal showing the prediction script and model file in the target directory

Enter the following command to run the prediction:

python3 demo.py

The output is as follows:

Terminal output displaying predicted labels for each digit image

Based on the output results, it can be observed that the image 3_9.jpg has a true label of 9, but the model predicts it as 7. All other images are predicted correctly. The image 3_9.jpg is shown below:

Detailed view of the incorrectly predicted digit 9 image

From the prediction results, it can be concluded that the trained model achieves high accuracy and maintains good performance when deployed on OKMX8MP.



