Running the TensorFlow examples in MicroPython
The MicroPython interface to TensorFlow
The MicroPython interface to TensorFlow is implemented as a module written in C. It contains several Python classes:
- interpreter: gives access to the tflite-micro runtime interpreter
- tensor: allows defining and filling the input tensor and interpreting the output tensor
- audio_frontend: needed for the wake-word example
Running inference on the ESP32
Once the model has been defined and trained with TensorFlow, it must be converted into a TensorFlow Lite model, an optimized FlatBuffer format identified by the .tflite extension. This file must be transferred to the MicroPython file system. In my examples I create a "models" folder on the ESP32 into which I save the models. You can easily find the size of the model with the ls -l command on the PC. The model is read into a bytearray:
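A minimal loading sketch could look like this (the path "models/model.tflite" is a placeholder following the "models" folder convention above; use the name of your own converted model):

```python
# Read the converted .tflite model from the board's file system
# into a bytearray. The path is a placeholder for illustration.

def load_model(path):
    """Return the model file contents as a bytearray."""
    with open(path, "rb") as f:
        data = bytearray(f.read())
    # The size should match what "ls -l" showed on the PC.
    print("model size:", len(data), "bytes")
    return data

# On the board, for example:
# model = load_model("models/model.tflite")
```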
Once the model is loaded we can create the runtime interpreter:
input_callback is called when you invoke the interpreter. It gives access to the input tensor and lets you fill it with the setValue method of the tensor class. In the example below, the input tensor is filled with the pixel values of an image that has previously been read from the ESP32 file system.
Finally, output_callback is called; in it you access the output tensor and interpret the results.
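The pieces above fit together roughly as follows. Because the real interpreter and tensor classes live in the C module on the device, this sketch uses small pure-Python stand-ins (written purely for this illustration, with arbitrary tensor sizes and a fake "inference" step) to show the callback flow; on the ESP32 you would construct the real interpreter from the module instead, passing it the model bytearray and the two callbacks (the exact constructor signature may differ):

```python
# Pure-Python stand-ins modelling the documented API shape
# (getInputTensor/getOutputTensor/invoke, setValue/getValue).
# These are NOT the real C classes; they only model the call flow.

class Tensor:
    def __init__(self, size):
        self._data = [0] * size

    def setValue(self, index, value):
        self._data[index] = value

    def getValue(self, index):
        return self._data[index]

class Interpreter:
    def __init__(self, model, input_callback, output_callback):
        self._model = model
        self._input_cb = input_callback
        self._output_cb = output_callback
        self._input = Tensor(4)    # sizes are arbitrary in this sketch
        self._output = Tensor(2)

    def getInputTensor(self, tensor_number):
        return self._input

    def getOutputTensor(self, tensor_number):
        return self._output

    def invoke(self):
        # The real interpreter runs the model between the two callbacks;
        # here we just copy the first two input values to the output.
        self._input_cb(self)
        for i in range(2):
            self._output.setValue(i, self._input.getValue(i))
        self._output_cb(self)

# The callbacks mirror the flow described in the text:
def input_callback(interp):
    tensor = interp.getInputTensor(0)
    for i, pixel in enumerate([10, 20, 30, 40]):  # e.g. image pixels
        tensor.setValue(i, pixel)

results = []

def output_callback(interp):
    tensor = interp.getOutputTensor(0)
    results.append([tensor.getValue(i) for i in range(2)])

interpreter = Interpreter(b"fake-model", input_callback, output_callback)
interpreter.invoke()
print(results[0])  # prints [10, 20]
```

The key design point survives the stand-in: you never touch the tensors directly, you only react inside the two callbacks the interpreter calls for you.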
The classes and their resources
tensor
The tensor class has the methods:
- getValue(index)
- setValue(index,value)
- getType()
- quantizeFloatToInt8()
- quantizeInt8ToFloat()
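The two quantize helpers convert between the int8 values stored in a quantized tensor and real float values. TensorFlow Lite uses an affine quantization scheme, real_value = scale * (int8_value - zero_point). The standalone functions below (illustrative only, with made-up scale and zero_point; the actual methods belong to the tensor class and take their parameters from the model) show the arithmetic involved:

```python
# TFLite affine quantization:
#   real_value = scale * (int8_value - zero_point)
# scale and zero_point come from the model; these values are
# made up for illustration.
SCALE = 0.05
ZERO_POINT = -128

def quantize_float_to_int8(value, scale=SCALE, zero_point=ZERO_POINT):
    q = int(round(value / scale)) + zero_point
    return max(-128, min(127, q))   # clamp to the int8 range

def quantize_int8_to_float(value, scale=SCALE, zero_point=ZERO_POINT):
    return scale * (value - zero_point)

q = quantize_float_to_int8(1.0)
print(q, quantize_int8_to_float(q))
```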
interpreter
The interpreter has methods to get the input and output tensors and for invocation:
- getInputTensor(tensor_number)
- getOutputTensor(tensor_number)
- invoke()
Here is a description of the MicroPython version of the examples:
--
Uli Raich - 2023-09-07