MicroAI
Principle
The MicroAI Foundation Library defines a low-level Machine Learning framework for embedded devices. It allows applications to run inferences on trained Machine Learning models.
Functional Description
A typical Application using MicroAI loads a model binary file, reads its input/output characteristics, and finally performs an inference, as sketched below.
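For illustration, here is a minimal Java sketch of that flow. The class and method names (MLInferenceEngine, getInputTensor, setInputData, run, getOutputData) and the model path are assumptions made for this example; check them against the MicroAI API Javadoc before use.

import ej.microai.InputTensor;
import ej.microai.MLInferenceEngine;
import ej.microai.OutputTensor;

public class InferenceExample {
    public static void main(String[] args) {
        // Load the trained model binary file (hypothetical resource path).
        MLInferenceEngine engine = new MLInferenceEngine("/model/model.tflite");

        // Read the input characteristics and fill the input tensor.
        InputTensor inputTensor = engine.getInputTensor(0);
        float[] inputData = new float[inputTensor.getNumberElements()];
        // ... fill inputData with application data ...
        inputTensor.setInputData(inputData);

        // Perform the inference.
        engine.run();

        // Read the inference result from the output tensor.
        OutputTensor outputTensor = engine.getOutputTensor(0);
        float[] outputData = new float[outputTensor.getNumberElements()];
        outputTensor.getOutputData(outputData);
    }
}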
The MicroAI integration in a VEE Port relies on a native AI framework (TensorFlow Lite, ONNX Runtime, etc.) to implement these functionalities.
Dependencies
LLML_impl.h implementation (see LLML: MicroAI).
A port of MicroAI for TensorFlow Lite can be found in MicroAI Abstraction Layer for TensorFlow Lite.
Installation
MicroAI is an additional module. To enable it, the MicroAI Pack must be installed in your VEE Port:
microejPack("ej.api:microai:2.1.0")
Use
See the MicroAI API chapter in the Application Developer Guide.