MicroAI
Principle
The MicroAI Foundation Library defines a low-level Machine Learning framework for embedded devices. It allows running inference on trained Machine Learning models.
Functional Description
A typical Application using MicroAI will load a model binary file, read its input/output characteristics, and finally perform an inference.
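The load → inspect → infer flow described above can be sketched as follows. Note that the class and method names below (`MLInferenceEngine`, `getInputSize()`, `run()`) are simplified stand-ins chosen for illustration, not the actual `ej.api:microai` API; refer to the MicroAI API chapter for the real types.

```java
import java.util.Arrays;

public class InferenceSketch {
    // Simplified stand-in for a loaded model's inference engine
    // (NOT the real ej.api:microai API).
    static final class MLInferenceEngine {
        private final int inputSize;
        private final int outputSize;

        MLInferenceEngine(String modelPath, int inputSize, int outputSize) {
            // In a real engine, sizes would be read from the model binary.
            this.inputSize = inputSize;
            this.outputSize = outputSize;
        }

        int getInputSize()  { return inputSize; }
        int getOutputSize() { return outputSize; }

        // Dummy inference: writes the sum of the input into each output slot.
        float[] run(float[] input) {
            float sum = 0f;
            for (float v : input) {
                sum += v;
            }
            float[] out = new float[outputSize];
            Arrays.fill(out, sum);
            return out;
        }
    }

    public static void main(String[] args) {
        // 1. Load the model binary (path is illustrative).
        MLInferenceEngine engine = new MLInferenceEngine("/model.tflite", 4, 2);
        // 2. Read its input/output characteristics.
        float[] input = new float[engine.getInputSize()];
        Arrays.fill(input, 1.0f);
        // 3. Run an inference.
        float[] output = engine.run(input);
        System.out.println(Arrays.toString(output)); // prints [4.0, 4.0]
    }
}
```

The three numbered steps in `main` mirror the sequence above: the engine is created once from the model file, its tensor characteristics drive buffer allocation, and inference is then run as many times as needed.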
The MicroAI integration in a VEE Port relies on a native AI framework (TensorFlow Lite, ONNX Runtime, etc.) to implement all of the above functionality.
Dependencies
LLML_impl.h implementation (see LLML: MicroAI).
Installation
MicroAI is an additional module. To enable it, the MicroAI Pack must be installed in your VEE Port:
api("ej.api:microai:2.0.0")
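For context, in a Gradle-based (MicroEJ SDK 6) project this line sits inside the dependencies block of the build script. This is a sketch of where the line goes, assuming the standard `build.gradle.kts` layout; your project structure and version may differ.

```kotlin
dependencies {
    // MicroAI API, coordinates as shown in this document
    api("ej.api:microai:2.0.0")
}
```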
Use
See MicroAI API chapter in Application Developer Guide.