This repository provides AI example applications for the Synaptics Astra SL16xx series, covering computer vision, speech processing, and large language models (LLMs). Follow the instructions below to set up your environment and run the examples in a few minutes.
The examples in this repository are designed to work with the Astra SL1680 processor on the Astra Machina Dev Kit. Vision examples run on the NPU; the other examples run on the CPU.
Note: On the Astra SL1640 processor (which also uses the NPU), these examples can still be run after adding the required packages to the OOBE image via bitbake.
Note: On the Astra SL1620 processor (which uses the GPU), the Vision - Classification Model is not pre-installed. The other examples can still be run after adding the required packages to the OOBE image via bitbake.
- Astra – Explore the Astra AI platform.
- Astra Machina – Discover our powerful development kit.
- AI Developer Zone – Find step-by-step tutorials and resources.
For instructions on how to set up the Astra Machina board, see the Setting up the hardware guide.
Clone the repository using the following command:
git clone https://github.com/synaptics-synap/examples.git
Navigate to the repository directory:
cd examples
To get started, set up your Python environment. This step ensures all required dependencies are installed and isolated within a virtual environment:
python3 -m venv .venv --system-site-packages
source .venv/bin/activate
pip install --upgrade pip
For Astra OOBE SDK 2.0 (scarthgap) and above, use the Python 3.12 packages:
pip install -r requirements-py312.txt
For Astra OOBE SDK 1.8 (kirkstone) and below, use the Python 3.10 packages:
pip install -r requirements.txt
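If you are unsure which requirements file matches your image, you can check the Python version shipped on the board; based on the mapping above, 3.12 corresponds to requirements-py312.txt and 3.10 to requirements.txt:

```python
# Prints the Python version of the interpreter on your Astra image.
# 3.12 -> use requirements-py312.txt, 3.10 -> use requirements.txt
import sys

print(f"{sys.version_info.major}.{sys.version_info.minor}")
```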
To run a YOLOv8-small image classification model on an image:
python3 -m vision.image_class samples/fish.jpg
To run a YOLOv8-small body pose model and infer results using a connected camera:
python3 -m vision.body_pose 'cam'
Moonshine is a speech-to-text model that transcribes spoken audio into text.
To transcribe an audio file (for example, jfk.wav):
python3 -m speech_to_text.moonshine 'samples/jfk.wav'
To enable real-time speech transcription using a USB microphone (such as one built into a webcam or headset):
python3 -m speech_to_text.pipeline
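If the pipeline does not pick up your microphone, one way to list the capture devices visible to Python is the sounddevice package. This is only a suggestion, not part of the example, and it assumes sounddevice is installed in your virtual environment:

```python
# Lists audio devices visible to PortAudio; entries with max_input_channels > 0
# are capture devices (for example, a USB webcam or headset microphone).
# Assumes the optional sounddevice package is installed: pip install sounddevice
import sounddevice as sd

for idx, dev in enumerate(sd.query_devices()):
    if dev["max_input_channels"] > 0:
        print(idx, dev["name"])
```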
Convert a given text string into synthetic speech using Piper:
python3 -m text_to_speech.piper "synaptics astra example"
Warning
This step will overwrite existing sqlite3 libraries if present. It is recommended to verify whether sqlite3 is already installed before continuing:
(ls /usr/lib/libsqlite3* 1>/dev/null && python3 -c "import sqlite3") && echo "sqlite3 available"
SQLite3 is required for certain AI model operations and may not be pre-installed in SDK <1.6.0. Install it using the following commands:
wget https://synaptics-synap.github.io/examples-prebuilts/packages/sqlite3_3.38.5-r0_arm64.deb
wget https://synaptics-synap.github.io/examples-prebuilts/packages/python3-sqlite3_3.10.13-r0_arm64.deb
dpkg -i python3-sqlite3_3.10.13-r0_arm64.deb sqlite3_3.38.5-r0_arm64.deb
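After installing, a quick way to confirm the Python bindings work is an in-memory query (just a sanity check, not a required step):

```python
# Sanity check: open an in-memory SQLite database and print the library version.
import sqlite3

conn = sqlite3.connect(":memory:")
print("SQLite version:", conn.execute("SELECT sqlite_version()").fetchone()[0])
conn.close()
```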
The following command installs llama-cpp-python, which enables running large language models efficiently; we provide a prebuilt wheel for Astra.
For Astra OOBE SDK 2.0 (scarthgap) and above, use the Python 3.12 wheel:
pip install https://synaptics-synap.github.io/examples-prebuilts/packages/llama_cpp_python-0.3.16-cp312-cp312-linux_aarch64.whl
For Astra OOBE SDK 1.8 (kirkstone) and below, use the Python 3.10 wheel:
pip install https://synaptics-synap.github.io/examples-prebuilts/packages/llama_cpp_python-0.3.14-cp310-cp310-linux_aarch64.whl
To run large language models such as Gemma, Qwen and DeepSeek:
For an interactive chat example:
python3 -m llm.gemma
For a chat completion example:
python3 -m llm.qwen
# python3 -m llm.deepseek
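The llama-cpp-python wheel installed above can also be driven directly. Below is a minimal chat-completion sketch; the model path is a placeholder, so point it at a GGUF model available on your board:

```python
# Minimal llama-cpp-python chat completion (model path is a placeholder).
from llama_cpp import Llama

llm = Llama(model_path="/path/to/model.gguf", n_ctx=2048, verbose=False)
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is the Synaptics Astra platform?"},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```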
To get a feel for embeddings and generate sentence embeddings using MiniLM, a lightweight transformer-based model:
python3 -m embeddings.minilm "synaptics astra example!"
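A common use of sentence embeddings is comparing them with cosine similarity. The sketch below is illustrative only: the vectors are placeholders standing in for whatever embeddings the MiniLM example produces, and it assumes numpy is installed in your environment:

```python
# Cosine similarity between two embedding vectors (placeholder values).
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=np.float32), np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb_a = [0.12, -0.03, 0.44, 0.08]  # e.g. embedding of "synaptics astra example!"
emb_b = [0.10, -0.01, 0.40, 0.11]  # e.g. embedding of a similar sentence
print(cosine_similarity(emb_a, emb_b))  # values near 1.0 => semantically similar
```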
Launch an AI-powered text assistant with tool-calling functionality:
python3 -m assistant.toolcall
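Tool calling generally works by having the model emit a structured call (a tool name plus JSON arguments) that your code dispatches to a real Python function. The sketch below only illustrates that dispatch pattern; the tool and the model output are made up and are not taken from the assistant example:

```python
# Illustrative tool-call dispatch: the "model output" below is hard-coded.
import json
from datetime import datetime, timezone

def get_time(tz: str = "UTC") -> str:
    """Example tool: return the current UTC time (real timezone handling omitted)."""
    return datetime.now(timezone.utc).strftime("%H:%M:%S") + f" ({tz})"

TOOLS = {"get_time": get_time}

# Pretend the LLM emitted this structured tool call.
model_output = '{"name": "get_time", "arguments": {"tz": "UTC"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print("Tool result:", result)
```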
See synap-rt for running real-time GStreamer-based inference.
- AI Developer Zone – Find step-by-step tutorials and resources.
- GitHub Synap-RT – Explore real-time AI pipelines with Python.
- GitHub SyNAP-Python-API – Python bindings that closely mirror our SyNAP C++ API.
- GitHub SyNAP C++ – Low-level access to our SyNAP C++ AI Framework.
- GitHub Astra SDK – Get started with the Astra SDK for AI development.
- GitHub Examples Pre-builts – Pre-built packages for Astra Machina.
We encourage and appreciate community contributions! Here's how you can get involved:
- Contribute to our Community – Share your work and collaborate with other developers.
- Suggest Features and Improvements – Have an idea? Let us know how we can enhance the project.
- Report Issues and Bugs – Help us improve by identifying and reporting any issues.
Your contributions make a difference, and we look forward to your input!
This project is licensed under the Apache License, Version 2.0.
See the LICENSE file for details.


