Edge AI for Developers

Build, optimize, and deploy Edge AI on Synaptics Astra™ with best-in-class hardware, open tools, and a proven path to scale.

Astra SL-Series →
Astra SR-Series →
Navigate AI Dev Zone →

Get started in minutes


Intro to Edge AI

Learn about running AI models directly on embedded devices in real-time.
Learn more →

Quick Start with SL-Series

Get started with quick tutorials and embark on your Edge AI journey with the Machina Dev Kits.
Learn more →

Evaluate Models for SR-Series

Evaluate your models for the high-performance SR-Series AI MCUs.
Learn more →

Models ready to go


Get your project started in minutes with the optimized models preinstalled on Synaptics Astra.

Edge AI efficiency


The hardware-aware SyNAP compiler targets the exact NPU or GPU resources available on-chip, which can significantly improve inference speed. There are also advanced optimization options, such as mixed-width and per-channel quantization.
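To make the optimization options concrete, here is a hypothetical conversion metafile sketch. The field names below are illustrative assumptions, not the exact SyNAP schema; consult the SyNAP toolkit documentation for the real format.

```yaml
# Hypothetical metafile sketch -- field names are assumptions,
# not the verified SyNAP schema.
quantization:
  data_type: uint8          # default width; mixed-width would override per-layer
  scheme: per-channel       # per-channel quantization of weights
  dataset:
    - calibration/*.jpg     # representative inputs used for calibration
```

A calibration dataset that reflects real deployment inputs is what lets per-channel quantization keep accuracy close to the float model while shrinking it for the NPU.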

Bring your own model



Have a different model you'd like to bring? Target it to Astra's on-chip NPU or GPU with one command:

$ synap convert --target ${CHIP_MODEL} --model example.torchscript
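In context, a minimal end-to-end sketch of that workflow might look like the following. The convert invocation is from the docs above; the target name, output directory flag, and artifact name are assumptions to be checked against the SDK documentation.

```shell
# Pick the chip you are targeting (SL1680 here is a hypothetical example).
export CHIP_MODEL=SL1680

# Compile the model for the on-chip NPU/GPU.
# (--out-dir and the resulting artifact name are assumptions; see the SDK docs.)
synap convert --target ${CHIP_MODEL} --model example.torchscript --out-dir build/

# The compiled artifact in build/ can then be copied to the board and run
# with the on-device tools described in the Astra SL SDK documentation.
```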

Reference Docs


🤖 SyNAP AI Toolkit

Deep dive into the SyNAP toolkit for building NPU-accelerated apps on SL-Series.

Read more →

⚙️ Advanced Optimization

Learn how to convert your existing AI models to run on Synaptics Astra SL-Series.

Read more →

💻 Astra SL SDK

Get started with the Synaptics Astra SL-Series SDK documentation.

Read more →