FPGA AI Inference Development Flow
This end-to-end AI workflow integrates hardware and software for deploying models on FPGAs:
1. Model Optimization: The OpenVINO™ toolkit converts pretrained models into Intermediate Representation (IR) files (.xml topology, .bin weights).
2. Compilation: FPGA AI Suite estimates area/performance or generates optimized architectures, then compiles the network files into a .bin that targets the FPGA, the CPU, or both.
3. Inference Runtime: Your application loads the compiled .bin at runtime through the OpenVINO Inference Engine and FPGA AI Suite APIs, which handle memory management and hardware scheduling.
4. Reference Designs: Sample designs show how to run inference on the FPGA with x86, Arm, or hostless setups.
5. Software Emulation: Evaluate FPGA AI Suite IP accuracy through the OpenVINO plugin with no hardware required (Agilex™ 5 only).



System Level Architectures

- CPU offload
- Multi-function CPU offload
- Ingest / Inline Processing + AI
- Embedded SoC FPGA + AI

Getting Started with the FPGA AI Suite
- FPGA AI Suite requires the OpenVINO toolkit for design and development.
- Purchase FPGA AI Suite Unlimited Inferences from DigiKey or Mouser. This option allows you to run unlimited inferences on hardware. Ordering code: SW-FPGAAISUITE
- Download FPGA AI Suite Limited Inferences. This option allows you to run a limited number of inferences on hardware without a purchase.
- Get user guides, release notes, and more.