FPGA AI Suite

Overview

FPGA AI Inference Development Flow

The FPGA AI Suite provides an end-to-end AI inference workflow that integrates hardware and software for deploying trained models on FPGAs:

1. Model Optimization: OpenVINO™ converts pretrained models to IR files (.xml, .bin).

2. Compilation: FPGA AI Suite estimates area and performance or generates an optimized accelerator architecture, then compiles the IR network files into a compiled network binary (.bin) targeting the FPGA and/or CPU.

3. Inference Runtime: Your application loads the compiled .bin at runtime through the OpenVINO Inference Engine and the FPGA AI Suite runtime APIs, which handle memory management and hardware scheduling.

4. Reference Designs: Sample designs show how to run inference on FPGAs with x86 or Arm host processors, or in hostless configurations.

5. Software Emulation: Evaluate FPGA AI Suite IP inference accuracy through an OpenVINO plugin, with no hardware required (Agilex™ 5 only).
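The steps above can be sketched as a command sequence. The tool names and options below are assumptions based on typical OpenVINO and FPGA AI Suite usage and may differ across releases; treat this as an illustrative sketch rather than verbatim commands.

```shell
# 1. Model Optimization: convert a trained model to OpenVINO IR (.xml + .bin).
#    'ovc' is the OpenVINO model converter CLI in recent releases;
#    the input model name here is a placeholder.
ovc resnet50.onnx

# 2. Compilation: compile the IR against a target FPGA AI Suite architecture.
#    The 'dla_compiler' invocation and flags below are assumptions;
#    consult the FPGA AI Suite documentation for the exact options.
dla_compiler \
  --network-file resnet50.xml \
  --march <architecture-file>.arch \
  --o compiled_network.bin

# 3. Inference Runtime: the application then loads compiled_network.bin
#    through the OpenVINO Inference Engine and FPGA AI Suite runtime APIs.
```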

System Level Architectures

FPGA AI Suite is flexible and configurable for a variety of system-level use cases. Figure 1 shows the typical ways to incorporate the FPGA AI Suite IP into a system. These use cases span multiple verticals, from optimized embedded platforms to applications with host CPUs to data center environments. The IP also supports hostless designs and soft processors such as the Nios® V processor.

Getting Started with the FPGA AI Suite

  1. Download the OpenVINO Toolkit 

    FPGA AI Suite requires the OpenVINO toolkit for design and development.

  2. Purchase FPGA AI Suite Unlimited Inferences from DigiKey or Mouser

    This option allows you to run unlimited inferences on hardware.

    Ordering code: SW-FPGAAISUITE

  3. Download FPGA AI Suite Limited Inferences

    This option allows you to run a limited number of inferences on hardware without a purchase.

  4. Browse Documentation

    Get user guides, release notes, and more.