Intel® FPGA AI Portfolio

Real-Time AI Optimized for Performance, Power, and Cost

The Intel® FPGA Deep Learning Acceleration Suite is an optimized set of deep learning frameworks and topologies that enables convolutional neural network (CNN)-based inference to be accelerated on FPGAs. Intel FPGAs offer a flexible platform that allows for customizable performance, customizable power, high throughput, and low-batch latency, and can be tailored to your exact specification.

The Intel FPGA Deep Learning Acceleration Suite is part of the Intel OpenVINO™ toolkit, a comprehensive solution for artificial intelligence.

Works best for:

  • Smart City
  • Smart Retail
  • Smart Factory

Intel Programmable Acceleration Card with Intel Arria® 10 GX FPGA

Intel FPGA-based acceleration platforms include PCIe-based programmable acceleration cards, socket-based server platforms with integrated FPGAs, and other platforms supported by the Acceleration Stack for Intel Xeon® CPUs with FPGAs. Intel platforms are qualified and validated by several leading original equipment manufacturer (OEM) server providers to support large-scale FPGA deployment.

Intel Arria 10 GX FPGA Development Kit

The Intel Arria 10 GX FPGA Development Kit delivers a complete design environment, including all the hardware and software you need to start taking advantage of the performance and capabilities of Intel Arria 10 GX FPGAs.

Intel FPGA Deep Learning Acceleration Suite

The Intel FPGA Deep Learning Acceleration Suite introduces acceleration hardware as part of the comprehensive Intel AI portfolio and is available in the Intel OpenVINO toolkit. Learn more about how FPGAs are being used to accelerate inference workloads.

Intel OpenVINO

The Intel OpenVINO toolkit enables the development of applications and solutions that emulate human vision. Based on convolutional neural networks (CNNs), the toolkit extends workloads across Intel hardware and maximizes performance.
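
As an illustration, here is a minimal Python sketch of how an application might run CNN inference on an FPGA through the OpenVINO Inference Engine API. The model file names are hypothetical placeholders for an IR (.xml/.bin) model produced by the OpenVINO Model Optimizer, and the HETERO:FPGA,CPU device string assumes an OpenVINO release that includes the FPGA plugin; it runs supported layers on the FPGA and falls back to the CPU for the rest.

```python
# Minimal sketch: CNN inference via the OpenVINO Inference Engine,
# targeting the FPGA plugin with CPU fallback for unsupported layers.
# "model.xml"/"model.bin" are hypothetical paths to an IR model
# produced by the OpenVINO Model Optimizer.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# HETERO:FPGA,CPU schedules supported layers on the FPGA, the rest on the CPU.
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Dummy input shaped to the network's expected NCHW layout.
n, c, h, w = net.input_info[input_blob].input_data.shape
image = np.random.rand(n, c, h, w).astype(np.float32)

result = exec_net.infer(inputs={input_blob: image})
print(result[output_blob].shape)
```

The same script can target other Intel hardware simply by changing the device string (for example, "CPU" or "GPU"), which is how the toolkit extends a single workload across the portfolio.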