FPGAs provide flexibility for AI system architects searching for competitive deep learning accelerators that also support differentiating customization. The ability to tune the underlying hardware architecture, including variable data precision, combined with software-defined processing lets the FPGA deploy state-of-the-art innovations as they emerge. Other customizations include co-processing of custom user functions adjacent to the software-defined deep neural network; typical applications are in-line image and data processing, front-end signal processing, network ingest, and I/O aggregation. Packaged with the Intel® OpenVINO™ toolkit, the FPGA gives users a complete, top-to-bottom customizable inference solution.
Hear the Chip Chat Podcast
Listen to Tony Kau, Intel FPGA Director of Product Marketing, discuss Intel FPGAs and how acceleration hardware is being used for AI inference.
Intel FPGA Solution Brief
The Intel FPGA Deep Learning Acceleration Suite is available as part of the Intel OpenVINO toolkit for your AI solutions. Learn how FPGAs accelerate deep learning inference workloads as part of the Intel AI portfolio.
At the Microsoft* Build conference, Microsoft debuted Azure Machine Learning Hardware Accelerated Models, powered by Project Brainwave and integrated with the Microsoft Azure Machine Learning SDK, in preview.