Intel® FPGAs provide flexibility for artificial intelligence (AI) system architects searching for competitive deep learning accelerators that also support differentiating customization. The ability to tune the underlying hardware architecture, including variable data precision and software-defined processing, allows the FPGA to deploy state-of-the-art innovations as they emerge. Underlying application uses include in-line image and data processing, front-end signal processing, network ingest, and I/O aggregation.
Intel FPGAs offer a cost-effective, reprogrammable platform that allows for customizable performance, customizable power, high throughput, and low-batch latency, and that can be designed to your exact specifications. Intel FPGAs offer extremely fine-grained on-chip bandwidth, driving performance on memory-bound workloads and enabling acceleration of applications from the edge of the network to the data center. Microsoft* deployed Intel® Stratix® 10 FPGAs to bring real-time AI hardware microservices to Microsoft Azure* for Project Brainwave. Learn more about our collaboration with Microsoft.
Intel leadership in technology stands out in today’s increasingly complex and heterogeneous computing world. Our mission is to deliver powerful and intuitive developer tools that can transform computer vision, deep learning, and analytics processing capabilities into applications that help turn data into intelligent insights powering AI. The OpenVINO™ toolkit allows users to target a variety of Intel architectures, while the Intel® FPGA Deep Learning Acceleration Suite targets Intel FPGAs for real-time AI, enabling a complete, top-to-bottom customizable AI inference solution. Learn how you can integrate Intel FPGAs into your application for real-time AI inferencing optimized for performance, power, and cost.
Hear the Chip Chat Podcast
Listen to Tony Kau, Intel FPGA Director of Product Marketing, discuss Intel FPGAs and how acceleration hardware is being used for AI inference.
Intel FPGA Solution Brief
The Intel FPGA Deep Learning Acceleration Suite is available as part of the OpenVINO toolkit for your AI solutions. Learn how FPGAs are used to accelerate deep learning inference workloads as part of the Intel AI portfolio.
At the Microsoft Build conference, Microsoft debuted Azure* Machine Learning Hardware Accelerated Models, powered by Project Brainwave and integrated with the Microsoft Azure* Machine Learning SDK, in preview.
OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.