Computer Vision Hardware
Choose the hardware accelerator that maximizes your application's performance, whatever type of processor you target.
Intel® CPUs offer the most universal option for computer vision tasks. With multiple product lines to choose from, you can find a range of price and performance options to meet your application and budget needs.
Intel® Processor Graphics
Many Intel processors contain integrated graphics, including Intel® HD Graphics and Intel® UHD Graphics. These GPUs have a range of general-use and fixed-function capabilities (including Intel® Quick Sync Video) that can be used to accelerate media, inference, and general computer vision operations.
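As a rough illustration of how an application might prefer integrated graphics and fall back to the CPU when no GPU is present, here is a minimal sketch. The device names ("GPU", "CPU") mirror common accelerator-plugin conventions, but the helper itself is hypothetical, not part of any Intel toolkit.

```python
# Hypothetical device-selection helper; the device names are
# illustrative conventions, not a documented Intel API.
def pick_device(available, preference=("GPU", "CPU")):
    """Return the first preferred accelerator that is present."""
    for device in preference:
        if device in available:
            return device
    raise RuntimeError("no supported inference device found")

# On a machine with Intel Processor Graphics, inference lands on the GPU:
print(pick_device(["CPU", "GPU"]))  # -> GPU
```

Real applications usually expose this preference order as configuration, so the same binary can run on CPU-only machines.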
Gain cost savings and revenue growth from integrated circuits that retrieve and classify data in real time. Use these accelerators for low-latency AI inferencing that enables safer, more interactive experiences in autonomous vehicles, robotics, IoT, and data centers.
Available in a small form factor (as a PCIe* add-in card), this design enables deep learning inference at low power and low latency. It is well suited for real-time applications with limited space and power budgets, such as surveillance, retail, medical, and machine vision.
This design clusters multiple Intel® Movidius™ Vision Processing Units (VPUs), from one to N, on an add-in card or rack-mount server module to provide deep learning inference acceleration. This family of vision accelerator designs comes in multiple form factors to cater to a wide range of vertical use cases.
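To make the clustering idea concrete, the sketch below spreads inference requests across N VPUs in round-robin order. The scheduling policy and function name are assumptions for illustration only; the actual products manage dispatch internally.

```python
from itertools import cycle

# Illustrative sketch only: round-robin dispatch of inference requests
# across N VPUs. The policy is an assumption, not documented behavior.
def round_robin_dispatch(requests, num_vpus):
    """Assign each request to a VPU index 0..num_vpus-1 in turn."""
    vpus = cycle(range(num_vpus))
    return [(request, next(vpus)) for request in requests]

# Four frames spread across a three-VPU cluster:
assignments = round_robin_dispatch(["img0", "img1", "img2", "img3"], num_vpus=3)
# img0->VPU 0, img1->VPU 1, img2->VPU 2, img3->VPU 0
```

The point of the sketch is that adding VPUs raises aggregate throughput: each device sees only 1/N of the request stream.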
Ready to Get Started?
Try out hardware powered by the Intel® Distribution of OpenVINO™ toolkit remotely using the award-winning¹ Intel® DevCloud for the Edge.
Note: Intel DevCloud for the Edge is currently available to enterprise developers only. Use your corporate email to apply.
¹Intel DevCloud for the Edge is the 2020 Vision Product of the Year in the Developer Tool category, as awarded by the Edge AI and Vision Alliance.
Develop and optimize classic computer vision applications built with the OpenCV library and other industry tools.
Accelerate and deploy neural network models across Intel® platforms with a built-in model optimizer for pretrained models and an inference engine runtime for hardware-specific acceleration.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.