Computer Vision Hardware
Choose the hardware accelerator that maximizes your application's performance, whatever processor type you target.
Intel CPUs offer the most universal option for computer vision tasks. With multiple product lines to choose from, you can find a range of price and performance options to fit your application and budget.
Many Intel processors include integrated graphics, such as Intel® HD Graphics and Intel® UHD Graphics. These GPUs offer a range of general-purpose and fixed-function capabilities (including Intel® Quick Sync Video) that can accelerate media processing, inference, and general computer vision operations.
Gain cost savings and revenue growth from integrated circuits that retrieve and classify data in real time. Use these accelerators for low-latency AI inference that enables safer, more interactive experiences in autonomous vehicles, robotics, IoT, and data centers.
Available in a small form factor (as a PCIe* add-in card), this design enables deep learning inference at low power and low latency. It is well suited for real-time applications with limited space and power budgets, such as surveillance, retail, medical, and machine vision.
This design clusters multiple (1 to N) Intel® Movidius™ Vision Processing Units (VPUs) on an add-in card or rack-mount server module to provide deep learning inference acceleration. This family of vision accelerator designs comes in multiple form factors to serve a wide range of vertical use cases.
Try out hardware powered by the Intel Distribution of OpenVINO toolkit remotely using the award-winning¹ Intel® DevCloud for the Edge.
Note: Intel DevCloud for the Edge is currently available to enterprise developers only. Use your corporate email to apply.
¹Intel DevCloud for the Edge was named the 2020 Vision Product of the Year in the Developer Tool category by the Edge AI and Vision Alliance.
Develop and optimize classic computer vision applications built with the OpenCV library and other industry tools.
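Classic computer vision applications of this kind are built from filtering and gradient operations. As a rough illustration, here is a pure-NumPy sketch of a Sobel edge filter, one such classic operation; OpenCV provides an optimized equivalent (`cv2.Sobel`), and the test image below is an illustrative assumption, not anything from the toolkit.

```python
import numpy as np

def sobel_edges(gray):
    """Approximate gradient magnitude of a 2-D grayscale image
    using 3x3 Sobel kernels (a classic computer vision operation)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Naive 2-D correlation over the valid region (no padding);
    # real libraries use optimized, often vectorized, implementations.
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# A vertical step edge produces a strong horizontal gradient.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)
```

The filter responds only where the intensity changes: columns straddling the step get a large magnitude, while flat regions stay at zero.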
Accelerate and deploy neural network models across Intel® platforms with a built-in model optimizer for pretrained models and an inference engine runtime for hardware-specific acceleration.
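As a rough sketch of that runtime workflow, the snippet below follows OpenVINO's public Python API; the model path `model.xml` and device string `"CPU"` are placeholder assumptions, and the import is guarded so the sketch stays loadable when OpenVINO is not installed.

```python
# Hedged sketch of the OpenVINO inference-engine workflow; package and
# call names follow OpenVINO's published Python API (2022+ releases).
try:
    from openvino.runtime import Core
except ImportError:  # OpenVINO not installed; keep the sketch importable
    Core = None

def compile_and_infer(model_xml, inputs, device="CPU"):
    """Read a converted IR model, compile it for a target device,
    and run one synchronous inference."""
    if Core is None:
        raise RuntimeError("OpenVINO runtime is not installed")
    core = Core()                       # discovers available devices
    model = core.read_model(model_xml)  # IR produced by the model optimizer
    compiled = core.compile_model(model, device_name=device)
    return compiled(inputs)             # CompiledModel is directly callable
```

Swapping `device` for `"GPU"` or another supported device string is how the same model is retargeted across Intel® platforms.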
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804