The Inference Engine is a high-level inference API implemented as dynamically loaded plugins for each target hardware type. It delivers optimal performance on each device without requiring you to implement and maintain multiple code paths.
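The plugin pattern described above can be sketched in a few lines. This is a toy illustration, not the OpenVINO API: the device names and the registry are stand-ins for the shared-library plugins the Inference Engine loads per device, and the point is that application code calling `infer()` is identical whichever target is selected.

```python
from typing import Callable, Dict, List

# Toy plugin registry: in the real Inference Engine these entries are
# shared libraries loaded on demand, one per hardware target.
_PLUGINS: Dict[str, Callable[[List[float]], List[float]]] = {}


def register_plugin(device: str, backend: Callable[[List[float]], List[float]]) -> None:
    """Register an inference backend for a device target."""
    _PLUGINS[device] = backend


def infer(device: str, inputs: List[float]) -> List[float]:
    """Dispatch one inference request to the plugin registered for `device`."""
    try:
        backend = _PLUGINS[device]
    except KeyError:
        raise ValueError(f"No plugin registered for device {device!r}")
    return backend(inputs)


# Two stand-in backends; the caller's code path does not change between them.
register_plugin("CPU", lambda xs: [x * 2 for x in xs])
register_plugin("GPU", lambda xs: [x * 2 for x in xs])
```

Swapping hardware then amounts to changing the device string, e.g. `infer("GPU", data)` instead of `infer("CPU", data)`.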
Take advantage of a portfolio of scalable hardware solutions to meet the various performance, power, and price requirements of any use case, powered by the Intel® Distribution of OpenVINO™ toolkit. In addition, use the Deployment Manager to generate an optimal, minimized runtime package for deployment.
CPU
Intel® CPUs offer the most universal option for computer vision tasks. With multiple product lines to choose from, you can find a range of price and performance options to meet your application and budget needs.
GPU
Many Intel® processors contain integrated graphics, including Intel® HD Graphics and Intel® UHD Graphics. These GPUs have a range of general-use and fixed-function capabilities (including Intel® Quick Sync Video) that can be used to accelerate media, inference, and general computer vision operations.
FPGA
Gain cost savings and revenue growth from integrated circuits that retrieve and classify data in real time. Use these accelerators for AI inferencing as a low-latency solution for safer, more interactive experiences in autonomous vehicles, robotics, IoT, and data centers.
Intel® Movidius™ Vision Processing Unit (VPU)
This unit enables visual intelligence with high compute per watt. It supports camera processing, computer vision, and deep learning inference.
Intel® Vision Accelerator Design
Deploy power-efficient deep neural network inference for fast, accurate video analytics and computer vision applications.
Intel® Gaussian & Neural Accelerator
This accelerator is a low-power neural coprocessor for continuous inference at the edge. It is designed to offload continuous inference workloads, such as noise reduction and speech recognition, to save power and free CPU resources.
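Each of the targets above is addressed in the Inference Engine by a device string: "CPU", "GPU", "MYRIAD" (the Movidius VPU plugin), and "GNA". As a hypothetical illustration (this helper is not part of OpenVINO, and the priority order is an assumption), the sketch below picks a device string from whatever targets an application has detected:

```python
# Hypothetical helper: choose an Inference Engine device string, preferring
# dedicated accelerators over the CPU when present. "MYRIAD" is the plugin
# name for the Intel Movidius VPU; the ordering below is illustrative only.
ACCELERATOR_PRIORITY = ["GNA", "MYRIAD", "GPU", "CPU"]


def pick_device(available: list) -> str:
    """Return the highest-priority device target found in `available`."""
    for device in ACCELERATOR_PRIORITY:
        if device in available:
            return device
    raise RuntimeError("No supported inference device available")
```

The chosen string would then be passed wherever the Inference Engine expects a device name when loading a network.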