Article

Maximize TensorFlow* Performance on the CPU: Considerations and Recommendations for Inference Workloads

This article covers performance considerations for CPU inference with Intel® Optimization for TensorFlow*.
Authored by Nathan Greeneltch (Intel) Last updated on 08/09/2019 - 02:02
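Guidance of this kind typically centers on tuning the OpenMP threading runtime before TensorFlow loads. As a minimal sketch (the specific values below are illustrative assumptions, not recommendations from the article itself):

```python
import os

# Hedged sketch: common environment settings discussed for CPU inference
# with Intel Optimization for TensorFlow. Optimal values depend on the
# workload and core count; the numbers here are illustrative assumptions.
os.environ["OMP_NUM_THREADS"] = "4"   # assume 4 physical cores available
os.environ["KMP_BLOCKTIME"] = "1"     # ms a thread spins before sleeping
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads

# Note: these must be set before TensorFlow is imported, because the
# OpenMP runtime reads them once at initialization.
```

Setting `KMP_BLOCKTIME` low tends to help inference, where threads alternate between short bursts of work and idleness.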
Article

OpenVINO™ Toolkit Release Notes

OpenVINO™ 2018 R3 Release - Gold release of the Intel® FPGA Deep Learning Acceleration Suite, which accelerates AI inference workloads using Intel® FPGAs optimized for performance, power, and cost; Windows* support for the Intel® Movidius™ Neural Compute Stick; a Python* API preview that supports the Inference Engine; initial Open Neural Network Exchange (ONNX) Model Zoo support for...
Authored by Deanne Deuermeyer (Intel) Last updated on 10/22/2018 - 23:52
Article

Inference Engine Developer Guide

Deploying deep learning networks from the training environment to embedded platforms for inference is a complex task. The Inference Engine deployment process converts a trained model to an Intermediate Representation.
Authored by Deanne Deuermeyer (Intel) Last updated on 11/12/2018 - 01:15
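The conversion to an Intermediate Representation is performed by OpenVINO's Model Optimizer. A hedged sketch of that step, using the legacy `mo.py` script shipped with OpenVINO releases of this era (the model filename and input shape are illustrative assumptions):

```shell
# Convert a frozen TensorFlow graph to the Intermediate Representation.
# The IR consists of an .xml file (network topology) and a .bin file
# (weights), which the Inference Engine then loads for deployment.
python mo.py \
    --input_model frozen_inference_graph.pb \
    --input_shape "[1,224,224,3]" \
    --output_dir ./ir_model
```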