
Inference Engine Developer Guide

Deploying deep learning networks from the training environment to embedded platforms for inference is a complex task. The Inference Engine deployment process assumes the trained model has first been converted to an Intermediate Representation (IR) by the Model Optimizer; the Inference Engine then reads the IR and runs inference on the target device.
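
As a sketch of the runtime half of that flow, the snippet below assumes the OpenVINO Inference Engine Python API (the IECore class of 2020-era releases); the IR file names, device name, and input contents are placeholders rather than values taken from this guide.

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Read the IR produced by the Model Optimizer (topology + weights).
net = ie.read_network(model="model.xml", weights="model.bin")
# Compile the network for a target device plugin.
exec_net = ie.load_network(network=net, device_name="CPU")

# Look up the first input layer and its expected shape, e.g. [1, 3, 224, 224].
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape

# Run inference on a placeholder tensor; the result maps output names to arrays.
dummy_input = np.zeros(input_shape, dtype=np.float32)
result = exec_net.infer(inputs={input_name: dummy_input})

Because device support is provided through plugins, the same IR can be loaded with a different device_name (for example "GPU" or "MYRIAD") without retraining or re-exporting the model.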
Authored by Deanne Deuermeyer (Intel). Last updated on 11/12/2018 - 01:15.