Learn how to deploy AI models with a streamlined training-to-inference workflow using the Intel® Distribution of OpenVINO™ toolkit, Microsoft Azure*, and the Open Neural Network Exchange (ONNX*) Runtime.
Tune in to hear Intel product experts Savitha Gandikota and Arindam Paul, along with Microsoft* principal program manager Manash Goswami, discuss how to train models on Microsoft Azure*, streamline them with ONNX Runtime, and run inference with the Intel Distribution of OpenVINO toolkit to accelerate time to production. With ready-to-use applications available on the Microsoft Azure Marketplace, you can take advantage of a streamlined train-to-deployment pipeline.
In this webinar, you can:
- Get an overview of how to accelerate train-to-deploy workflows
- See relevant demonstrations
- Learn how to use these applications
Edge-to-cloud solutions product manager, technical business leader, Intel Corporation
Savitha drives edge AI products. She brings a unique blend of hardware and software architecture expertise gained in the server, networking, and embedded industries. Her passion for building products from the ground up keeps her busy driving the core capabilities needed for the edge computing revolution. Savitha believes that the disruption driven by AI is here, and that building scalable edge-to-cloud solutions is the key to success.
Product manager, Intel Corporation
Arindam is a veteran of the technology industry. He has led teams at Dell EMC*, Cisco*, Akamai*, and Brocade* to market-leading innovations. Insanity* workouts keep him hungry, and technology innovations keep him foolish.
Principal program manager in the AI Frameworks team, Microsoft Corporation
Manash is responsible for defining the strategy for integrating hardware platforms with ONNX Runtime to run machine learning models, and for enabling ONNX Runtime inference solutions on mobile and IoT platforms.