
Deep Learning Model Training

Learn how to choose a framework and network topology for your deep learning model, and how to train it.

Hi. I'm Meghana Rao, and this is the AI from the Data Center to the Edge video series. In this episode, we show you some of the essential steps involved in training a deep learning model. Some of the decision metrics will revolve around your choice of framework, for example, TensorFlow*, Caffe*, or MXNet*, and your choice of network topology before training can begin. Once the framework and network have been chosen, we will show you how to take advantage of Intel® processors and Intel-optimized frameworks for training.

Training a model depends on the choice of framework and network. These are design choices and need careful consideration. Let's look at choosing a framework. The metrics to keep in mind are:

  • Open source availability and level of adoption
  • Optimizations on the CPU
  • Graph visualization and debugging
  • Library management
  • Inference targets, like CPU, integrated graphics, Intel® Neural Compute Stick, or Intel® FPGA

Likewise, choosing a network depends on the time to train, the size of the network, inference speed, and the accuracy of the trained model. This chapter explains each of these considerations. Based on these metrics, the course uses the Intel® Optimization for TensorFlow* and Inception v3 for the stolen-car identification problem.

The next step is to begin training. Here you will learn how to tweak some of the performance flags to get the best training results on Intel® Xeon® Scalable processors.
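As a rough illustration of the kind of performance flags the episode refers to, CPU training with the Intel® Optimization for TensorFlow* is commonly tuned through OpenMP environment variables plus TensorFlow's own thread-pool settings. The values below are assumptions for a hypothetical 16-core socket, not settings prescribed by the course; tune them to your own hardware and workload:

```shell
# Minimal sketch: OpenMP tuning for Intel-optimized TensorFlow on a Xeon CPU.
# All numeric values are illustrative assumptions.

export OMP_NUM_THREADS=16            # assumed: 16 physical cores per socket
export KMP_AFFINITY=granularity=fine,compact,1,0   # pin threads to cores
export KMP_BLOCKTIME=0               # let worker threads sleep right after a parallel region

# TensorFlow's intra-op and inter-op thread pools are set in your training
# script (e.g., via tf.config.threading.set_intra_op_parallelism_threads and
# tf.config.threading.set_inter_op_parallelism_threads), typically matching
# intra-op to physical cores and inter-op to the number of sockets.
```

A common starting point in Intel's published guidance is intra-op parallelism equal to the number of physical cores and inter-op parallelism equal to the number of sockets, then benchmarking from there.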

Thanks for watching this episode of AI from the Data Center to the Edge. Make sure to check out the links to register for the course. You can complete the lecture and the notebooks listed in the resources for this course, and join me in the next episode to learn more about model analysis and hyperparameter tuning.