Deep Learning Model Training

Overview

Learn how to choose a framework and network topology for your deep learning model, and how to train it.

Transcript

Hi. I'm Meghana Rao, and this is the AI from the Data Center to the Edge video series. In this episode, we show you some of the essential steps involved in training a deep learning model. Some of the decision metrics will revolve around your choice of framework, for example, TensorFlow*, Caffe*, or MXNet*, and your choice of network topology before training can begin. Once the framework and network have been chosen, we will show you how to take advantage of Intel® processors and Intel-optimized frameworks for training.

Training a model depends on the choice of framework and network. These are design choices that need careful consideration. Let's look at choosing a framework. The metrics to keep in mind are:

  • Open source availability and level of adoption
  • Optimizations on the CPU
  • Graph visualization and debugging
  • Library management
  • Inference targets, like the CPU, integrated graphics, the Intel® Neural Compute Stick, or an Intel® FPGA

Likewise, choosing a network depends on the time to train, the size of the network, inference speed, and the accuracy of the trained model. This chapter explains each of these considerations. Based on these metrics, the course uses Intel® Optimization for TensorFlow* and Inception v3 for the stolen-car identification problem.
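To make the network choice concrete, here is a minimal sketch of loading Inception v3 for transfer learning with TensorFlow's Keras API. This is illustrative only and not code from the course; the input size, class count, and training head are assumptions.

    import tensorflow as tf

    # Load Inception v3 pretrained on ImageNet, without its classification head.
    # Inception v3 expects 299x299 RGB inputs.
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False  # freeze the convolutional base for transfer learning

    # Attach a small classification head; the class count is a placeholder.
    num_classes = 2  # illustrative only
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

A pretrained base like this trades a larger network size for a much shorter time to train, which is exactly the kind of balance the metrics above are meant to capture.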

The next step is to begin training. Here you will learn how to tweak some of the performance flags to get the best training results on Intel® Xeon® Scalable processors.
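As an illustration of the kind of flags involved, here is a minimal sketch of common threading settings for Intel-optimized TensorFlow on a Xeon system. The values shown are placeholders, not recommendations from the course, and should be tuned to your own core count and workload.

    import os

    # OpenMP settings read by the Intel OpenMP runtime used by
    # Intel-optimized TensorFlow; set them before importing TensorFlow.
    os.environ["OMP_NUM_THREADS"] = "16"  # placeholder: number of physical cores
    os.environ["KMP_BLOCKTIME"] = "1"     # ms a thread waits before sleeping
    os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

    import tensorflow as tf

    # TensorFlow's own parallelism knobs: intra-op parallelism splits work
    # within a single op; inter-op parallelism runs independent ops concurrently.
    tf.config.threading.set_intra_op_parallelism_threads(16)
    tf.config.threading.set_inter_op_parallelism_threads(2)

Setting the OpenMP variables before the first TensorFlow import matters, because the runtime reads them when it initializes.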

Thanks for watching this episode of AI from the Data Center to the Edge. Be sure to check out the links to register for the course, complete the lecture and notebooks listed in the resources, and join me in the next episode to learn more about model analysis and hyperparameter tuning.

Product and Performance Information

1. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804