Deep neural networks offer remarkable representational power, delivering state-of-the-art accuracy in areas such as computer vision, speech recognition, natural language processing, and other data analytics domains. However, deep networks require large amounts of computation to train. Intel is optimizing popular frameworks such as Caffe*, TensorFlow*, Theano*, and others to significantly improve performance and reduce the overall time to train on a single node. Intel is also adding multi-node distributed training capabilities to these frameworks, spreading the computational load across multiple nodes to further reduce time to train. A workload that previously required days can now be trained in a matter of hours.
In this webinar we survey various deep learning usages, highlight those in which Caffe is used, and explain how Caffe is optimized for Intel® architecture.
What You Can Expect to Learn:
- Deep Learning Usages
- Integration of Intel® Math Kernel Library (Intel® MKL) into Caffe
- How to Use Caffe
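As a preview of the last two topics: stock Caffe can already use Intel MKL as its BLAS backend through a one-line change in `Makefile.config`. A minimal sketch, assuming a typical MKL installation (the commented paths are illustrative and depend on where MKL is installed on your system):

```makefile
# Makefile.config: select Intel MKL as the BLAS backend
# (the default is atlas; open selects OpenBLAS)
BLAS := mkl

# If MKL is not on the compiler's default search paths,
# point Caffe at it explicitly (illustrative paths):
# BLAS_INCLUDE := /opt/intel/mkl/include
# BLAS_LIB := /opt/intel/mkl/lib/intel64
```

After rebuilding (`make all`), training uses the same command-line interface as before, e.g. `caffe train --solver=solver.prototxt`; no model or solver changes are needed to pick up the MKL-backed math routines.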