File Wrapper

Parallel Universe Magazine - Issue 27, January 2017

Authored by admin. Last updated on 03/21/2019 - 12:00.
Article

Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads

This article describes performance considerations for CPU inference using Intel® Optimization for TensorFlow*. (A brief, illustrative threading sketch follows this entry.)
Authored by Nathan Greeneltch (Intel). Last updated on 07/31/2019 - 12:11.
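As a rough, non-authoritative illustration of the kind of settings CPU inference tuning for Intel® Optimization for TensorFlow* typically involves, the sketch below uses the TensorFlow 1.x session API. The frozen graph name `model.pb`, the tensor names `input:0`/`output:0`, the input shape, and the thread counts are all placeholders, not recommendations from the article; appropriate values depend on the core count and topology of the target CPU.

```python
# Minimal sketch: CPU inference with common threading knobs.
# All file names, tensor names, shapes, and thread counts are placeholders.
import os
import numpy as np
import tensorflow as tf

# OpenMP / MKL-DNN environment settings often tuned on Intel CPUs.
os.environ["OMP_NUM_THREADS"] = "4"     # e.g. number of physical cores
os.environ["KMP_BLOCKTIME"] = "1"       # ms a thread waits after finishing work
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

# Load a (hypothetical) frozen inference graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

# TensorFlow's own thread pools: threads per op and concurrently running ops.
config = tf.ConfigProto(
    intra_op_parallelism_threads=4,
    inter_op_parallelism_threads=2,
)

batch = np.zeros((1, 224, 224, 3), dtype=np.float32)  # dummy input batch
with tf.Session(config=config) as sess:
    result = sess.run("output:0", feed_dict={"input:0": batch})
    print(result.shape)
```

OMP_NUM_THREADS, KMP_BLOCKTIME, and KMP_AFFINITY act on the OpenMP/MKL-DNN layer, while intra-op and inter-op parallelism control TensorFlow's own thread pools; the article linked above covers how to choose these values.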
Article

Caffe* Optimized for Intel® Architecture: Applying Modern Code Techniques

This paper demonstrates a special version of Caffe* — a deep learning framework originally developed by the Berkeley Vision and Learning Center (BVLC) — that is optimized for Intel® architecture.
Last updated on 07/06/2019 - 16:40.
Article

Benefits of Intel® Optimized Caffe* in comparison with BVLC Caffe*

Overview
Authored by JON J K. (Intel). Last updated on 05/30/2018 - 07:00.
Article

Recipe: Optimized Caffe* for Deep Learning on Intel® Xeon Phi™ processor x200

The deep learning framework Caffe* has been optimized for Intel® Xeon Phi™ processors. This article provides detailed instructions on how to compile and run this Intel-optimized Caffe* to obtain the best performance on Intel Xeon Phi processors. (A minimal run sketch follows this entry.)
Authored by Vamsi Sripathi (Intel). Last updated on 03/21/2019 - 12:40.
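The recipe itself covers the build steps and Xeon Phi-specific flags; as a rough companion, the sketch below shows how a built Caffe* with pycaffe bindings is typically driven from Python for a CPU-mode forward pass. The model files, the blob name "data", and the input shape are hypothetical placeholders.

```python
# Minimal sketch: run one inference pass with a built Caffe (pycaffe bindings).
# Model paths, blob name, and input shape are placeholders.
import numpy as np
import caffe

caffe.set_mode_cpu()  # Xeon Phi x200 is self-booting, so CPU mode is used

# Load a trained network in test (inference) phase.
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

# Feed a single dummy image batch and run a forward pass.
net.blobs["data"].reshape(1, 3, 224, 224)
net.blobs["data"].data[...] = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = net.forward()
print({name: arr.shape for name, arr in outputs.items()})
```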