Blog post

Celebrating a Decade of Parallel Programming with Intel® Threading Building Blocks (Intel® TBB)

This year marks the tenth anniversary of Intel® Threading Building Blocks (Intel® TBB).

Authored by Sharmila C. (Intel) Last updated on 10/15/2019 - 18:16
Article

Caffe* Optimized for Intel® Architecture: Applying Modern Code Techniques

This paper demonstrates a special version of Caffe* — a deep learning framework originally developed by the Berkeley Vision and Learning Center (BVLC) — that is optimized for Intel® architecture.
Last updated on 10/15/2019 - 15:30
Article

Benefits of Intel® Optimized Caffe* in comparison with BVLC Caffe*

Overview
Authored by JON J K. (Intel) Last updated on 05/30/2018 - 07:00
Article

Intel® Math Kernel Library for Deep Neural Networks: Part 1 – Overview and Installation

Learn how to install and build the library components of the Intel MKL for Deep Neural Networks.
Authored by Bryan B. (Intel) Last updated on 03/11/2019 - 13:17
Article

Intel® Math Kernel Library for Deep Neural Networks: Part 2 – Code Build and Walkthrough

Learn how to configure the Eclipse* IDE to build the C++ code sample, along with a code walkthrough based on the AlexNet deep learning topology for AI applications.
Authored by Bryan B. (Intel) Last updated on 05/23/2018 - 11:00
Article

Intel® MKL-DNN: Part 1 – Library Overview and Installation

This developer introduction to the Intel MKL-DNN tutorial series presents Intel MKL-DNN from a developer's perspective. Part 1 provides extensive resources and details how to install and build the library components.
Authored by Bryan B. (Intel) Last updated on 05/08/2018 - 10:50
Article

Intel® MKL-DNN: Part 2 – Code Sample Creation and Walkthrough

This article (Part 2 of the tutorial series) explains how to configure an integrated development environment (IDE) to build the C++ code sample, and provides a code walkthrough based on the AlexNet* deep learning topology.
Authored by Bryan B. (Intel) Last updated on 05/23/2018 - 11:00
File Wrapper

Parallel Universe Magazine - Issue 28, April 2017

Authored by admin Last updated on 12/09/2019 - 11:40
Article

Intel® Math Kernel Library Improved Small Matrix Performance Using Just-in-Time (JIT) Code Generation for Matrix Multiplication (GEMM)

The most commonly used and performance-critical Intel® Math Kernel Library (Intel® MKL) functions are the general matrix multiply (GEMM) functions.
Authored by Gennady F. (Blackbelt) Last updated on 03/21/2019 - 03:01
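
The GEMM functions mentioned in the last entry compute C := alpha * A * B + beta * C. As a minimal illustration of those semantics only (not the Intel MKL API or its JIT code-generation interface), a NumPy sketch:

```python
import numpy as np

def gemm(alpha, A, B, beta, C):
    """Illustrative GEMM: returns alpha * (A @ B) + beta * C."""
    return alpha * (A @ B) + beta * C

# Small matrices, the case the JIT GEMM feature targets.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = np.zeros((2, 2))

result = gemm(1.0, A, B, 0.0, C)  # plain matrix product A @ B
```

In Intel MKL itself, the equivalent operation is performed by the `cblas_?gemm` family (e.g. `cblas_dgemm`), which takes the same alpha/beta scaling parameters.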