Intel® Math Kernel Library Improved Small Matrix Performance Using Just-in-Time (JIT) Code Generation for Matrix Multiplication (GEMM)

The most commonly used and performance-critical Intel® Math Kernel Library (Intel® MKL) functions are the general matrix multiply (GEMM) functions.

Author: Gennady F. (Blackbelt) Last updated: 2019/03/21 - 03:01

Getting Started with Intel® Optimization for PyTorch* on Second Generation Intel® Xeon® Scalable Processors

Accelerate deep learning PyTorch* code on second generation Intel® Xeon® Scalable processors with Intel® Deep Learning Boost.
Author: Nathan Greeneltch (Intel) Last updated: 2019/10/15 - 16:50