Article

Introducing Batch GEMM Operations

The general matrix-matrix multiplication (GEMM) is a fundamental operation in most scientific, engineering, and data applications, and there is constant demand to make it run faster; a minimal batched-call sketch follows this entry.

Authored by Fiona Z. (Intel). Last updated on 05/30/2018 - 07:00.
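
A minimal sketch of a batched call, assuming the cblas_dgemm_batch interface of the Intel MKL CBLAS layer; the single group of three 4x4 products, the row-major layout, and the constant input values are illustrative placeholders rather than details taken from the article. Grouping identically shaped multiplications lets the library schedule the whole batch at once instead of paying per-call overhead.

    #include <cstdio>
    #include <vector>
    #include "mkl.h"

    int main() {
        // One group of 3 independent C = alpha*A*B + beta*C products, all 4x4.
        const MKL_INT group_count = 1;
        const MKL_INT m = 4, n = 4, k = 4;
        const MKL_INT group_sizes[1] = {3};
        const CBLAS_TRANSPOSE transA[1] = {CblasNoTrans}, transB[1] = {CblasNoTrans};
        const MKL_INT m_arr[1] = {m}, n_arr[1] = {n}, k_arr[1] = {k};
        const MKL_INT lda[1] = {k}, ldb[1] = {n}, ldc[1] = {n};
        const double alpha[1] = {1.0}, beta[1] = {0.0};

        // Storage for the three matrix triples, plus per-operation pointer arrays.
        std::vector<double> Adata(3 * m * k, 1.0), Bdata(3 * k * n, 2.0), Cdata(3 * m * n, 0.0);
        const double* A[3];
        const double* B[3];
        double* C[3];
        for (int i = 0; i < 3; ++i) {
            A[i] = &Adata[i * m * k];
            B[i] = &Bdata[i * k * n];
            C[i] = &Cdata[i * m * n];
        }

        // A single call performs every multiplication in the batch.
        cblas_dgemm_batch(CblasRowMajor, transA, transB, m_arr, n_arr, k_arr,
                          alpha, A, lda, B, ldb, beta, C, ldc,
                          group_count, group_sizes);

        std::printf("C[0][0] = %f\n", C[0][0]);  // expect 8.0 for these inputs
        return 0;
    }
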
Article

Using Intel® MKL and Intel® TBB in the same application

Intel MKL 11.3 introduced support for Intel TBB; a minimal sketch of using both in one program follows this entry.

Authored by Gennady F. (Blackbelt). Last updated on 08/01/2019 - 09:22.
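
A minimal sketch of the composition, assuming the application is linked against MKL's TBB threading layer (libmkl_tbb_thread together with the TBB runtime) so that MKL's internal threading and the application's tbb::parallel_for share one scheduler; the matrix size, the row-sum workload, and the use of cblas_dasum are illustrative choices, not taken from the article.

    #include <cstdio>
    #include <vector>
    #include "mkl.h"
    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range.h"

    int main() {
        // Large GEMM: with the TBB threading layer, MKL parallelizes this call
        // internally using the same TBB scheduler as the rest of the program.
        const MKL_INT n = 512;
        std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A.data(), n, B.data(), n, 0.0, C.data(), n);

        // Application-level TBB parallelism calling small MKL kernels per task;
        // both levels draw threads from one TBB pool, so they compose without
        // oversubscribing the machine.
        std::vector<double> sums(n, 0.0);
        tbb::parallel_for(tbb::blocked_range<MKL_INT>(0, n),
            [&](const tbb::blocked_range<MKL_INT>& r) {
                for (MKL_INT i = r.begin(); i != r.end(); ++i)
                    sums[i] = cblas_dasum(n, &C[i * n], 1);  // row sums of C
            });

        std::printf("sums[0] = %f\n", sums[0]);  // expect 512*512 = 262144 here
        return 0;
    }
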
Blog post

Big Datasets from Small Experiments

Authored by Andrey Vladimirov. Last updated on 07/04/2019 - 18:46.
Blog post

Brain Development Simulation, 300x Faster

Authored by Andrey Vladimirov. Last updated on 07/04/2019 - 17:45.
Article

Intel® Math Kernel Library Improved Small Matrix Performance Using Just-in-Time (JIT) Code Generation for Matrix Multiplication (GEMM)

The most commonly used and performance-critical Intel® Math Kernel Library (Intel® MKL) functions are the general matrix multiply (GEMM) functions; a short sketch of the JIT call sequence follows this entry.

Authored by Gennady F. (Blackbelt). Last updated on 03/21/2019 - 03:01.
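
A minimal sketch of the JIT path, assuming the mkl_jit_create_dgemm / mkl_jit_get_dgemm_ptr / mkl_jit_destroy entry points introduced with this feature in Intel MKL 2019; the 4x4 shape and constant inputs are placeholders. The generated kernel is specialized for one fixed combination of layout, transpose options, sizes, and scalars, and is intended to be created once and then called many times on small matrices.

    #include <cstdio>
    #include "mkl.h"

    int main() {
        // Small, fixed-size problem: C(4x4) = 1.0 * A(4x4) * B(4x4) + 0.0 * C.
        const MKL_INT m = 4, n = 4, k = 4;
        double A[16], B[16], C[16] = {0.0};
        for (int i = 0; i < 16; ++i) { A[i] = 1.0; B[i] = 2.0; }

        // Generate a kernel specialized for this exact shape, then reuse it.
        void* jitter = nullptr;
        mkl_jit_status_t status = mkl_jit_create_dgemm(
            &jitter, MKL_COL_MAJOR, MKL_NOTRANS, MKL_NOTRANS,
            m, n, k, 1.0, /*lda=*/m, /*ldb=*/k, 0.0, /*ldc=*/m);
        if (status == MKL_JIT_ERROR) {
            std::printf("JIT kernel creation failed\n");
            return 1;
        }

        dgemm_jit_kernel_t dgemm_kernel = mkl_jit_get_dgemm_ptr(jitter);
        dgemm_kernel(jitter, A, B, C);     // call the generated kernel
        std::printf("C[0] = %f\n", C[0]);  // expect 8.0 for these inputs

        mkl_jit_destroy(jitter);           // release the generated code
        return 0;
    }
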
Article

Introduction to the Intel® MKL Extended Eigensolver

Authored by Zhang, Zhang (Intel). Last updated on 10/15/2019 - 16:50.
Article

Parallel Direct Sparse Solver for Clusters

Product Overview

Authored by Alexander Kalinkin (Intel). Last updated on 10/15/2019 - 16:50.