Intel® Math Kernel Library

Fastest and most used math library for Intel and compatible processors**

  • Vectorized and threaded for highest performance on all Intel and compatible processors
  • De facto standard APIs for simple code integration
  • Compatible with all C, C++ and Fortran compilers
  • Royalty-free, per developer licensing for low cost deployment

From $499
Buy Now

Or Download a Free 30-Day Evaluation Version

Performance: Ready to Use

Intel® Math Kernel Library (Intel® MKL) includes a wealth of routines to accelerate application performance and reduce development time. Today’s processors have increasing core counts, wider vector units, and more varied architectures. The easiest way to take advantage of all that processing power is to use a carefully optimized math library designed to harness that potential. Even the best compiler can’t compete with the level of performance possible from a hand-optimized library.

Because Intel has done the engineering on these ready-to-use, royalty-free functions, you’ll not only have more time to develop new features for your application, but in the long run you’ll also save development, debug and maintenance time while knowing that the code you write today will run optimally on future generations of Intel processors.

Intel® MKL includes highly vectorized and threaded Linear Algebra, Fast Fourier Transforms (FFT), Vector Math and Statistics functions. Through a single C or Fortran API call, these functions automatically scale across previous, current and future processor architectures by selecting the best code path for each.


Intel® MKL delivers industry-leading performance on Monte Carlo and other math-intensive routines


Quotes

“I’m a C++ and Fortran developer and have high praise for the Intel® Math Kernel Library. One nice feature I’d like to stress is the bitwise reproducibility of MKL which helps me get the assurance I need that I’m getting the same floating point results from run to run."
Franz Bernasek
CEO and Senior Developer, MSTC Modern Software Technology

“Intel MKL is indispensable for any high-performance computer user on x86 platforms.”
Prof. Jack Dongarra,
Innovative Computing Lab,
University of Tennessee, Knoxville

Comprehensive Math Functionality – Covers Range of Application Needs

Intel® MKL contains a wealth of threaded and vectorized complex math functions to accelerate a wide variety of software applications. Why write these functions yourself when Intel has already done the work for you?

Major functional categories include Linear Algebra, Fast Fourier Transforms (FFT), Vector Math and Statistics. Cluster-based versions of LAPACK and FFT are also included to support MPI-based distributed memory computing.

Standard APIs – For Immediate Performance Results


Wherever available, Intel® MKL uses de facto industry standard APIs so that minimal code changes are required to switch from another library. This makes it quick and easy to improve your application performance through simple function substitutions or relinking.

Simply substituting Intel® MKL’s LAPACK (Linear Algebra PACKage), for example, can yield a 500% or higher performance improvement (see the Linear Algebra benchmark charts below).

In addition to the industry-standard BLAS and LAPACK linear algebra APIs, Intel® MKL also supports MIT’s FFTW C interface for Fast Fourier Transforms.
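
For illustration, here is a minimal sketch of FFTW3-style code that Intel® MKL can service directly when the application is linked against MKL’s FFTW3 wrapper interface (data initialization and error checks omitted):

    /* Sketch: standard FFTW3 calls, executed by Intel MKL's FFT when linked
       against the MKL FFTW3 interface wrappers. */
    #include <fftw3.h>

    void forward_fft_1d(int n)
    {
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

        /* ... fill 'in' with data ... */

        fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_execute(p);            /* the transform runs on MKL's FFT */

        fftw_destroy_plan(p);
        fftw_free(in);
        fftw_free(out);
    }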

Highest Performance and Scalability across Past, Present & Future Processors – Easily and Automatically

Behind a single C or Fortran API, Intel® MKL includes multiple code paths, each optimized for specific generations of Intel and compatible processors. With no code branching required of application developers, Intel® MKL selects the best code path for maximum performance.

Even before future processors are released, new code paths are added under these same APIs. Developers just link to the newest version of Intel® MKL and their applications are ready to take full advantage of the newest processor architectures.

In the case of the Intel® Many Integrated Core Architecture (Intel® MIC Architecture), in addition to full native optimization support, Intel® MKL can also automatically determine the best load balancing between the host CPU and the Intel® Xeon® Phi™ coprocessor.
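
For illustration only, a minimal sketch of how Automatic Offload is typically turned on, assuming the mkl_mic_enable() service function and the MKL_MIC_ENABLE environment variable described in the MKL documentation for this release:

    /* Sketch: request MKL Automatic Offload to an Intel Xeon Phi coprocessor.
       Equivalently, set the environment variable MKL_MIC_ENABLE=1 and run the
       unmodified application. */
    #include <mkl.h>

    int enable_automatic_offload(void)
    {
        /* Subsequent calls to supported functions (e.g., large DGEMMs) are
           load-balanced between the host CPU and any coprocessors present. */
        return mkl_mic_enable();    /* returns 0 on success */
    }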

Flexibility to Meet Developer Requirements

Developers have many requirements to meet. Sometimes these requirements conflict and need to be balanced. Need consistent floating point results with the best application performance possible? Want faster vector math performance and don’t need maximum accuracy? Intel® MKL gives you control over the necessary tradeoffs.
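
As a minimal sketch of both controls, assuming the Conditional Numerical Reproducibility and vector math mode APIs documented in the reference manual:

    /* Sketch: trading performance against reproducibility and accuracy. */
    #include <mkl.h>

    void configure_tradeoffs(void)
    {
        /* Conditional Numerical Reproducibility: pin MKL to one code path so
           floating point results are identical from run to run, at some cost
           in peak performance. */
        mkl_cbwr_set(MKL_CBWR_AVX);

        /* Vector Math: select the lower-accuracy mode (VML_LA) when maximum
           precision is not required, in exchange for faster vector math. */
        vmlSetMode(VML_LA);
    }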

Intel® MKL is also compatible with your choice of compilers, languages, operating systems, linking and threading models. One library solution across multiple environments means only one library to learn and manage.

Features and Benefits
Conditional Numerical Reproducibility

Overcome the inherent non-associativity of floating-point arithmetic with the conditional numerical reproducibility support in Intel MKL. New in this release is the ability to achieve reproducibility without memory alignment.

New and improved optimizations for Intel® Core™ processors based on the Haswell and Ivy Bridge microarchitectures, future Broadwell processors, and Intel® Xeon Phi™ coprocessors

Intel MKL is optimized for the latest and upcoming processor architectures to deliver the best performance in the industry. For example, new optimizations for the fused multiply-add (FMA) instruction set introduced in Haswell-based Intel® Core™ processors deliver up to 2x performance improvement for floating-point calculations.

Automatic offload and compute load balancing between Intel Xeon processors and Intel Xeon Phi coprocessors – Now for Windows*

For selected linear algebra functions, Intel MKL can automatically determine the best way to utilize a system containing one or more Intel Xeon Phi coprocessors. The developer simply calls the MKL function and it will take advantage of the coprocessor if present on the system. New functions added for this release plus Windows OS support.

Extended Eigensolver Routines based on the FEAST algorithm

New sparse matrix Eigensolver routines handle larger problem sizes and use less memory. API-compatibility with the open source FEAST Eigenvalue Solver makes it easy to switch to the highly optimized Intel MKL implementation.

Linear Algebra

Intel® MKL BLAS provides optimized vector-vector (Level 1), matrix-vector (Level 2) and matrix-matrix (Level 3) operations for single- and double-precision real and complex types. Level 1 BLAS routines operate on individual vectors, e.g., to compute a scalar product, a norm, or the sum of vectors. Level 2 BLAS routines provide matrix-vector products, rank-1 and rank-2 updates of a matrix, and triangular system solvers. Level 3 BLAS routines include matrix-matrix products, rank-k matrix updates, and triangular solvers with multiple right-hand sides.
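
For illustration, a minimal C sketch of a double-precision matrix-matrix product through the standard CBLAS interface:

    /* Sketch: C = alpha*A*B + beta*C using the Level 3 BLAS routine DGEMM. */
    #include <mkl.h>

    void multiply(int m, int n, int k,
                  const double *A, const double *B, double *C)
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k,
                    1.0, A, k,      /* alpha, A, lda */
                         B, n,      /* B, ldb        */
                    0.0, C, n);     /* beta, C, ldc  */
    }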

Intel® MKL LAPACK provides extremely well-tuned LU, Cholesky, and QR factorization and driver routines that can be used to solve linear systems of equations. Eigenvalue and least-squares solvers are also included, as are the latest LAPACK 3.4.1 interfaces and enhancements.
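
For example, solving a dense linear system Ax = b through the standard LAPACKE C interface is a single call; a minimal sketch:

    /* Sketch: solve A*x = b (LU factorization plus solve) via LAPACKE_dgesv.
       A is n x n in row-major order; b holds the right-hand side on input and
       the solution on output; ipiv must have room for n pivot indices. */
    #include <mkl_lapacke.h>

    lapack_int solve(lapack_int n, double *A, double *b, lapack_int *ipiv)
    {
        return LAPACKE_dgesv(LAPACK_ROW_MAJOR, n, 1 /* nrhs */,
                             A, n, ipiv, b, 1 /* ldb */);
    }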

If your application already relies on the BLAS or LAPACK functionality, simply re-link with Intel® MKL to get better performance on Intel and compatible architectures.

Fast Fourier Transforms

Intel® MKL FFTs include many optimizations and should provide significant performance gains over other libraries for medium and large transform sizes. The library supports a broad variety of FFTs, from single and double precision 1D to multi-dimensional, complex-to-complex, real-to-complex, and real-to-real transforms of arbitrary length. Support for both FFTW* interfaces simplifies the porting of your FFTW-based applications.
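
A minimal sketch of the native DFTI interface (1D, double-precision, complex-to-complex, in-place; status checks omitted):

    /* Sketch: in-place 1D complex FFT with MKL's native DFTI interface. */
    #include <mkl_dfti.h>

    void fft_inplace(MKL_Complex16 *x, MKL_LONG n)
    {
        DFTI_DESCRIPTOR_HANDLE handle = NULL;

        DftiCreateDescriptor(&handle, DFTI_DOUBLE, DFTI_COMPLEX, 1, n);
        DftiCommitDescriptor(handle);   /* selects the optimized code path */
        DftiComputeForward(handle, x);  /* transforms x in place */
        DftiFreeDescriptor(&handle);
    }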

Vector Math

Intel® MKL provides optimized vector implementations of computationally intensive core mathematical operations and functions for single and double precision real and complex types. The basic vector arithmetic operations include element-by-element summation, subtraction, multiplication, division, and conjugation as well as rounding operations such as floor, ceil, and round to the nearest integer. Additional functions include power, square root, inverse, logarithm, trigonometric, hyperbolic, (inverse) error and cumulative normal distribution, and pack/unpack. Enhanced capabilities include accuracy, denormalized number handling, and error mode controls, allowing users to customize the behavior to meet their individual needs.
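
For example, a vectorized element-wise exponential over an array is a single call; a minimal sketch:

    /* Sketch: y[i] = exp(a[i]) for i = 0..n-1 using the Vector Math library. */
    #include <mkl_vml.h>

    void vector_exp(MKL_INT n, const double *a, double *y)
    {
        vdExp(n, a, y);
    }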

Statistics

Intel® MKL includes random number generators and probability distributions that can deliver significant application performance gains. These functions give users the ability to pair random number generators such as Mersenne Twister and Niederreiter with a variety of probability distributions, including uniform, Gaussian, and exponential.
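
Generating Gaussian random numbers with the Mersenne Twister basic generator, for instance, takes only a few calls; a minimal sketch:

    /* Sketch: fill r[0..n-1] with standard-normal numbers from an MT19937 stream. */
    #include <mkl_vsl.h>

    int gaussian(MKL_INT n, double *r, MKL_UINT seed)
    {
        VSLStreamStatePtr stream;

        vslNewStream(&stream, VSL_BRNG_MT19937, seed);
        vdRngGaussian(VSL_RNG_METHOD_GAUSSIAN_ICDF, stream, n, r,
                      0.0 /* mean */, 1.0 /* sigma */);
        return vslDeleteStream(&stream);
    }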

Intel® MKL also provides computationally intensive core building blocks for statistical analysis, both in-core and out-of-core. These enable users to compute basic statistics, estimate dependencies, detect outliers in data, and replace missing values. These features can be used to speed up applications in computational finance, life sciences, engineering/simulations, databases, and other areas.

Data Fitting

Intel® MKL includes a rich set of spline functions for 1-dimensional interpolation. These are useful in a variety of application domains, including data analytics (e.g., histograms), geometric modeling, and surface approximation. Supported splines include linear, quadratic, cubic, look-up, stepwise constant, and user-defined.
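
As a rough sketch only (construction step shown, assuming the Data Fitting task API names dfdNewTask1D, dfdEditPPSpline1D, and dfdConstruct1D; consult the reference manual for exact parameters), building a natural cubic spline might look like:

    /* Sketch: construct a natural cubic spline over nx breakpoints x[] with
       values y[]; scoeff must hold 4*(nx-1) doubles. Interpolation at arbitrary
       sites is then performed with dfdInterpolate1D (not shown). */
    #include <mkl_df.h>

    int build_spline(MKL_INT nx, const double *x, const double *y, double *scoeff)
    {
        DFTaskPtr task;

        dfdNewTask1D(&task, nx, x, DF_NO_HINT, 1 /* ny */, y, DF_NO_HINT);
        dfdEditPPSpline1D(task, DF_PP_CUBIC, DF_PP_NATURAL, DF_BC_FREE_END,
                          NULL, DF_NO_IC, NULL, scoeff, DF_NO_HINT);
        dfdConstruct1D(task, DF_PP_SPLINE, DF_METHOD_STD);
        return dfDeleteTask(&task);
    }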

What’s New

Conditional Bitwise Reproducible Results

When exact reproducible calculations are required, Intel® MKL gives developers control over the tradeoffs needed to maximize performance across a set of target processors while delivering identical floating point results.

Optimized for Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® microarchitecture code name Ivy Bridge and Intel® Many Integrated Core Architecture (Intel® MIC Architecture) processor architectures

Intel® MKL is optimized for the latest and upcoming processor architectures to deliver the best performance in the industry. Support for the new digital random number generator provides truly random seeding of statistical calculations.

Automatic offload and compute load balancing between Intel® Xeon® processor and Intel® Xeon Phi™ coprocessors

For Linear Algebra functionality, Intel® MKL can automatically determine the best way to utilize a system containing one or more Intel® Xeon Phi™ coprocessors. The developer simply calls an MKL function and doesn’t have to worry about the details.

Data Fitting functions

A rich set of splines are now included to optimize 1-dimensional interpolation calculations used in a variety of application domains


Performance Benchmark Charts

  • Linear Algebra: DGEMM, Matrix Multiply, Intel® Optimized SMP LINPACK, HPL LINPACK, LU Factorization, QR Factorization, Cholesky Factorization
  • FFT: 2D and 3D FFTs on Intel® Xeon® and Intel® Core™ processors, Batch 1D FFT, Cluster FFT Performance, Cluster FFT Scalability
  • Sparse BLAS and Sparse Solver: DCSRGEMV and DCSRMM, PARDISO Sparse Solver
  • Data Fitting: Natural cubic spline construction and interpolation
  • Random Number Generators: MCG31m1
  • Vector Math: VML exp()
  • Application Benchmarks: Monte Carlo option pricing, Black-Scholes

Videos to help you get started.

Register for future Webinars


Previously recorded Webinars:

  • Powered by MKL: Accelerating NumPy and SciPy Performance with Intel® MKL (Python)
  • Get Ready for Intel® Math Kernel Library on Intel® Xeon Phi™ Coprocessor
  • Beginning Intel® Xeon Phi™ Coprocessor Workshop: Advanced Offload Topics
  • Accelerating financial services applications using Intel® Parallel Studio XE with the Intel® Xeon Phi™ coprocessor

More Tech Articles

Intel® Cluster Tools Open Source Downloads
By Gergana Slavova (Intel), posted 03/06/2014
This article makes available third-party libraries and sources that were used in the creation of Intel® Software Development Products. Intel provides this software pursuant to their applicable licenses. Products and Versions: Intel® Trace Analyzer and Collector for Linux* gcc-3.2.3-42.zip (whi...
Missing mpivars.sh error
By Gergana Slavova (Intel), posted 09/24/2013
Problem: I have a system that has both the Intel® Compilers and the Intel® MPI Library. I'm trying to run an Intel MPI job with mpirun but I'm hitting the following errors: /opt/intel/composer_xe_2013_sp1/mpirt/bin/intel64/mpirun: line 96: /opt/intel/composer_xe_2013_sp1/mpirt/bin/intel64/mpivar...
Using Multiple DAPL* Providers with the Intel® MPI Library
By James Tullos (Intel), posted 09/19/2013
Introduction If your MPI program sends messages of drastically different sizes (for example, some 16 byte messages, and some 4 megabyte messages), you want optimum performance at all message sizes.  This cannot easily be obtained with a single DAPL* provider.  This is due to latency being a major...
Using Regular Expressions with the Intel® MPI Library Automatic Tuner
By James Tullos (Intel), posted 07/10/2013
The Intel® MPI Library includes an Automatic Tuner program, called mpitune.  You can use mpitune to find optimal settings for both a cluster and for a specific application.  In order to tune a specific application (or to use a benchmark other than the default for a cluster-specific tuning), mpitu...


Supplemental Documentation

Intel® MKL 11.0 Release Notes
By admin, posted 07/10/2012
Release Notes for Intel MKL 11.0
Intel® MKL in depth training
By admin, posted 06/25/2012
Introduction and functionalities of Intel MKL
Intel Guide for Developing Multithreaded Applications
By admin, posted 01/16/2012
The Intel® Guide for Developing Multithreaded Applications covers topics ranging from general advice applicable to any multithreading method to usage guidelines for Intel® software products to API-specific issues.
Configuring Intel® MKL in Microsoft* Visual Studio*
By Naveen Gv (Intel), posted 11/08/2011
Intel MKL in Microsoft Visual Studio

You can reply to any of the forum topics below by clicking on the title. Please do not include private information such as your email address or product serial number in your posts. If you need to share private information with an Intel employee, they can start a private thread for you.



Intel® Math Kernel Library 11.1 Update 2 is now available
By Sridevi (Intel)
Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance. The Intel MKL 11.1 Update 2 packages are now ready for download. Intel MKL is available as a stand-alone product and as a part of the Intel® Parallel Studio XE 2013 SP1, Intel® C++ Studio XE 2013 SP1, Intel® Composer XE 2013 SP1, Intel® Fortran Composer XE 2013 SP1, and Intel® C++ Composer XE 2013 SP1. Please visit the Intel® Software Evaluation Center to evaluate this product. Intel® MKL 11.1 Update 2 Bug fixes What's New in Intel® MKL 11.1 Update 2: Release Notes CODE TIPS Important Notices: Intel MKL 11.2 Deprecations Look into the details of the new feature Sparse Matrix Vector Multiply Format Prototype Package (Intel® MKL SpMV Format Prototype Package) on the Intel® Xeon Phi™ coprocessor Look into Intel MKL New Install Option Check out Intel MKL Sup...
New Intel Premier Support
By Sridevi (Intel)
The New Intel(R) Premier Support (IPS) is coming August 19th We are excited to announce the launch of a new version of Intel® Premier Support that replaces the existing Intel Premier Support web site and contains many new features and an updated user interface as highlighted below.  The new tool will allow improved collaboration both internally and externally, providing more efficient issue resolution and a better end user experience for customers like you.    Preparation Activities Open Issues Transferred - All current, open issues will be transferred over to the new Intel® Premier Support system prior to the launch. Closed issues will be transferred by the end of Q4 2013.  In the meantime, you will be able to access closed issues through the old Intel Premier Support web site with a new URL that will be published here. System Maintenance – We will require approximately 4 days of system maintenance in order to facilitate the launch.  During this time, Intel Premier Support will be ...
Can I use MKL from Java,from C# or from Python?
By Ying H (Intel)
Can I use MKL from Java, from C#, from Python, or from ......? Many developers ask this question when they first learn Intel MKL. The answer is YES. The next question is how. Here is a brief introduction: Intel MKL is composed of a set of libraries that support the C/C++ and Fortran languages. To use MKL from other languages such as Java, C#, or Python, the common method is to build a custom DLL based on the set of MKL libraries. (A build tool is available in the tools/builder sub-directory of the Intel MKL package.) But since Intel MKL 10.3, we introduce a new dynamic library, the Single Dynamic Library interface (SDL interface) mkl_rt.so, which removes the need to create your own custom library. For example, # Load the shared library mkl = cdll.LoadLibrary("./libmkl_rt.so") # prior to version 10.3, you may use the created .so as below # mkl = dll.LoadLibrary("./libmy_mkl_py.so") Moreover, the SDL library allows us to link MKL with a single library and dynamically sel...
Linking Intel MKL is easy
By TODD R. (Intel)
icl prog.c /Qmkl or, ifort prog.f /Qmkl That's the easiest way if you are using one of the latest Intel compilers on Windows*. There are similar compiler options for Linux* and Mac OS* X as well. Another easy way is to use our new dynamic linking model which requires a link to just one library. Add mkl_rt.lib to your Windows* link line or add -lmkl_rt to your Linux* or Mac OS* X link line. These new options will work for the cases used by most users. Those who use less common interfaces or threading models may still want to visit the Link Line Advisor to find the right set of libraries.
Documentation on using MKL With VxWorks?
By afishintel
Hello, I've seen articles on the web site which state that the Linux version of the MKL can be used with VxWorks. Is there a document that describes how to do this? Thanks!
MKL POISSON LIBRARY ERROR
By Apiwat W.
I am numerically solving the Poisson equation using the Poisson Solver Routines https://software.intel.com/en-us/node/471042 . The equation has both Neumann and periodic boundary conditions. These boundary conditions go into the parameter 'BCtype.' In my code, the library seems not to allow me to solve the Poisson equation with a periodic boundary condition. After I run my code, I receive this error message: "MKL POISSON LIBRARY ERROR: Parameter ipar[3]=-197 is out of admissible range {0,...,15} Probably, it was altered by mistake outside of the routine, or some characters in the parameter 'BCtype' were out of admissible range {D,N} during initialization stage. Computations has been stopped The result may be wrong. " Is it possible to solve the Poisson equation with a periodic boundary condition? According to the above webpage, it is legitimate. But it fails in practice.
Number of licenses required for MKL?
By Adrienne S.
We're trying to determine how many MKL licenses are required for our system setup. We would only be using MKL through R, so after R is built with MKL, I don't think we would be directly calling MKL routines. R can create a shared library (libR.so) that links to the MKL .so files. We have three different systems, each with their own local build of R. Only one person compiles and installs R on the three different systems, and the resulting installations of R are used by ~20 people. My understanding is that we would need 1 single-user license for each system, since only one person would install R+MKL. However, one of these systems is actually a cluster with ~1000 nodes. Is 1 license sufficient for the cluster, assuming some of the 20 users are running multiple instances of R on the cluster nodes, if the R installation is built with MKL by one user? Finally, some R packages have components that need to be compiled when they're installed. Do we need additional licenses for each user ...
Math Kernel Library Link Line Advisor unable access
By oraclefans7
https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor is now inaccessible!


Choose a topic:

  • Can I redistribute the Intel Math Kernel Library with my application?
  • Yes. When you purchase Intel MKL, you receive rights to redistribute computational portions of Intel MKL with your application. The evaluation versions of Intel MKL do not include redistribution rights. The list of files that can be redistributed is provided in redist.txt included in the Intel MKL distribution with product license.

  • Are there royalty fees for using Intel MKL?
  • No. There is no per copy royalty fee. Check the Intel MKL end user license agreement (EULA) for more details.

  • What files am I allowed to redistribute?
  • In general, the redistributable files include the linkable files (.DLL and .LIB files for Windows*, .SO and .A files for Linux*). With your purchase of Intel MKL (and updates through the support service subscription), you receive the redist.txt file which outlines the list of files that can be redistributed. The evaluation versions of Intel MKL do not include redistribution rights. See EULA for all terms.

  • Is there a limit to the number of copies of my application that I can ship which include Intel MKL redistributables?
  • You may redistribute an unlimited number of copies of the files that are found in the directories defined in the Redistributables section of the EULA.

  • How many copies of Intel MKL do I need to secure for my project team or company?
  • The number of Intel MKL copies that you need is determined by the number of developers who are writing code, compiling, and testing using the Intel MKL API. For example, five developers in an organization working on building code with Intel MKL will require five Intel MKL licenses. View the EULA for complete details.

  • Do I need to get a license for each machine being used to develop and test applications using Intel MKL library?
  • The number of licenses for Intel MKL that you need is determined by the number of developers and build machines that may be in simultaneous use in your organization. These can be deployed on any number of machines on which the application is built and/or tested, as long as only the licensed number of copies is in use at any given time. For example, a development team of five developers using ten machines simultaneously for development and test activities with Intel MKL will need ten Intel MKL licenses. View the EULA for complete details.

  • Do I need to buy an Intel MKL license for each copy of our software that we sell?
  • No, there is no royalty fee for redistributing Intel MKL files with your software. By licensing Intel MKL for your developers, you have rights to distribute the Intel MKL files with your software for an unlimited number of copies. For more information, please refer to the EULA.

  • Where can I view the Intel MKL license agreement before making a decision to purchase the product?
  • The number of copies of Intel MKL that you need is determined by the number of developers who are writing code, compiling, and testing using the Intel MKL API, as well as the number of build machines involved in compiling and linking, which need the full Intel MKL development tools file set. See EULA for all terms.

Intel® Math Kernel Library 11.1

Getting Started?

Click the Learn tab for guides and links that will quickly get you started.

Get Help or Advice

Search Support Articles
Forums - The best place for timely answers from our technical experts and your peers. Use it even for bug reports.
Support - For secure, web-based, engineer-to-engineer support, visit our Intel® Premier Support web site. Intel Premier Support registration is required.
Download, Registration and Licensing Help - Specific help for download, registration, and licensing questions.

Resources

Release Notes - View Release Notes online!
Fixes List - View Compiler Fixes List

Documentation:
Reference Manual
Linux* | Windows* | OS X*
Documentation for other software products

**Source: Evans Data Software Developer surveys 2011-2013
