Intel® MPI Library

Making applications perform better on Intel® architecture-based clusters, with the flexibility of multiple fabrics

  • Performance Optimized MPI Library
  • Sustained Scalability – Low Latencies, Higher Bandwidth & Higher Process Counts
  • Full Hybrid Support for multicore & manycore systems

$499.00
Buy Now

Or Download a Free 30-Day Evaluation Version

Deliver Flexible, Efficient, and Scalable Cluster Messaging

Intel® MPI Library 4.1 focuses on making applications perform better on Intel® architecture-based clusters—implementing the high performance Message Passing Interface Version 2.2 specification on multiple fabrics. It enables you to quickly deliver maximum end user performance even if you change or upgrade to new interconnects, without requiring changes to the software or operating environment.

Use this high-performance MPI library to develop applications that can run on multiple cluster interconnects chosen by the user at runtime. Benefit from a free runtime environment kit for products developed with the Intel® MPI Library. Get excellent performance for enterprise, divisional, departmental, workgroup, and personal High Performance Computing.
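
To give a concrete sense of what the library builds and runs, here is a minimal MPI program in C. The file and executable names are illustrative; mpiicc and mpirun are the compiler driver and launcher shipped with the library.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

    mpiicc -o hello hello.c
    mpirun -n 4 ./hello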


Intel® MPI Library (Intel® MPI) provides reduced MPI latency, which can result in higher throughput.

Top Features

  • Scalability Up To 120K Processes
  • Industry Leading Latency Performance
  • Interconnect Independence & Flexible Runtime Fabric Selection


Quotes

“Fast and accurate state-of-the-art general-purpose CFD solvers are the focus at S & I Engineering Solutions Pvt. Ltd. Scalability and efficiency are key to us when it comes to our choice and use of MPI libraries. The Intel® MPI Library has enabled us to scale to over 10k cores with high efficiency and performance.”
Nikhil Vijay Shende, Director,
S & I Engineering Solutions, Pvt. Ltd.

Scalability

  • Scaling up to 120k Processes
  • Low overhead allows random access to portions of a trace, making it suitable for analyzing large amounts of performance data.
  • Thread safety allows you to trace multithreaded MPI applications for event-based tracing as well as non-MPI threaded applications.
  • Improved start scalability through the mpiexec.hydra process manager
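
For illustration, a minimal Hydra launch; the host file name and process count are hypothetical:

    mpiexec.hydra -f hosts.txt -n 1024 ./my_app   # hosts.txt lists one node name per line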

Industry Leading MPI Library

Performance

  • Low latency MPI implementation up to 6.5 times as fast as alternative MPI libraries
  • Deploy optimized shared memory dynamic connection mode for large SMP nodes
  • Increase performance with improved DAPL and OFA fabric support
  • Accelerate your applications using the enhanced tuning utility for MPI
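
As a sketch of the tuning workflow (exact options can vary by version): run the mpitune utility once to generate tuned settings, then apply them at launch with the -tune option; my_app and the rank count are illustrative.

    mpitune                        # generate tuned settings for this cluster
    mpirun -tune -n 64 ./my_app    # launch using the tuned settings
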
Feature / Benefit
Increased MPI Performance and Scalability

A new connection manager and auto-selection methods increase scalability over RDMA-based interconnects. Improved support for NUMA applications and advanced process pinning controls allow development and deployment to keep pace with the continued capacity growth of HPC systems.

Extended Scalability on Windows*

The highly scalable Hydra process manager is now available for Windows*-based clusters. Use mpiexec.hydra to enable low-latency RDMA devices through Microsoft’s Network Direct* interface.

Extended support for the Intel® Xeon Phi™ Coprocessor

Adds a native port of the Tag Matching Interface (TMI) over the QLogic* PSM fabric, and extends Checkpoint/Restart (BLCR*) support to the Intel® Xeon Phi™ coprocessor.

Latest Processor Support
Haswell, Ivy Bridge, Intel® Many Integrated Core Architecture

Intel consistently offers the first set of tools to take advantage of the latest performance enhancements in the newest Intel products, while preserving compatibility with older Intel and compatible processors. New support includes AVX2, TSX, FMA3, and AVX-512.

Scalability

Implementing the high performance MPI-2.2 specification on multiple fabrics, Intel® MPI Library 4.1 for Windows* and Linux* focuses on making applications perform better on IA-based clusters. Intel® MPI Library 4.1 enables you to quickly deliver maximum end-user performance, even if you change or upgrade to new interconnects, without requiring major modifications to the software or to the operating environment. Intel also provides a free runtime environment kit for products developed with the Intel® MPI Library.

Performance

An optimized shared memory path for multicore platforms allows higher communication throughput and lower latencies, and the native InfiniBand* interface (OFED verbs) also lowers latencies. Multi-rail capability provides higher bandwidth and increased interprocess communication rates, and Tag Matching Interface (TMI) support delivers higher performance on QLogic* PSM and Myricom* MX interconnects.

Intel® MPI Library 4.1 Supports Multiple Hardware Fabrics

Whether you need to run TCP sockets, shared memory, or one of many Remote Direct Memory Access (RDMA) based interconnects, including InfiniBand*, Intel® MPI Library 4.1 covers all your configurations by providing an accelerated, universal, multi-fabric layer for fast interconnects via the Direct Access Programming Library (DAPL*) or the OpenFabrics Alliance (OFA*) methodology. Develop MPI code independent of the fabric, knowing it will run efficiently on whatever fabric the user chooses at runtime.

Additionally, Intel® MPI Library 4.1 provides new levels of performance and flexibility for applications achieved through improved interconnect support for Myrinet* MX and QLogic* PSM interfaces, faster on-node messaging and an application tuning capability that adjusts to the cluster architecture and application structure.

Intel® MPI Library 4.1 establishes connections dynamically, and only when needed, which reduces the memory footprint. It also automatically chooses the fastest transport available. Memory requirements are further reduced by several methods, including a two-phase communication buffer enlargement capability that allocates only the memory space actually required.
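
As a sketch of runtime fabric selection, the choice can be made per run through the I_MPI_FABRICS environment variable; the application name and rank count are illustrative:

    export I_MPI_FABRICS=shm:dapl   # shared memory within a node, DAPL* between nodes
    mpirun -n 64 ./my_app
    export I_MPI_FABRICS=shm:tcp    # same binary, TCP sockets between nodes
    mpirun -n 64 ./my_app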

Purchase Options

Several suites are available combining the tools to build, verify and tune your application. The products covered in this product brief are highlighted in blue. Named-user or multi-user licenses along with volume, academic, and student discounts are available.



Technical Specifications

Feature / Benefit
Processor support

Validated for use with multiple generations of Intel® and compatible processors, including but not limited to: 2nd Generation Intel® Core™ processors, Intel® Core™2 processors, Intel® Core™ processors, Intel® Xeon® processors, and Intel® Xeon Phi™ coprocessors

Operating systems

Windows* and Linux*

Programming languages

Natively supports C, C++ and Fortran development

System requirements

Please refer to www.intel.com/software/products/systemrequirements/ for details on hardware and software requirements.

Support

A free Runtime Environment Kit is available to run applications that were developed using the Intel® MPI Library.

All product updates, Intel® Premier Support services, and Intel® Support Forums are included for one year. Intel Premier Support gives you confidential support, technical notes, application notes, and the latest documentation. Join the Intel® Support Forums community to learn, contribute, or just browse! http://software.intel.com/en-us/forums

Try Tools from Intel

Download a free 30-day evaluation from http://intel.ly/sw-tools-eval and click the ‘Cluster Tools’ link.

Videos to help you get started.

Register for future Webinars


Previously recorded Webinars:

  • Profiling MPI Communications - Tips and Techniques for High Performance
  • Increase Cluster MPI Application Performance with a "MPI Tune" Up
  • MPI on Intel® Xeon Phi™ coprocessor

More Tech Articles

Intel® Cluster Tools Open Source Downloads
By Gergana Slavova (Intel), posted 03/06/2014
This article makes available third-party libraries and sources that were used in the creation of Intel® Software Development Products. Intel provides this software pursuant to their applicable licenses. Products and Versions: Intel® Trace Analyzer and Collector for Linux* gcc-3.2.3-42.zip (whi...
Missing mpivars.sh error
By Gergana Slavova (Intel), posted 09/24/2013
Problem: I have a system that has both the Intel® Compilers and the Intel® MPI Library.  I'm trying to run an Intel MPI job with mpirun but I'm hitting the following errors: /opt/intel/composer_xe_2013_sp1/mpirt/bin/intel64/mpirun:line 96: /opt/intel/composer_xe_2013_sp1/mpirt/bin/intel64/mpivar...
Using Multiple DAPL* Providers with the Intel® MPI Library
By James Tullos (Intel), posted 09/19/2013
Introduction If your MPI program sends messages of drastically different sizes (for example, some 16 byte messages, and some 4 megabyte messages), you want optimum performance at all message sizes.  This cannot easily be obtained with a single DAPL* provider.  This is due to latency being a major...
Using Regular Expressions with the Intel® MPI Library Automatic Tuner
By James Tullos (Intel), posted 07/10/2013
The Intel® MPI Library includes an Automatic Tuner program, called mpitune.  You can use mpitune to find optimal settings for both a cluster and for a specific application.  In order to tune a specific application (or to use a benchmark other than the default for a cluster-specific tuning), mpitu...


Supplemental Documentation

Using Multiple DAPL* Providers with the Intel® MPI Library
By James Tullos (Intel), posted 09/19/2013
Introduction If your MPI program sends messages of drastically different sizes (for example, some 16 byte messages, and some 4 megabyte messages), you want optimum performance at all message sizes.  This cannot easily be obtained with a single DAPL* provider.  This is due to latency being a major...
Intel® MPI Library 4.1 Update 1 Readme
By Gergana Slavova (Intel), posted 06/07/2013
The Intel® MPI Library for Linux* and Windows* is a high-performance interconnect-independent multi-fabric library implementation of the industry-standard Message Passing Interface, v2.2 (MPI-2.2) specification. This package is for MPI users who develop on and build for IA-32 and Intel® 64 archit...
Enabling Connectionless DAPL UD in the Intel® MPI Library
By Gergana Slavova (Intel), posted 05/07/2013
What is DAPL UD? Traditional InfiniBand* support involves MPI message transfer over the Reliable Connection (RC) protocol. While RC is long-standing and rich in functionality, it does have certain drawbacks: since it requires that each pair of processes setup a one-to-one connection at the start ...
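
As a sketch of the mechanism this article describes, DAPL UD is switched on through an environment variable; the application name and rank count are illustrative:

    export I_MPI_DAPL_UD=enable    # use the connectionless UD protocol instead of RC
    mpirun -n 1024 ./my_app
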
Controlling Process Placement with the Intel® MPI Library
By James Tullos (Intel), posted 04/01/2013
When running an MPI program, process placement is critical to maximum performance.  Many applications can be sufficiently controlled with a simple process placement scheme, while some will require a more complex approach.  The Intel® MPI Library offers multiple options for controlling process pla...
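
As a sketch of the simpler end of that spectrum, placement can be steered with the pinning environment variables; the core list and application name are illustrative:

    export I_MPI_PIN=1                     # ensure process pinning is enabled
    export I_MPI_PIN_PROCESSOR_LIST=0-7    # pin the ranks on each node to cores 0-7
    mpirun -n 8 ./my_app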


Forum Topics


Run Intel MPI without mpirun/mpiexec
By Jackey Y.
Hi, I am wondering does Intel MPI support a MPI run without mpirun/mpiexec in the command line? I know that in MPI-2 standard, it supports the “dynamic process” feature, i.e., dynamically generate/spawn processes from existing MPI process. What I am trying to do here is 1) Firstly, launch a singleton MPI process without mpirun/mpiexec in the command line; 2) Secondly, use MPI_Comm_spawn to spawn a set of process on the different host machines. I tried to do that, but it seems that the Intel MPI cannot find the host file. Because I did not use mpirun in the command line, I used environment variable I_MPI_HYDRA_HOST_FILE to set the host file. But, still it seems it cannot find the host file. Any idea? Here is my package info: Package ID: l_mpi_p_4.1.3.049 Package Contents: Intel(R) MPI Library for Linux* OS   Thanks,   Jackey
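
For readers following this thread, a minimal MPI_Comm_spawn sequence looks roughly like the sketch below; the worker binary name, host name, and process count are hypothetical:

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm intercomm;
        MPI_Info info;
        MPI_Init(&argc, &argv);
        MPI_Info_create(&info);
        MPI_Info_set(info, "host", "node1");   /* illustrative placement hint */
        /* spawn 4 copies of ./worker; intercomm connects parent and children */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, info, 0,
                       MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }
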
Difference between mpicc and mpiicc
By Fuli F.
I write a simple MPI program as follows:

    #include "mpi.h"
    #include <stdio.h>
    #include <math.h>

    void main(argc,argv)
    int argc;
    char *argv[];
    {
        int myid,numprocs;
        int namelen;
        char pro_name[MPI_MAX_PROCESSOR_NAME];
        MPI_Init(&argc,&argv);
        MPI_Comm_rank(MPI_COMM_WORLD,&myid);
        MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
        MPI_Get_processor_name(pro_name,&namelen);
        printf("Process %d of %d on %s\n",myid,numprocs,pro_name);
        MPI_Finalize();
    }

When I compile it with "mpicc -o xxx xxx.c" and run it with "mpirun -np 8 ./xxx", it rightly creates 8 processes. But when I compile it with "mpiicc -o xxx xxx.c" and run it with the same command as above, it only creates 1 process. I want to know what the difference is between mpicc and mpiicc. Is it caused by some fault during my installation? And how can I fix it? By the way, I installed Intel MPI and the Intel compiler by installing the Intel Cluster Studio (l_ics_2013.1.0...
What/where is DAPL provider libdaplomcm.so.2 ?
By Beaver6675
DAPL providers ucm, scm are frequently mentioned, but what is libdaplomcm.so.2? Could someone point me to a description of the use case for the DAPL provider libdaplomcm.so.2? I am currently using the Intel MPI Library 4.1 for Linux with Mellanox OFED 2.1; shm:dapl and shm:ofa both seem to work, but with shm:dapl I get warning messages about not being able to find libdaplomcm.so.2. Mellanox DAPL does have this file. This file does not appear in DAPL 2.0.41 either: http://www.openfabrics.org/downloads/dapl/ I found the file in MPSS 3.2; can I just drop this file into a Mellanox 2.1 /usr/lib64 installation?
IMPI dapl fabric error
By san
Hi, I'm trying to run HPL benchmark on an Ivybridge Xeon processor with 2 Xeon Phi 7120P MIC cards. I'm using offload xhpl binary from Intel Linpack. It throws following error $ bash runme_offload_intel64 This is a SAMPLE run script.  Change it to reflect the correct number of CPUs/threads, number of nodes, MPI processes per node, etc.. MPI_RANK_FOR_NODE=1 NODE=1, CORE=, MIC=1, SHARE= MPI_RANK_FOR_NODE=0 NODE=0, CORE=, MIC=0, SHARE= [1] MPI startup(): dapl fabric is not available and fallback fabric is not enabled [0] MPI startup(): dapl fabric is not available and fallback fabric is not enabled I checked the same errors on this forum and got to know that to unset I_MPI_DEVICES variable. This made the HPL to run. But performance is very low, just 50%. On the other node, with same hardware, HPL efficiency is 84%. Following is the short output of openibd status from both systems, which shows the difference. ON NODE with HPL 84%                                                 ON ...
Memory Leak detected by Inspector XE in MPI internal buffer
By burnesrr
I am interested in finding out if there is a way to configure Intel's MPI libraries to alter what the threshold is for the creation of internal buffers so I can verify the source of a memory leak detected by Inspector XE. Please refer to my post in Intel's Inspector XE forum, which includes a simple Fortran program that demonstrates the issue: http://software.intel.com/en-us/forums/topic/508656 It appears once an MPI operation is sending or receiving more than 64K of information an internal buffer may be created and Inspector is reporting a memory leak when that happens. I am hoping there is a way to configure the MPI libraries to alter the behavior of the creation and destruction of internal buffers so I can confirm the source of the reported memory leak. I am hoping someone here in the MPI forums has a suggestion of a way to do this. Even reducing the size of the data transfer that triggers the generation of internal buffers would be helpful. I am reluctant to just write this off ...
How to free a MPI communicator created w MPI_Comm_spawn
By Florentino S.
Hi, I'm trying to free a communicator created with this call:

    int MPI_Comm_spawn(char *command, char *argv[], int maxprocs,
                       MPI_Info info, int root, MPI_Comm comm,
                       MPI_Comm *intercomm, int array_of_errcodes[])

The communicator created is intercomm. As far as I know, according to the standard, MPI_Comm_free is a collective operation, although they suggest implementing it locally; however on Intel MPI it's a collective operation (according to my own experience and to http://software.intel.com/sites/products/documentation/hpc/ics/itac/81/I... ). However I have a problem here: the father/spawner process(es) will have a communicator which contains their sons, and the spawned processes/sons will have the communicator which contains the masters. How can I free the communicator of the master with this layout? I know that I can create a new communicator with both sons and masters and free that one, but then that won't be the same communicator that I want to free. Thanks beforehand,
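
As a sketch of one common pattern for releasing a spawn intercommunicator on both sides (whether this satisfies the collective semantics asked about above is exactly the question in this thread):

    /* parent side, after MPI_Comm_spawn has returned intercomm */
    MPI_Comm_disconnect(&intercomm);

    /* child side: recover and release the matching intercommunicator */
    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);
    MPI_Comm_disconnect(&parent);
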
MPI doesn't work (Fatal error in MPI_Init)
By Ivan I.
Hi, I have the following problem: I have two nodes and this config file:

    -n 1 -host node0 myapp
    -n 1 -host node1 myapp

In this way it works fine. However, if I change the order of the lines in the config to:

    -n 1 -host node1 myapp
    -n 1 -host node0 myapp

it fails with the error:

    Fatal error in MPI_Init: Other MPI error, error stack:
    MPIR_Init_thread(658)................:
    MPID_Init(195).......................: channel initialization failed
    MPIDI_CH3_Init(104)..................:
    MPID_nem_tcp_post_init(344)..........:
    MPID_nem_newtcp_module_connpoll(3102):
    gen_cnting_fail_handler(1816)........: connect failed - The semaphore timeout period has expired. (errno 121)
    job aborted:
    rank: node: exit code[: error message]
    0: node1: 1: process 0 exited without calling finalize
    1: node0: 123

What can be the reason for this? Any ideas?
Segfault in DAPL with Mellanox OFED 2.1
By Ben
Hi, We're having a problem with the Intel MPI library crashing since we've updated to the latest Mellanox OFED 2.1. For example, the test program supplied with Intel MPI (test/test.f90) crashes with a segfault. I compiled it using

    mpif90 -debug all /apps/intel-mpi/4.1.1.036/test/test.f90 -o test.x

and managed to get a back trace from the crash using idbc:

    #0 0x00007fcb9418f078 in ?? () from /apps/intel-mpi/4.1.1.036/intel64/lib/libmpi.so.4
    #1 0x00007fcb94190bf7 in ?? () from /apps/intel-mpi/4.1.1.036/intel64/lib/libmpi.so.4
    #2 0x00007fcb94191543 in MPID_nem_dapl_rc_init_20 () from /apps/intel-mpi/4.1.1.036/intel64/lib/libmpi.so.4
    #3 0x00007fcb941de883 in MPID_nem_dapl_init () from /apps/intel-mpi/4.1.1.036/intel64/lib/libmpi.so.4
    #4 0x00007fcb94276fc6 in ?? () from /apps/intel-mpi/4.1.1.036/intel64/lib/libmpi.so.4
    #5 0x00007fcb9427547c in MPID_nem_init_ckpt () from /apps/intel-mpi/4.1.1.036/intel64/lib/libmpi.so.4
    #6 0x00007fcb94276ca7 in MPID_nem_init () from /apps/intel-mpi/...


Licensing

  • What kind of licenses are available for the Intel® MPI Library 4.1?
  • The Runtime license includes everything you need to run Intel MPI-based applications. The license is free and permanent. The Developer license includes everything needed to build and run applications. It is fee-based and permanent. It allows free redistribution of the components needed to run Intel MPI-based applications.

  • When is a Developer license required for the Intel® MPI Library 4.1?
  • The two kits (developer and runtime) can co-exist on a machine and it is fine for customers of Intel MPI-based applications to relink the application to include user subroutines. If the customer is actually writing MPI code (calling MPI_* functions directly), then a Developer license would be needed.

  • I am an ISV and am planning to ship my product with Intel MPI Library. Do my customers have to buy the Intel MPI Library Development Kit in order to use my software?
  • No. There are currently three models for ISVs who want to ship with the Intel MPI Library.
    1) An ISV can redistribute the runtime components of the Intel MPI Library available from the development kit (see the redist.txt file in the Intel MPI Library installation directory for list of redistributable files).
    2) If a customer would rather install the Intel MPI Library as a system component, the Runtime Environment Kit can be downloaded free of charge from the Intel MPI Library product page.
    3) The Intel MPI Runtime Library can be pre-installed by the vendor and shipped with the application.


Compatibility

  • Does the Intel® MPI Library 4.1 support 32-bit applications on 64-bit operating systems?
  • Yes. The Intel® MPI Library 4.1 supports 32-bit apps on 64-bit operating systems on Intel® 64.

  • Can the Intel® MPI Library 4.1 handle a mixed 32/64-bit job?
  • No. The Intel MPI Library does not support these types of heterogeneous configurations. All ranks of the job must be either IA-32, Intel® 64, or Intel® Many Integrated Core (MIC) Architecture based.

  • Is there a Microsoft* Windows* version of the Intel® MPI Library 4.1?
  • Yes. The Intel MPI Library 4.1 for Windows is available now.

  • Does the Intel MPI Library run on AMD platforms?
  • Yes. The Intel® MPI Library 4.1 is known to run on AMD platforms, and we have had no issue reports specific to AMD platforms so far.

  • Does the Intel® MPI Library 4.1 support parallel I/O calls?
  • Yes. The parallel file I/O part of the MPI-2 standard is fully implemented by the Intel® MPI Library 4.1. Some of the currently supported file systems include Unix File System (UFS), Network File System (NFS), Parallel Virtual File System (PVFS2), and Lustre*.  For a complete list, check the Release Notes.
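    To illustrate the parallel I/O support, a minimal MPI-IO sketch in which each rank writes one integer to a shared file; the file name is illustrative:

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_File fh;
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        value = rank;
        /* all ranks open the same file collectively */
        MPI_File_open(MPI_COMM_WORLD, "data.out",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        /* each rank writes its value at a rank-based offset */
        MPI_File_write_at(fh, (MPI_Offset)(rank * sizeof(int)), &value, 1,
                          MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }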

  • Does the Intel® MPI Library 4.1 support one-sided communication?
  • Yes. The Intel® MPI Library 4.1 supports both active target and passive target one-sided communication. The only exception is the passive target one-sided communication in case the target process does not call any MPI functions.
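    To illustrate active target one-sided communication, a minimal MPI_Put sketch using fence synchronization; run with at least two ranks:

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Win win;
        int rank, size, buf = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* every rank exposes one int as a window */
        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        MPI_Win_fence(0, win);                 /* open the access epoch */
        if (rank == 0 && size > 1) {
            int one = 1;
            MPI_Put(&one, 1, MPI_INT, 1, 0, 1, MPI_INT, win);  /* write into rank 1 */
        }
        MPI_Win_fence(0, win);                 /* close the epoch; transfer is complete */
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }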

  • Does the Intel® MPI Library 4.1 support heterogeneous clusters?
  • To a certain extent. The Intel® MPI Library 4.1 does not support clusters running different operating systems, but it does support an environment of mixed Intel processors and provides some default optimizations depending on the detected architecture.

  • What DAPL* version does the Intel® MPI Library 4.1 support?
  • The Intel® MPI Library 4.1 uses the Direct Access Programming Library (DAPL) as a fabric-independent API to run on fast interconnects like InfiniBand* or Myrinet*. Currently the Intel MPI Library supports DAPL* versions 1.1 and 1.2, as well as DAPL* 2.0-capable providers. Intel MPI automatically determines the version of the DAPL standard to which the provider conforms.

  • What compilers does the Intel® MPI Library 4.1 support?
  • The Intel® MPI Library 4.1 supports Intel® Compilers 11.1 through 12.1 (or higher), as well as the GNU* C, C++, and Fortran 77 compilers 3.3 or higher, and the GNU* Fortran 95 compiler 4.0 or higher. Additionally, the Intel® MPI Library 4.1 provides an unbundled source kit that offers support for the PGI* C, PGI* Fortran 77, and Absoft* Fortran 77 compilers out of the box, with the following caveats:

    • Your PGI* compiled source files must not transfer long double entities
    • Your Absoft* based build procedure must use the -g77, -B108 compiler options
    • You must take care of installing and selecting the right compilers yourself
    • You must make sure that the respective compiler runtime is installed on all nodes

    You may have to build extra Intel® MPI binding libraries if you need support for PGI* C++, PGI* Fortran 95, and Absoft* Fortran 95 bindings. If you need access to this additional binding kit, contact us via the Intel® Premier Support portal at http://premier.intel.com
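
    For reference, the library's compiler drivers select the underlying compiler; the file names below are illustrative:

    mpiicc   -o app app.c     # Intel® C compiler
    mpiicpc  -o app app.cpp   # Intel® C++ compiler
    mpiifort -o app app.f90   # Intel® Fortran compiler
    mpicc    -o app app.c     # GNU* C compiler
    mpif90   -o app app.f90   # GNU* Fortran compiler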

  • Does the Intel® MPI Library 4.1 work with any common resource managers?
  • Yes. The Intel® MPI Library 4.1 supports OpenPBS*, PBS Pro*, Torque, and LSF* job schedulers. The simplified job startup command mpirun recognizes when it is run inside a session started by any PBS compatible resource manager (like OpenPBS*, PBS Pro*, Torque*), as well as LSF*. See the Intel® MPI Library 4.1 Reference Manual for a description of this command.

  • I have a mixed application which uses both MPI and OpenMP* calls. Does the Intel® MPI Library 4.1 support this type of hybrid functionality?
  • Yes, Intel MPI does support mixed MPI/OpenMP applications.
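    As a sketch of such a hybrid program (the -openmp flag in the build line applies to the Intel® compilers of this generation; -mt_mpi links the thread-safe library, as described under Technical below):

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, provided;
        /* FUNNELED: only the main thread makes MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        #pragma omp parallel
        printf("rank %d, thread %d\n", rank, omp_get_thread_num());
        MPI_Finalize();
        return 0;
    }

    mpiicc -mt_mpi -openmp -o hybrid hybrid.c
    mpirun -n 2 ./hybrid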

Technical

  • Is the Intel® MPI Library 4.1 fault-tolerant?
  • Yes, to an extent. Note that the MPI standard does not yet define proper handling of aborted MPI ranks. By default, the Intel® MPI Library 4.1 will stop the entire application if any of the processes exit abnormally. This behavior can be overridden via a runtime option whereby the library allows an application to continue execution even if one of the processes stops responding. Check the Intel® MPI Library 4.1 Reference Manual for details and application requirements.

  • Is the Intel® MPI Library 4.1 thread safe?
  • Yes. The Intel® MPI Library has provided thread-safe libraries at the MPI_THREAD_MULTIPLE level since version 3.0. Several threads can make Intel MPI Library calls simultaneously. Use the compiler driver -mt_mpi option to link the thread-safe version of the Intel MPI Library. Use the thread-safe libraries if you request thread support at any of the following levels:

    MPI_THREAD_FUNNELED,
    MPI_THREAD_SERIALIZED, or
    MPI_THREAD_MULTIPLE.

    The previous versions of the Intel MPI Library provide only MPI_THREAD_NONE and MPI_THREAD_FUNNELED levels in terms of the MPI-2 standard.
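
    A minimal sketch of requesting full thread support and checking what the library actually granted (link with -mt_mpi as noted above):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("runtime granted a lower thread level: %d\n", provided);
        /* ... several threads may now make MPI calls concurrently ... */
        MPI_Finalize();
        return 0;
    }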

  • How can I find out which version of the Intel® MPI Library is installed on the system?
  • If you are running the Intel® MPI Library 2.0.x or later, try running mpiexec -V:

    mpiexec -V
    This will output version information.

    If this is an official package, look up the mpisupport.txt file or the Release Notes and search for version information there:
    cat /opt/intel/mpi/2.0.1/mpisupport.txt

    If Intel MPI has been installed in RPM mode, try to query the RPM database:
    rpm -qa | grep intel-mpi

    Finally, for full build identification information, set I_MPI_VERSION to 1 and run any MPI program, grepping for "Build":
    mpiexec -n 2 -env I_MPI_VERSION 1 ./a.out | grep -i build
    This will turn up a couple of lines with the build date. Most of this information is also embedded into the library and can be queried using the strings utility:
    strings /opt/intel/mpi/2.0.1/lib/libmpi.so | grep -i build


Intel® MPI Library 4.1

Getting Started

Click the Learn tab for guides and links that will quickly get you started.

Get Help or Advice

Search Support Articles
Forums - The best place for timely answers from our technical experts and your peers. Use it even for bug reports.
Support - For secure, web-based, engineer-to-engineer support, visit our Intel® Premier Support web site. Intel Premier Support registration is required.
Download, Registration and Licensing Help - Specific help for download, registration, and licensing questions.

Resources

Release Notes - View Release Notes online!
Intel® MPI Library Product Documentation  - View documentation online!
Documentation for other software products