Performance Tools for Software Developers - Building Open MPI* with the Intel® compilers


Introduction
This guide is intended to help Intel® compiler customers build and use the Open MPI* library. Open MPI is a standards-compliant, open-source implementation of the Message Passing Interface (MPI), a library specification that lets parallel processes or threads exchange data in a parallel application.


Version information
Open MPI version: 1.8.3
Intel C++ and Fortran Compilers for Linux* or Mac OS* X: version 15.0

Application notes
Open MPI is a standards-compliant, open-source implementation of the Message Passing Interface, a library specification for parallel processes or threads to exchange data in a parallel application. The target environment for Open MPI can vary dramatically based on interconnect, adapters, node types, batch subsystem, etc. Please read all relevant documentation for your target system.

This application note demonstrates the framework for building Open MPI with the Intel compilers but does NOT claim to represent all possible configurations and variations of the build for all possible target environments.

Obtaining the source code
Open MPI is obtained from the Open MPI website, www.open-mpi.org†. Please review their licensing and download instructions for access to this code.

Obtaining the latest version of Intel C++ Compiler and Intel Fortran Compiler
Licensed users of the Intel compilers may download the most recent versions of the compilers from the Intel® Registration Center.
Others can download an evaluation copy from https://software.intel.com/en-us/articles/try-buy-tools.

Prerequisites
Hardware: This note applies to stand-alone computers with two or more cores as well as to distributed-memory clusters.

Software: This note applies to Open MPI built on Linux* (32- and 64-bit) and Mac OS* X using the 15.0 version of the Intel compilers. Please consult the Open MPI† website for a complete list of supported platforms and operating systems.
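
Before configuring Open MPI, it can help to confirm that the Intel compilers are on your PATH and report the expected version. For example (the compilervars.sh path shown is the typical default install location and may differ on your system):

source /opt/intel/bin/compilervars.sh intel64
icc --version
ifort --version

Both commands should report a 15.0 version.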

Configuration and set-up information
Open MPI uses an Autoconf "configure" script to determine the build environment and tools and to create the necessary build configuration. To build Open MPI, open a Mac OS* X Terminal window or a Linux* shell and run:

gunzip -c openmpi-1.8.3.tar.gz | tar xf -
cd openmpi-1.8.3
./configure --prefix=/usr/local CC=icc CXX=icpc FC=ifort
... output of configure ...
make all install
... output of build and installation ...

As shown above, the configure options CC, CXX, and FC specify which compilers are used to build Open MPI. The example uses both the Intel C++ Compiler (CC=icc CXX=icpc) and the Intel Fortran Compiler (FC=ifort). Note that the Intel C++ compiler driver is named 'icpc'; do NOT use 'icc' as the C++ compiler. The Intel compilers are GNU-compatible, so you may mix Intel and GNU compilers for C++ and Fortran; however, such mixing has not been tested with this application.

The configure script has many options. To see all of them, run "./configure --help" or read the README file provided in the Open MPI tar archive.
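
For example, you can page through the full option list or search it for a particular keyword:

./configure --help | less
./configure --help | grep -i fortran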

Specifying Intel compiler options in the configuration and build
To pass options to the compilers, configure supports the following variables:

CFLAGS= arguments to pass to the C compiler (icc)
CXXFLAGS= arguments to pass to the C++ compiler (icpc)
FCFLAGS= arguments to pass to the Fortran 90 compiler (ifort)

Please see the Intel Compiler Documentation for appropriate options. In general, MPI performance is dominated by interconnect fabric latency and, to a lesser extent, by bandwidth. A general rule of thumb is to use the default compiler optimizations and avoid overly aggressive optimizations.

The default optimization level for the Intel compilers, when no -O option is specified, is -O2. The -O2 optimizations provide a good level of optimization and safety for both the Intel C++ Compiler and the Intel Fortran Compiler. The Open MPI build defaults already use a reasonable optimization level.
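
If you do want to pass explicit flags, they can be supplied on the configure command line together with the compiler selections. The example below is only a sketch that passes the default -O2 level explicitly; substitute whatever options are appropriate for your target system:

./configure --prefix=/usr/local CC=icc CXX=icpc FC=ifort \
    CFLAGS="-O2" CXXFLAGS="-O2" FCFLAGS="-O2"
make all install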

Building MPI applications
Once Open MPI is installed, set your PATH environment variable to include the <installation directory>/bin directory. Remember from above that the <installation directory> is set by the --prefix argument passed to configure; typically this is /usr/local. Thus, for the bash shell on Linux* and Mac OS* X, one would:

export PATH=/usr/local/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}


and on Mac OS* X you should also set:
export DYLD_LIBRARY_PATH=/usr/local/lib:${DYLD_LIBRARY_PATH}

You should consider making this part of your default user environment by adding these export statements to your ~/.profile startup file.
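
Once the environment is set, you can verify that the newly installed wrappers are the ones being found. For example, "which mpicc" should report the wrapper under your --prefix directory, and the ompi_info utility (installed alongside the wrappers) reports, among other things, which compilers Open MPI was built with:

which mpicc
ompi_info | grep -i compiler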

Once that is done, use the Open MPI compiler wrappers to compile MPI applications.
The C wrapper is named mpicc.
The C++ wrapper has three equivalent names: mpicxx, mpiCC, and mpic++. Use any one of these.
The Fortran 77 wrapper is named mpif77.
The Fortran 90 wrapper is named mpif90.
These compiler wrappers invoke the Intel compilers and link all necessary Open MPI libraries.

For example:

mpicc -o mpi_pong mpi_pong.c
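
If you want to confirm that a wrapper invokes the Intel compiler, the Open MPI wrappers accept a --showme option that prints the underlying compiler command line without executing it, for example:

mpicc --showme -o mpi_pong mpi_pong.c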

See the Open MPI FAQ information on building applications for more extensive details and information.

Running an application under Open MPI
Open MPI provides the commands mpirun and mpiexec to launch parallel applications. Below is a simple example that runs a 2-process MPI application:

mpirun -np 2 ./mpi_pong

There are many possible options to mpirun and mpiexec, as well as interactions with batch scheduling systems such as PBS. Please see the Open MPI FAQ for all the specifics for your system or cluster.
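
For example, to run across multiple nodes you can supply a host file. In the sketch below, hosts is a hypothetical file name and node01/node02 are placeholders for your own node names:

node01 slots=2
node02 slots=2

With that file in place, the following launches four processes across the two nodes:

mpirun -np 4 --hostfile hosts ./mpi_pong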

Benefits
This article shows how to build Open MPI* using the Intel compilers.

Known issues and limitations
At this time, there are no known issues with using the Intel compilers to build Open MPI. Please see the Open MPI† website for all known issues and limitations.

† This link will take you off the Intel Web site. Intel does not control the content of the destination Web site.


Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

For more complete information about compiler optimizations, see our Optimization Notice.