Product limitations


Special features and known issues

  • The Intel® MPI Library Development Kit package is layered on top of the Runtime Environment package. See the Intel® MPI Library for Linux Installation Guide for more details.

  • The default installation path for the Intel® MPI Library has changed to /opt/intel/impi/3.2. If necessary, the installer establishes a symbolic link from the expected default RTO location to the actual RTO or SDK installation location.

  • The Intel® MPI Library automatically places consecutive MPI processes onto all processor cores. Use the I_MPI_PIN and related environment variables to control process pinning. See the Intel® MPI Library Reference Manual for more details.

  • The Intel® MPI Library provides thread-safe libraries at level MPI_THREAD_MULTIPLE. Follow these rules:

    • Use the -mt_mpi option of the Intel® MPI Library compiler drivers to build a thread-safe MPI application

    • Do not load the thread-safe Intel MPI libraries through dlopen(3)
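
    For illustration, a hypothetical thread-safe application test.c could be built and launched as follows (the source file name and process count are placeholders):

# mpiicc -mt_mpi test.c -o test_mt
# mpiexec -n 4 ./test_mt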

  • To run a mixed Intel MPI/OpenMP* application, do the following:

    • Use the thread-safe version of the Intel® MPI Library by specifying the -mt_mpi compiler driver option

    • Set I_MPI_PIN_DOMAIN to select the desired process pinning scheme. The recommended setting is I_MPI_PIN_DOMAIN=omp

    • Note that I_MPI_PIN_DOMAIN has no effect on Itanium®-based SGI* Altix* systems

    • See the Intel® MPI Library Reference Manual for more details
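
    For example, a hypothetical hybrid MPI/OpenMP* source file hybrid.c could be built with the thread-safe library and launched with the recommended pinning scheme as follows (the file name, process count, and thread count are placeholders):

# mpiicc -mt_mpi -openmp hybrid.c -o hybrid
# mpiexec -genv I_MPI_PIN_DOMAIN omp -genv OMP_NUM_THREADS 4 -n 8 ./hybrid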

  • Intel® MKL 10.0 may create multiple threads depending on various conditions. Follow these rules to use Intel® MKL correctly:

    • (SDK only) Use the thread-safe version of the Intel® MPI Library in conjunction with Intel® MKL by specifying the -mt_mpi compiler driver option

    • Set the OMP_NUM_THREADS environment variable to 1 to run an application linked against the non-thread-safe version of the Intel® MPI Library
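
    For example, an application linked against the non-thread-safe library could be launched with sequential Intel® MKL as follows (the executable name and process count are placeholders):

# mpiexec -genv OMP_NUM_THREADS 1 -n 16 ./mkl_app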

  • The Intel® MPI Library uses dynamic connection establishment by default for 64 or more processes. To always establish all connections upfront, set the I_MPI_USE_DYNAMIC_CONNECTIONS environment variable to "disable".
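
    For example, all connections can be established upfront as follows (the executable name and process count are placeholders):

# mpiexec -genv I_MPI_USE_DYNAMIC_CONNECTIONS disable -n 128 ./myapp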

  • Intel® MPI Library compiler drivers embed the actual Development Kit library path (default /opt/intel/impi/<version>.<package_num>) and the default Runtime Environment library path (/opt/intel/mpi-rt/<version>.<package_num>) into the executables using the -rpath linker option.

  • Use the LD_PRELOAD environment variable to preload the appropriate Intel® MPI binding library in order to start an MPICH2* Fortran application in the Intel® MPI Library environment.
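
    For example (the binding library path and executable name are placeholders; use the binding library that matches your compiler):

# mpiexec -genv LD_PRELOAD <path to binding library> -n 4 ./mpich2_app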

  • The Intel® MPI Library enhances message-passing performance on DAPL*-based interconnects by maintaining a cache of virtual-to-physical address translations in the MPI DAPL* data transfer path.

    Set the environment variable LD_DYNAMIC_WEAK to "1" if your program dynamically loads the standard C library before dynamically loading the Intel MPI library. Alternatively, use the environment variable LD_PRELOAD to load the Intel MPI library first.

    To disable the translation cache completely, set the environment variable I_MPI_RDMA_TRANSLATION_CACHE to "disable". Note that you do not need to set the aforementioned environment variables LD_DYNAMIC_WEAK or LD_PRELOAD when you disable the translation cache.
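
    For example, either of the following command lines addresses the issue (the executable name and process count are placeholders):

# mpiexec -genv LD_DYNAMIC_WEAK 1 -n 4 ./myapp
# mpiexec -genv I_MPI_RDMA_TRANSLATION_CACHE disable -n 4 ./myapp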

  • (SDK only) Always link the standard libc libraries dynamically if you use the RDMA or RDSSM devices to avoid possible segmentation faults. It is safe to link the Intel MPI library statically in this case. Use the -static_mpi option of the compiler drivers to link the libmpi library statically. This option does not affect the default linkage method for other libraries.
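
    For example, a hypothetical application myapp.c could be linked with a static libmpi and dynamic libc as follows (the file name is a placeholder):

# mpiicc -static_mpi myapp.c -o myapp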

  • Certain DAPL* providers may not work with the Intel® MPI Library, for example:

    • Voltaire* Gridstack*. Contact Voltaire* or download an alternative OFED* DAPL* provider from the OpenFabrics Alliance† website

    • Qlogic* QuickSilver Fabric*. Set the I_MPI_DYNAMIC_CONNECTION_MODE variable to disconnect as a workaround, contact Qlogic*, or download an alternative OFED* DAPL* provider from the OpenFabrics Alliance† website

    • Myricom* DAPL* provider. Contact Myricom* or download an alternative DAPL* provider from the DAPL Provider for Myrinet† project on SourceForge.net. The alternative DAPL* provider for Myrinet* supports both the GM* and MX* interfaces

  • The GM* DAPL* provider may not work with the Intel® MPI Library for Linux* with certain versions of the GM* drivers. Set I_MPI_RDMA_RNDV_WRITE=1 to avoid this issue.

  • Certain DAPL* providers may not function properly if your application uses the system(3), fork(2), vfork(2), or clone(2) system calls. Do not use these system calls, or functions based upon them, with:

    • OFED* DAPL* provider with a Linux* kernel version earlier than official version 2.6.16. Set the RDMAV_FORK_SAFE environment variable to enable the OFED* workaround with a compatible kernel version.

  • The Intel® MPI Library does not support heterogeneous clusters of mixed architectures and/or operating environments.

  • The Intel® MPI Library requires Python* 2.2 or higher for process management.

  • The Intel® MPI Library requires the python-xml* package or its equivalent on each node in the cluster for process management. For example, the following OS does not have this package installed by default:

    • SUSE* Linux Enterprise Server 9

  • The Intel® MPI Library requires the expat* or pyxml* package, or an equivalent XML parser on each node in the cluster for process management.

  • The following MPI-2 features are not supported by the Intel® MPI Library:

    • Process spawning and attachment

  • If installation of the Intel® MPI Library package fails and shows the error message: "Intel® MPI Library already installed" when a package is not actually installed, try the following:

1. Determine the package number that the system believes is installed by typing:

# rpm -qa | grep intel-mpi

This command returns the Intel® MPI Library <package name>.

2. Remove the package from the system by typing:

# rpm -e <package name>

3. Re-run the Intel® MPI Library installer to install the package.

TIP: To avoid installation errors, always remove Intel® MPI Library packages using the uninstall script provided with the package before trying to install a new package or reinstall an older one.

  • Due to an installer limitation, avoid installing earlier releases of the Intel® MPI Library packages after having already installed the current release. It may corrupt the installation of the current release and require that you uninstall/reinstall it.

  • Certain operating system versions have a bug in the rpm command that prevents installation in any location other than the default one. In this case, the installer does not offer the option to install in an alternate location.

  • If the mpdboot command fails to start up the MPD, verify that the Intel® MPI Library package is installed in the same path/location on all the nodes in the cluster. To solve this problem, uninstall and re-install the Intel® MPI Library package while using the same <installdir> path on all nodes in the cluster.

  • If the mpdboot command fails to start up the MPD, verify that all cluster nodes have the same Python* version installed. To avoid this issue, always install the same Python* version on all cluster nodes.
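
    As a quick check before running mpdboot, compare the Python* version on the local and remote nodes, for example (the node name and hosts file are placeholders):

# python -V
# ssh node01 python -V
# mpdboot -n 2 -f mpd.hosts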

  • The presence of environment variables with non-printable characters in the user environment settings may cause process startup to fail. To work around this issue, the Intel® MPI Library does not propagate environment variables with non-printable characters across the MPD ring.

  • A program cannot be executed when it resides in the current directory but "." is not in the PATH. To avoid this error, either add "." to the PATH on ALL nodes in the cluster or use the explicit path to the executable or ./<executable> in the mpiexec command line.
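
    For example, when the executable resides in the current working directory (the executable name and process count are placeholders):

# mpiexec -n 4 ./myapp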

  • The Intel® MPI Library 2.0 and higher supports PMI wire protocol version 1.1. Note that this information is specified as:

pmi_version = 1
pmi_subversion = 1

instead of

pmi_version = 1.1

as done by the Intel® MPI Library 1.0.

  • The Intel® MPI Library requires the presence of the /dev/shm device in the system. To avoid failures related to the inability to create a shared memory segment, make sure the /dev/shm device is set up correctly.
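
    For example, the following standard Linux* commands show whether /dev/shm is mounted and has free space:

# mount | grep /dev/shm
# df -h /dev/shm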

  • (SDK only) Certain GNU* C compilers may generate code that leads to inadvertent merging of some output lines at runtime. This happens when different processes write simultaneously to the standard output and standard error streams. In order to avoid this, use the -fno-builtin-printf option of the respective GNU* compiler while building your application.
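
    For example, a hypothetical source file myapp.c could be built with the GNU* C compiler driver as follows (the file name is a placeholder):

# mpicc -fno-builtin-printf myapp.c -o myapp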

  • (SDK only) Certain versions of the GNU* libc library define free()/realloc() symbols as non-weak. Use the ld --allow-multiple-definition option to link your application.
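
    For example, the linker option can usually be passed through a compiler driver with the -Wl, prefix (the file names are placeholders):

# mpiicc myapp.o -o myapp -Wl,--allow-multiple-definition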

  • (SDK only) A known exception handling incompatibility exists between GNU* C++ compilers version 3.x and earlier on the one hand, and version 4.x on the other. Use the special -gcc-version=<nnn> option for the compiler drivers mpicxx and mpiicpc to link an application for running in a particular GNU* C++ environment. The valid <nnn> values are:

    • 320 if GNU* C++ version is 3.2.x

    • 330 if GNU* C++ version is 3.3.x

    • 340 if GNU* C++ version is 3.4.x

    • 400 if GNU* C++ version is 4.0.x

    • 410 if GNU* C++ version is 4.1.x

A library compatible with the detected version of the GNU* C++ Compiler is used by default. Do not use this option if the gcc version is older than 3.2.
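
    For example, to link a hypothetical C++ application for a GNU* C++ 3.4.x environment (the file name is a placeholder):

# mpicxx -gcc-version=340 myapp.cpp -o myapp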

  • (SDK only) The Fortran 77 and Fortran 90 tests in the <installdir>/test directory may produce warnings when compiled with the mpif77, etc. compiler commands. You can safely ignore these warnings, or add the -w option to the compiler command line to suppress them.

  • (SDK only) In order to use GNU* Fortran Compiler version 4.0 or higher, use the mpif90 compiler driver.

  • (SDK only) A known module file format incompatibility exists between different versions of the GNU* Fortran 95 compiler. The Intel MPI Library mpif90 compiler driver automatically uses the appropriate MPI module.

  • (SDK only) Perform the following steps to generate bindings for a compiler that is not directly supported by the Intel® MPI Library:

1. Go to the binding directory

# cd <installdir>/binding

2. Extract the binding kit

# tar -zxvf intel-mpi-binding-kit.tar.gz

3. Follow instructions in the README-intel-mpi-binding-kit.txt

  • (SDK only) In order to use the Intel® Debugger, set the IDB_HOME environment variable to point to the Intel® Debugger installation location.
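
    A hedged sketch, assuming the mpiexec -idb launch option is available and <idb installdir> stands for the directory that contains the idb executable (the application name and process count are placeholders):

# export IDB_HOME=<idb installdir>
# mpiexec -idb -n 4 ./myapp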

  • The Eclipse* PTP 1.0 GUI process launcher is not available on the Itanium® based platform.

  • (SDK only) Use the following command to launch an Intel MPI application with Valgrind* 3.3.0:

# mpiexec -n <# of processes> <other_mpiexec_options> valgrind \
--leak-check=full --undef-value-errors=yes \
--log-file=<logfilename>.%p \
--suppressions=<installdir>/etc/valgrind.supp  <executable>

where:
<logfilename>.%p - log file name for each MPI process
<installdir> - the Intel® MPI Library installation path
<executable> - name of the executable file

 


Operating System:

SUSE* Linux Enterprise Server 10, Red Hat* Enterprise Linux 5.0, SUSE* Linux Enterprise Server 9, Red Hat* Enterprise Linux 4.0

 


For more complete information about compiler optimizations, see our Optimization Notice.