Intel® MPI Library for Linux* Tips and Tricks - FAQ: Part 1 of 2


Why does my program fail at runtime with a "Null Comm pointer" error?
In most cases, the "Null Comm pointer" error is caused by an MPI header mismatch. Verify that your program does not pick up headers from another MPI implementation.
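As a quick check (a sketch, assuming the Intel MPI compiler wrapper is on PATH and supports -show, which prints the underlying compile line), inspect which include paths the wrapper passes to the compiler; the first -I entry reveals whose mpi.h is used:

```shell
# Sketch: if the Intel MPI compiler wrapper is available, -show prints
# the real compile line without compiling anything; the -I paths show
# which mpi.h is picked up first.
if command -v mpicc >/dev/null 2>&1; then
  mpicc -show
else
  echo "mpicc not found on PATH"
fi
```

If a foreign MPI's include directory appears before the Intel MPI one, reorder the include paths or clean up the environment.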

At what values is the amount of debug information increased?
The reasonable values are 2, 3, 10, 20, 30, and 200. The higher the value, the more debug information is provided. We recommend using 1001 to get all available information.
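For example (a dry-run sketch; I_MPI_DEBUG is assumed here to be the environment variable that controls the debug level):

```shell
# Sketch (dry run): compose a launch line with debug level 2;
# I_MPI_DEBUG is assumed to control the amount of debug output.
cmd="mpiexec -env I_MPI_DEBUG 2 -n 2 ./a.out"
echo "$cmd"   # prints the command; run it directly on a real MPD ring
```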

Why do I get "Permission denied" diagnostic when I start an MPD ring?
I enter the following command and get "Permission denied" diagnostic:

headnode1 ~--> mpdboot -n 5 -r rsh 
Permission denied.
Permission denied. 
headnode1 ~--> mpdtrace -l

Make sure that all the nodes, not only the head node used for the mpdboot start, can connect to each other via rsh/ssh without a password.
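The check above can be sketched as a loop over the nodes (host names here are hypothetical). BatchMode=yes makes ssh fail immediately instead of prompting for a password, so any node that still requires one shows up as a failure:

```shell
# Sketch: probe passwordless login to every node (names hypothetical).
# The echo makes this a dry run; remove it to actually run the probes.
for host in node1 node2 node3; do
  echo ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true
done
```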

How do I uninstall the product if the product directory was removed?

Use the rpm -qa command to find the exact name of the Intel® MPI Library package, and the rpm --erase <package> command to remove the Intel® MPI Library from the system.

How do I pin processes to prevent undesired migration?

The Intel MPI Library automatically pins processes to CPUs to prevent undesired process migration. Use the I_MPI_PIN, I_MPI_PIN_MODE, I_MPI_PIN_PROCESSOR_LIST, and I_MPI_PIN_DOMAIN environment variables to control process pinning. Set I_MPI_PIN_DOMAIN to run hybrid (multithreaded) applications. For example:

mpiexec -env I_MPI_DEVICE shm -env OMP_NUM_THREADS 4 -env I_MPI_PIN_DOMAIN omp -np 2 ./prog

See the Intel MPI Library Reference Manual for more details.

How do I use an alternative IP interface for MPI communication?

Put the respective IP addresses or host names into the mpd.hosts file when you start the MPD* ring. If these addresses correspond to the desired fast network, the Intel MPI job will use this network for all communication. In this way you can select, for example, IP over IBA*, IP over Myrinet*, or an alternative Gigabit Ethernet* network if it is available in your system. Use either the IP addresses or the host names consistently in the mpd.hosts file and in the mpiexec invocation string.
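A minimal sketch of this setup (the addresses are hypothetical; mpdboot is assumed to accept a hosts file via -f):

```shell
# Sketch: put the fast-network IP addresses (hypothetical here) into
# mpd.hosts, then boot the ring from that file.
cat > mpd.hosts <<'EOF'
192.168.10.1
192.168.10.2
192.168.10.3
EOF
echo mpdboot -n 3 -f mpd.hosts   # dry run; remove echo to start the ring
```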

Alternatively, use the I_MPI_NETMASK environment variable to choose the network interface for MPI communication over sockets. For example:

mpiexec -env I_MPI_DEVICE ssm -env I_MPI_NETMASK eth0 -np 2 ./prog

See the Intel MPI Library Reference Manual for more details.

How do I propagate shell limits?
The Intel MPI Library does not propagate shell limits across the job. Whatever limits are in effect at MPD ring startup on a particular node will be used by all subsequent Intel MPI Library jobs on that node.

To set and propagate the core size limit under the Bash shell, do the following:

mpiexec -n <#ranks> /bin/sh -c "ulimit -c 0 ; ./a.out"

Replace "ulimit -c 0;" with the limit-setting command you need. Equivalent commands can be used under other shells.

The /etc/security/limits.conf file may also be used to set global, per-group, or individual per-user default limits.
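For example, a limits.conf sketch (illustrative values, not a recommendation) that raises the core file size limit for all users:

```
# /etc/security/limits.conf sketch (illustrative):
# <domain> <type> <item> <value>
*    soft    core    unlimited
*    hard    core    unlimited
```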

How do I learn what version of the Intel MPI Library is installed on the system?
Use the mpiexec -V command to learn the Intel MPI Library version:

mpiexec -V

This will output version information.
If it is an official package, look it up in the mpisupport.txt file or in the Release Notes and search for version information there:

cat <install_dir>/mpisupport.txt

If the Intel MPI Library is installed in RPM mode, try to query the RPM database:

rpm -qa | grep intel-mpi

Finally, for full identification information, set I_MPI_VERSION to 1, run any MPI program, and grep the output for "Build":

mpiexec -n 2 -env I_MPI_VERSION 1 ./a.out | grep -i build

This will turn up a couple of lines with the build date. Most of this information is also embedded into the library and can be queried using the strings utility:

strings <install_dir>/lib/ | grep -i build

Is it possible to install the Intel® MPI Library on an exported share to use it from one place on all nodes?
Yes, it is possible to install the Intel MPI Library on an exported share.

For more complete information about compiler optimizations, see our Optimization Notice.