Using the new TMI fabric for Intel MPI Library

By Gergana S. Slavova

Published: 05/13/2010   Last Updated: 08/12/2014

I heard the Intel® MPI Library has a fancy new fabric to enable support for tag matching interfaces (TMI) such as Intel's True Scale PSM* and Myrinet* MX*. How do I enable it?

It’s as easy as 1-2-3:

  1. Create a tmi.conf file for your chosen fabric driver. You can find an example tmi.conf file in the <impi_install_dir>/etc[64] directory.
    • If your cluster is running the Myrinet* MX* drivers, your tmi.conf file should have the following contents:
      $ cat tmi.conf
      # TMI provider configuration
      mx 1.0 libtmip_mx.so " " # comments ok
    • If your cluster is running Intel's True Scale PSM* drivers, your tmi.conf file should have the following contents:
      $ cat tmi.conf
      # TMI provider configuration
      psm 1.0 libtmip_psm.so " " # comments ok

    Make sure your LD_LIBRARY_PATH includes the Intel MPI Library lib[64] directory. You can do that easily by sourcing the provided mpivars.[c]sh scripts.
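
    For example, assuming a typical installation layout (the exact location of the mpivars.[c]sh script varies between Intel MPI Library versions, so adjust the path for your setup), you could run:

    $ source <impi_install_dir>/bin[64]/mpivars.sh
    $ echo $LD_LIBRARY_PATH    # should now include the Intel MPI Library lib[64] directory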

    Alternatively, you can specify the full path to the TMI library in the tmi.conf file as illustrated in the following example for the PSM* interface:

    $ cat tmi.conf
    # TMI provider configuration
    psm 1.0 /opt/intel/impi/5.0/lib64/libtmip_psm.so " " # comments ok


  2. Position the new tmi.conf file where the Intel MPI Library can find it. The default location for this configuration file is /etc/tmi.conf. If the /etc directory is not shared among all nodes, manually copy the tmi.conf file to every node in the cluster.
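
    For example, here is a minimal sketch that pushes the file out with scp, assuming a plain-text file named hosts that lists one node per line and that you have write access to /etc on every node:

    $ for node in $(cat ./hosts); do scp /etc/tmi.conf ${node}:/etc/tmi.conf; done   # "hosts" is a hypothetical node list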

    Alternatively, you can use the TMI_CONFIG environment variable to point to a new location, as follows:

    $ export TMI_CONFIG=/home/myid/tmi.conf

    If you use the TMI_CONFIG variable as specified, the Intel MPI Library will use the new /home/myid/tmi.conf configuration file instead of /etc/tmi.conf.


  3. Enable the use of the tmi fabric on your Intel MPI Library command line by setting the I_MPI_FABRICS environment variable as follows:

    $ export I_MPI_FABRICS=shm:tmi # for shared memory and TMI communication
    or
    $ export I_MPI_FABRICS=tmi # for TMI-only communication

    Alternatively, you can set this at runtime directly on your mpirun command line:

    $ mpirun -genv I_MPI_FABRICS shm:tmi -n 2 ./exe
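
    To verify which fabric was actually selected, you can also raise the debug level with the I_MPI_DEBUG variable; at level 2 or higher, the library reports the chosen data transfer modes (for example, shm and tmi) during MPI startup. The exact output wording may differ between versions:

    $ mpirun -genv I_MPI_FABRICS shm:tmi -genv I_MPI_DEBUG 2 -n 2 ./exe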


You can find additional information in the Intel MPI Library Reference Manual, or contact us by posting at the Intel® Clusters and HPC Technology Forums.
