Using the new TMI fabric for Intel MPI Library

I heard the Intel® MPI Library has a fancy new fabric to enable support for tag matching interfaces (TMI) such as Intel's True Scale PSM* and Myrinet* MX*. How do I enable it?

It’s as easy as 1-2-3:

  1. Create a tmi.conf file for your chosen fabric driver. You can find an example tmi.conf file in the <impi_install_dir>/etc[64] directory.
    • If your cluster is running the Myrinet* MX* drivers, your tmi.conf file should have the following contents:
      $ cat tmi.conf
      # TMI provider configuration
      mx 1.0 libtmip_mx.so " " # comments ok
    • If your cluster is running Intel's True Scale PSM* drivers, your tmi.conf file should have the following contents:
      $ cat tmi.conf
      # TMI provider configuration
      psm 1.0 libtmip_psm.so " " # comments ok

    Make sure your LD_LIBRARY_PATH includes the Intel MPI Library lib[64] directory. You can do that easily by sourcing the provided mpivars.[c]sh scripts.
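
    For example, assuming a bash-compatible shell and the <impi_install_dir> placeholder used above (the script name and exact path may vary with your product version and installation):

    $ source <impi_install_dir>/bin64/mpivars.sh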

    Alternatively, you can specify the full path to the TMI library in the tmi.conf file as illustrated in the following example for the PSM* interface:

    $ cat tmi.conf
    # TMI provider configuration
    psm 1.0 /opt/intel/impi/5.0/lib64/libtmip_psm.so " " # comments ok


  2. Position the new tmi.conf file appropriately. The default location for this configuration file is /etc/tmi.conf. If the /etc directory is not shared amongst all nodes, manually copy the tmi.conf file across the cluster.
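
    For example, assuming passwordless ssh, root privileges on the nodes, and a hypothetical hosts file that lists one node name per line, you could distribute the file with scp:

    $ for node in $(cat hosts); do scp /etc/tmi.conf ${node}:/etc/; done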

    Alternatively, you can use the TMI_CONFIG environment variable to point to a new location, as follows:

    $ export TMI_CONFIG=/home/myid/tmi.conf

    If you use the TMI_CONFIG variable as specified, the Intel MPI Library will use the new /home/myid/tmi.conf configuration file instead of /etc/tmi.conf.


  3. Enable the tmi fabric by setting the I_MPI_FABRICS environment variable before launching your Intel MPI Library job, as follows:

    $ export I_MPI_FABRICS=shm:tmi # for shared memory and TMI communication
    or
    $ export I_MPI_FABRICS=tmi # for TMI-only communication

    Alternatively, this can be selected at runtime on your command line:

    $ mpirun -genv I_MPI_FABRICS shm:tmi -n 2 ./exe
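
    To confirm which fabric was actually selected, you can raise the debug level with the I_MPI_DEBUG variable (a level of 2 or higher normally reports the chosen fabric in the startup output):

    $ mpirun -genv I_MPI_DEBUG 2 -genv I_MPI_FABRICS shm:tmi -n 2 ./exe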


You can find additional information in the Intel MPI Library Reference Manual, or contact us by posting at the Intel® Clusters and HPC Technology Forums.
