Using Intel® MKL MPI wrapper with the Intel® MKL cluster functions

 

1. Introduction

Intel MKL 11.3 introduced wrapper source code for the MPI interface to Intel MKL, which helps users use the Intel® MKL cluster functions with custom MPI libraries.

While different MPI libraries are compatible at the application programming interface (API) level, they are often incompatible at the application binary interface (ABI) level. Therefore, Intel MKL provides several libraries to support the different MPIs. For example, one should link with libmkl_blacs_lp64.a to use an application with MPICH*, and with libmkl_blacs_openmpi_lp64.a to use Open MPI*. If users link the Intel MKL cluster functions against a custom MPI library that is not supported by Intel MKL, the application may produce unexpected results.
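For illustration, a dynamic link line for a ScaLAPACK application built with Open MPI* on Linux* might look like the sketch below. The source file cluster_app.c is hypothetical, and the exact set of libraries depends on the compiler, threading layer, and interface; the Intel MKL Link Line Advisor gives the definitive list for a given configuration:

   >mpicc cluster_app.c -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_openmpi_lp64 -liomp5 -lpthread -lm -ldl

With a different MPI implementation, only the BLACS library in this line changes.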

Note that only the Intel MKL cluster functions (Cluster Sparse Solver, Cluster FFT, ScaLAPACK, BLACS) depend on the MPI implementation; other Intel MKL functions have no such dependency.

Intel MKL 11.3 added MPI wrapper code so that users can extend the use of the Intel MKL cluster functions to custom MPI libraries: users can build a custom BLACS library from the wrapper code to add support for their MPI library to the MKL cluster functions.

 

2. Using Intel MKL MPI wrapper code

2.1 Building the Intel MKL wrapper code

The MKL MPI adaptor is provided as source code, which can be found in the MKLROOT/interfaces/mklmpi directory. To build a custom BLACS library, run make (nmake on Windows) from this directory with the corresponding parameters. For example:

   >make sointel64 interface=ilp64 MPICC='mpicc'

This command builds the custom dynamic MKL BLACS library with the default name libmkl_blacs_custom_ilp64.so and puts it in the default directory ../../lib/intel64. For more information, run "make help" (or "nmake help" on Windows).
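Similarly, a static LP64 build on Linux* can be requested with the libintel64 target; this target name mirrors the Windows* example in section 3, so please confirm it against "make help" for your MKL version:

   >make libintel64 interface=lp64 MPICC='mpicc'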

2.2 Linking with the custom MKL BLACS library

To link with the custom MKL BLACS library, you only need to replace the default Intel MKL BLACS library with the custom one (for example, replace mkl_blacs_intelmpi with mkl_blacs_custom).
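For example, the illustrative Linux* link line from the introduction changes only in the BLACS library name (again a sketch, with the hypothetical source file cluster_app.c):

   >mpicc cluster_app.c -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_custom_lp64 -liomp5 -lpthread -lm -ldl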

Note that when you link with the dynamic library on a Windows* system, you should link with mkl_blacs_lp64_dll.lib; before running the application, you need to set the MKL_BLACS_MPI environment variable. For example, set it to CUSTOM if the custom MKL BLACS library keeps its default name (mkl_blacs_custom_lp64.dll for the LP64 interface, mkl_blacs_custom_ilp64.dll for ILP64), or set it to whatever file name you used for the custom MKL BLACS library (see the example after the notes below).

  • Please note that the dynamic library should be located in the Intel MKL redist directory or in the application directory.
  • Besides the MKL_BLACS_MPI environment variable, you can use the mkl_set_mpi function that was introduced in Intel MKL 11.3.
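For example, on Windows* the run-time setup might look like the sketch below. The copy destination (the application directory) and the renamed library my_blacs_lp64.dll are only illustrations, and the redist path may differ between MKL versions:

    \>copy "%MKLROOT%\..\redist\intel64\mkl\mkl_blacs_custom_lp64.dll" .
    \>set MKL_BLACS_MPI=CUSTOM
    \>rem Or, if the custom library file was renamed:
    \>set MKL_BLACS_MPI=my_blacs_lp64.dll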

3. An example: Using the Intel MKL MPI wrapper with Open MPI* on Windows*

Suppose Open MPI is installed at C:\Program Files (x86)\OpenMPI_v1.6.2-x64 and the %MPIROOT% environment variable is set to this installation path.
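For example, the variable can be set from the command prompt:

    \>set MPIROOT=C:\Program Files (x86)\OpenMPI_v1.6.2-x64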

3.1 Build the custom MKL BLACS library:

  1. Go to the MKL MPI adaptor directory:
    \>cd %MKLROOT%\interfaces\mklmpi\
  2. Set environment for Open MPI:
    \>set PATH=%MPIROOT%\bin;%PATH%
    \>set LIB=%MPIROOT%\lib;%LIB%
    \>set INCLUDE=%MPIROOT%\include;%INCLUDE%
  3. Build static MKL BLACS library for LP64 interface with default name mkl_blacs_custom_lp64.lib:

    \>nmake libintel64 MPICC="mpicc -DOMPI_IMPORTS -DOPAL_IMPORTS -DORTE_IMPORTS" INSTALL_DIR=%MKLROOT%\lib\intel64 (**)

 

  4. Build dynamic MKL BLACS library for LP64 interface with default name mkl_blacs_custom_lp64.dll:
    \>nmake dllintel64 MPICC="mpicc -DOMPI_IMPORTS -DOPAL_IMPORTS -DORTE_IMPORTS" INSTALL_DIR=%MKLROOT%\..\redist\intel64\mkl
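To verify that both builds succeeded, you can check that the libraries appeared in the chosen INSTALL_DIR locations (illustrative):

    \>dir %MKLROOT%\lib\intel64\mkl_blacs_custom_lp64.lib
    \>dir %MKLROOT%\..\redist\intel64\mkl\mkl_blacs_custom_lp64.dll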

3.2 Using the library with the Intel MKL example:

 

  1. Go to the Parallel Direct Sparse Solver for Clusters example directory:
    \>cd %MKLROOT%\examples\cluster_sparse_solverc\source
     
  2. Set the environment for MKL:
    \>%MKLROOT%\bin\mklvars.bat intel64
     
  3. Build and run the application in static linking mode:
    \>mpicc -DOMPI_IMPORTS -DOPAL_IMPORTS -DORTE_IMPORTS cl_solver_sym_sp_0_based_c.c /link mkl_blacs_custom_lp64.lib mkl_intel_lp64.lib mkl_intel_thread.lib mkl_core.lib libiomp5md.lib

    \>mpiexec -np 4 cl_solver_sym_sp_0_based_c.exe
     
  4. Build the code, set the correct MKL_BLACS_MPI environment variable, and run the application in dynamic linking mode:
    \>mpicc -DOMPI_IMPORTS -DOPAL_IMPORTS -DORTE_IMPORTS cl_solver_sym_sp_0_based_c.c /link mkl_blacs_lp64_dll.lib mkl_intel_lp64.lib mkl_intel_thread.lib mkl_core.lib libiomp5md.lib

    \>set MKL_BLACS_MPI=CUSTOM (*)

    \>mpiexec -np 4 cl_solver_sym_sp_0_based_c.exe

 

        *) Alternatively, you can set MKL_BLACS_MPI=mkl_blacs_custom_lp64.dll.

 

        **) The compiler flags -DOMPI_IMPORTS -DOPAL_IMPORTS -DORTE_IMPORTS were needed because of a peculiarity of our Open MPI* installation; they may or may not be needed in your specific case.