Using Microsoft* Network Direct SPI with Intel® MPI Library

The Intel® MPI Library version 4.1 Update 1 for Windows* OS now officially supports the Network Direct SPI.

InfiniBand* is supported by the Intel® MPI Library for Windows* OS through the DAPL fabric, so the DAPL libraries need to be installed. DAPL is included in the OpenFabrics* winOFED* distribution and can be selected during installation. If a vendor's OFED is installed instead, you will probably need to install the DAPL package additionally.

Check that the required libraries are present:

C:\>dir /B %SYSTEMROOT%\System32\dapl2-ND.dll
dapl2-ND.dll

C:\>dir /B %SYSTEMROOT%\System32\dat2.dll
dat2.dll
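The two `dir` checks above can also be scripted. The sketch below is an illustrative helper (not part of the Intel MPI Library): the function name `missing_dapl_dlls` and the fallback path `C:\Windows` are assumptions for this example.

```python
import os

# The DAPL libraries that must be present in System32 (from the checks above).
REQUIRED_DLLS = ["dapl2-ND.dll", "dat2.dll"]

def missing_dapl_dlls(system32_files):
    """Return the required DAPL DLLs absent from a System32 directory listing.

    Comparison is case-insensitive, matching Windows file-name semantics.
    """
    present = {name.lower() for name in system32_files}
    return [dll for dll in REQUIRED_DLLS if dll.lower() not in present]

if __name__ == "__main__":
    # %SYSTEMROOT% normally points at C:\Windows; the fallback is an assumption.
    system32 = os.path.join(os.environ.get("SYSTEMROOT", r"C:\Windows"), "System32")
    try:
        files = os.listdir(system32)
    except OSError:
        files = []
    missing = missing_dapl_dlls(files)
    if missing:
        print("Missing DAPL libraries:", ", ".join(missing))
    else:
        print("All required DAPL libraries found.")
```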

Also check that IP addresses are assigned to the corresponding IP-over-InfiniBand (IPoIB) adapters. For example:

C:\>ipconfig /all | more
...
Ethernet adapter Ethernet 6:

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Mellanox ConnectX IPoIB Adapter #2
...
   IPv4 Address. . . . . . . . . . . : 192.168.2.10(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
...
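For many nodes, scanning `ipconfig /all` output by eye gets tedious. The following sketch parses that output and reports the IPv4 address of each IPoIB adapter; the function name and the assumption that the adapter description contains "IPoIB" (as in the Mellanox example above) are mine, not Intel's.

```python
import re

def ipoib_addresses(ipconfig_text):
    """Map each IPoIB adapter description to its IPv4 address.

    Parses the text produced by `ipconfig /all`. Returns an empty dict
    when no IPoIB adapter has an IPv4 address configured.
    """
    adapters = {}
    desc = None
    for line in ipconfig_text.splitlines():
        # Lines look like "Description . . . : Mellanox ConnectX IPoIB Adapter #2"
        m = re.match(r"\s*Description[ .]*: (.+)", line)
        if m:
            desc = m.group(1).strip()
            continue
        # Lines look like "IPv4 Address. . . : 192.168.2.10(Preferred)"
        m = re.match(r"\s*IPv4 Address[ .]*: ([\d.]+)", line)
        if m and desc and "IPoIB" in desc:
            adapters[desc] = m.group(1)
    return adapters
```

Feeding it the sample output above should yield the IPoIB adapter with its 192.168.2.10 address.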

Network Direct can then be enabled by setting I_MPI_DAPL_PROVIDER=ND0. For example:

C:\>mpiexec -genv I_MPI_DEBUG 2 -genv I_MPI_DAPL_PROVIDER ND0 -n 1 -host nnlmpicl202 S:\test.exe : -n 1 -host nnlmpicl203 S:\test.exe
[0] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ND0
[1] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ND0
[0] MPI startup(): DAPL provider ND0
[1] MPI startup(): DAPL provider ND0
[0] MPI startup(): dapl data transfer mode
[1] MPI startup(): dapl data transfer mode
[0] MPI startup(): Internal info: pinning initialization was done
[1] MPI startup(): Internal info: pinning initialization was done
Hello world: rank 0 of 2 running on NNLMPICL202
Hello world: rank 1 of 2 running on NNLMPICL203 
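When launching across more than a couple of nodes, typing the colon-separated per-host sections by hand becomes error-prone. A small helper can assemble a command line like the one above; this is an illustrative sketch, not an Intel-provided tool, and `mpiexec_command` is a hypothetical name.

```python
def mpiexec_command(genv, hosts, executable):
    """Build an mpiexec command line in the style of the example above:
    -genv NAME VALUE pairs first, then one "-n 1 -host <node> <exe>"
    section per node, joined with ":" (mpiexec's section separator).
    """
    parts = ["mpiexec"]
    for name, value in genv.items():
        parts += ["-genv", name, str(value)]
    sections = ["-n 1 -host {} {}".format(host, executable) for host in hosts]
    return " ".join(parts) + " " + " : ".join(sections)
```

With the debug level and DAPL provider from the example, this reproduces the command line shown above for the two nodes.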

For more complete information about compiler optimizations, see the Optimization Notice.