If we are using an evaluation version of Intel MPI/compiler/MKL (cluster tools), how many cores/processes can we run HPL on?
I have a problem launching processes when multiple MPI versions are installed. The processes worked before I installed the latest MPI 5.0.3.048:
C:\Program Files (x86)\Intel\MPI\4.1.3.047>mpiexec -wdir "Z:\test" -mapall -hosts 10 n01 6 n02 6 n03 6 n04 6 n05 6 n06 6 n07 6 n08 6 n09 6 n10 6 Z:\test
However, after I installed MPI 5.0.3.048, the following error is displayed when I launch mpiexec in the 4.1.3.047 environment:
Aborting: unable to connect to N01, smpd version mismatch
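One possible explanation (an assumption, not confirmed by the post) is that installing 5.0.3.048 replaced the smpd service on the nodes, so the 4.1.3 mpiexec can no longer talk to it. A hedged sketch of reinstalling the matching service, run from an elevated prompt in the 4.1.3 installation's bin directory on each node, assuming its smpd.exe supports the MPICH2-style -remove/-install options:

```
:: Hypothetical recovery sketch (Windows, run as administrator on each node).
:: Path and smpd options are assumptions based on MPICH2-derived managers.
cd "C:\Program Files (x86)\Intel\MPI\4.1.3.047"
smpd -remove
smpd -install
```

Whichever version's smpd service is registered, the mpiexec used to launch jobs generally has to come from the same release.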
Has Intel made a statement as to the last known good version of Red Hat that supports Intel MPI 3.2.2.006? We have a new cluster with 20 cores per node and have observed a Fortran system call failing when more than 15 cores per node are used. This machine is running Red Hat 6.6 x86_64.
Alternatively, are there known conditions under which Intel MPI 3.2.2.006 will fail?
In MPI 5.0.3, MPI_TAG_UB is reported as 1681915906, but internally the actual upper bound is 2^29 = 536870912, as demonstrated by the attached code.
The same code runs just fine in MPI 4.0.3.
Just letting you guys know about the problem.
Hope to see a fix soon. Thanks.
2-1. Source Code
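The attached code is not reproduced here, but a minimal sketch of the kind of check involved might look like the following (assuming a standard MPI installation; the 2^29 comparison value comes from the report above, and the program itself is an illustration, not the original attachment):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int *tag_ub;  /* MPI returns a pointer to the attribute value */
    int flag;

    MPI_Init(&argc, &argv);

    /* Query the upper bound on message tags advertised by the library. */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub, &flag);
    if (flag)
        printf("advertised MPI_TAG_UB = %d\n", *tag_ub);

    /* A send/recv pair using a tag just below the advertised bound would
       expose the mismatch if the library internally rejects tags >= 2^29. */

    MPI_Finalize();
    return 0;
}
```

Compile with mpiicc (or mpicc) and run under mpiexec; on the affected 5.0.3 build the printed value would exceed the tag range the library actually accepts.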
It looks like on Linux the Intel MPI runtime hardcodes the I_MPI_ROOT path in mpivars.sh, e.g.:
I_MPI_ROOT=/opt/intel/impi/4.1.3.049; export I_MPI_ROOT
On Windows, on the other hand, the path is generated dynamically:
Is there any reason why the path cannot also be generated automatically on Linux?
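For illustration, a hedged sketch of how a mpivars.sh-style script could derive I_MPI_ROOT from its own location instead of hardcoding it. This is not the shipped script; the `<root>/intel64/bin/mpivars.sh` layout is an assumption:

```shell
# Hypothetical dynamic version of mpivars.sh (not Intel's shipped script).
# Resolve the directory containing this script, then walk up two levels
# to the installation root (assumed layout: <root>/intel64/bin/mpivars.sh).
script_dir=$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)
I_MPI_ROOT=$(cd "$script_dir/../.." && pwd)
export I_MPI_ROOT
export PATH="$I_MPI_ROOT/intel64/bin:$PATH"
echo "$I_MPI_ROOT"
```

A script written this way would keep working after the installation is moved or mounted at a different prefix, which is presumably the motivation behind the question.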
I just received my serial number for a single-user Intel Cluster Studio 2015 (Linux) license. I completed product registration and generated a license file. However, the license file generation page didn't show any information or steps on how to apply the license file.
I am currently using an evaluation version of Intel Cluster Studio 2015 on a Linux workstation, so I want to switch to the above license file. I copied the license file into the /opt/intel/license folder. Do I need to execute any command? Are there any guidelines or further info?
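For reference, a hedged sketch of the usual setup on Linux (the directory name is an assumption; Intel tools conventionally scan /opt/intel/licenses, with an "s", and any location named by the INTEL_LICENSE_FILE environment variable — no separate activation command is normally needed):

```shell
# Hypothetical sketch: make the license file visible to the tools
# (run with sufficient privileges). Filename is a placeholder.
mkdir -p /opt/intel/licenses
cp my_license.lic /opt/intel/licenses/
# Alternatively, point the tools at the file or directory explicitly:
export INTEL_LICENSE_FILE=/opt/intel/licenses
```

If the tools still report an evaluation or expired license after this, checking for a stale evaluation .lic file in the same directory is a reasonable next step.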
Under Linux, I built my MPI application with Intel MPI 5.0.2.044. Compilation succeeds, but the following errors occur when linking against the MPI library.
I am using Intel MPI's libmpifort.a and libmpi_mt.a to link my application.
LIBS = -L/opt/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64 -lmkl_intel_lp64 \
-lmkl_intel_thread -lmkl_core /net/rdnas/home/dingjun/intel/impi/5.0.2.044/intel64/lib/libmpi_mt.a /net/rdnas/home/dingjun/intel/impi/5.0.2.044/intel64/lib/libmpifort.a
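Without the actual linker messages it is only a guess, but one common cause of unresolved MPI symbols with static archives is link order: an archive must appear after the objects and archives that reference it, and libmpifort.a (the Fortran bindings) calls into the core library. A hedged sketch of the same line with the two archives swapped:

```make
# Hypothetical reordering: libmpifort.a before libmpi_mt.a, so the linker
# can resolve the Fortran bindings' references into the core MPI archive.
LIBS = -L/opt/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64 -lmkl_intel_lp64 \
       -lmkl_intel_thread -lmkl_core \
       /net/rdnas/home/dingjun/intel/impi/5.0.2.044/intel64/lib/libmpifort.a \
       /net/rdnas/home/dingjun/intel/impi/5.0.2.044/intel64/lib/libmpi_mt.a
```

Alternatively, linking through the Intel MPI compiler wrappers (mpiifort/mpiicc) selects the appropriate MPI libraries in the correct order automatically.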
I am developing a fault-tolerant, MPI-like communication layer, and this layer is slow at the moment.
Can I learn something about the internal Intel MPI architecture to understand the basic principles of transferring large amounts of data over different networks?
Can I also contact the developers of the Intel MPI library to share experiences?
The Intel® MPI Library is a high-performance interconnect-independent multi-fabric library implementation of the industry-standard Message Passing Interface, v3.0 (MPI-3.0) specification. This package is for MPI users who develop on and build for Intel® 64 architectures on Linux* and Windows*, as well as customers running on the Intel® Xeon Phi™ coprocessor on Linux*. You must have a valid license to download, install, and use this product.
Recently, we upgraded our system and have installed Mellanox OFED 2.2-1 in order to support native MPI calls between Xeon Phis. Our system is a mixture of non-Phi nodes and Phi nodes.