I manage a legacy codebase that has been built with the Intel compiler and the MPI/Pro library for years, but over the last couple of years we have been trying to convert from MPI/Pro to Intel MPI. To date, we have attempted the migration three times using three different versions of Intel MPI, and every time we have hit a different roadblock. I am trying again and have run into yet another roadblock, and I have run out of ideas for resolving it. The code appears to compile fine, but when I run it I get the following runtime error:
Hello everyone, I am new to cluster programming in MPI. I have a master-slave program structure. Can anyone tell me how to execute my MPI program on this type of setup?
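For reference, a master-slave program is normally launched the same way as any other MPI program; the master/slave split happens inside the code based on rank. A minimal launch sketch, assuming a hypothetical executable ./master_slave and a hostfile hosts.txt (both names are placeholders, not from the original post):

```shell
# Hypothetical launch: 8 ranks spread across the hosts listed in
# hosts.txt (one hostname per line). By convention, rank 0 acts as
# the master and ranks 1..N-1 act as the slaves; mpirun itself does
# not know or care about the master/slave roles.
mpirun -n 8 -machinefile hosts.txt ./master_slave
```

The -machinefile flag is the same mechanism shown in the mpiexec command later in this thread; on some installations the equivalent option is spelled -hostfile.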
I'm having some trouble running mpirun on my computer. The command mpirun -V returns the following:
/phys/sfw/intel/composer_xe_2013-09/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpirun: 96: .: Can't open /phys/sfw/intel/composer_xe_2013-09/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpivars.sh
We are intermittently seeing this error message when running an MPI job with the latest MPI Run-Time Library V4:
/usr/diags/mpi/impi/4.1.1.036/bin64/mpiexec -genv LD_LIBRARY_PATH /usr/diags/mpi/impi/4.1.1.036/lib64 -machinefile /tmp/mymachlist.103060.run -n 32 /usr/diags/mpi/intel/intel/bin/olconft.intel RUNTIME=2
mpdroot: cannot connect to local mpd at: /tmp/mpd2.console_root
probable cause: no mpd daemon on this machine
possible cause: unix socket /tmp/mpd2.console_root has been removed
mpiexec_A00A6D99 (__init__ 1524): forked process failed; status=255
I know that if I buy an Intel Cluster Studio with a two-user floating licence,
only two people would be able to compile their MPI codes simultaneously.
What I want to know is:
1) How many users can execute their MPI jobs simultaneously?
2) If I have codes compiled with the Intel C compiler from the same Cluster Studio, how many people can execute them simultaneously?
Thanks in advance.
Is it possible to query the chosen fabric and maybe even other related settings from within the application?
I am aware that some of this information can be gathered by setting the I_MPI_DEBUG environment variable appropriately, but I was wondering whether this information is accessible from within the application itself as well.
I guess another solution would be to query the environment variables, but that would only work if they are set explicitly and not if the default settings were used.
On a cluster of Sandy Bridge nodes connected by FDR IB, we are adding two Intel Phis
per node. There are two possible PCIe slot assignments for the two Phis vs. the IB HCA:
a) Both Phis go to PCIe slots with lanes attaching to the same processor socket, and the IB HCA
stays on its own on the PCIe lanes of the other socket, or
b) One Phi and the IB HCA go to the PCIe lanes of the same socket, and the other Phi stays by
itself on the lanes that go to the other processor socket.
I am having serious trouble installing the latest version of the MPI library (4.1.1.036) on Ubuntu 12.04 64-bit. See the attached file for the detailed install log.
Any help or hints?