Intel® Clusters and HPC Technology

How to kill an MPI program programmatically

Dear all,

I am having a problem terminating an Intel MPI program; the original problem is described in this thread:

http://stackoverflow.com/questions/32222878/intel-mpi-mpirun-does-not-te...

I am using impi/5.0.2.044/intel64, and my program is launched with "mpirun -machinefile mymachinefile ./myprogram"

I followed the suggestion to have the runtime execute "kill -<signal> <pid>", but it does not work for signals 1, 2, 9, or 15.
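
If the goal is to bring the job down from inside the program rather than from the shell, MPI_Abort is the portable route. A minimal sketch, where the trigger condition and the error code 42 are illustrative:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ask the MPI runtime to tear down every rank in the job,
       not just the calling process. */
    if (rank == 0) {
        fprintf(stderr, "fatal condition, aborting the whole job\n");
        MPI_Abort(MPI_COMM_WORLD, 42);  /* 42 is an arbitrary error code */
    }

    MPI_Finalize();
    return 0;
}

For external kills, it may also matter which process receives the signal: with Hydra-based launchers, signalling the mpirun process itself, rather than the individual ranks, is what normally triggers cleanup of the whole job.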

 

Need help making sense of NBC performance (MPI3)

Hello everyone,

I am fairly new to parallel computing, but am working on a legacy code that uses real-space domain decomposition for electronic structure calculations. I have spent a while modernizing the main computational kernel to hybrid MPI+OpenMP and upgraded the communication pattern to use a nonblocking neighborhood alltoallv for the halo exchange and a nonblocking allreduce for the other communication in the kernel. I have now started to focus on "communication hiding", so that computation and communication overlap.
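
The pattern I am aiming for looks roughly like the sketch below; the 1-D periodic Cartesian topology and the fixed halo sizes are stand-ins for the real decomposition:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* 1-D periodic Cartesian topology standing in for the real
       domain decomposition; each rank has two neighbours. */
    int dims[1] = { nprocs }, periods[1] = { 1 };
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    enum { NNBR = 2, HALO = 4 };
    double sendbuf[NNBR * HALO] = { 0 }, recvbuf[NNBR * HALO];
    int counts[NNBR] = { HALO, HALO };
    int displs[NNBR] = { 0, HALO };
    double local = 1.0, global;
    MPI_Request reqs[2];

    /* Start both nonblocking operations ... */
    MPI_Ineighbor_alltoallv(sendbuf, counts, displs, MPI_DOUBLE,
                            recvbuf, counts, displs, MPI_DOUBLE,
                            cart, &reqs[0]);
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   cart, &reqs[1]);

    /* ... do the interior work that needs no halo data here ... */

    /* ... and only then wait before touching halo-dependent points. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}

Whether anything actually overlaps depends on the library making asynchronous progress, so I plan to time the kernel with and without the start/wait split.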

Can each thread on Xeon Phi be given private data areas in the offload model

Hi,

I want to calculate a Jacobian matrix, which is a sum of 960 (to keep it simple) 3x3 matrices, by distributing the calculations of these 3x3 matrices to a Xeon Phi card. The calculation of the 3x3 matrices uses a third-party library whose subroutines use an integer vector not only for the storage of parameter values but also to write and read intermediate results. It is therefore necessary for each task to have this integer vector protected from other tasks. Can this be achieved at the physical-core level, or even per thread (each Xeon Phi has 60x4 = 240 threads)?
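
What I have in mind is something along these lines, where compute_3x3 is a stand-in for the library routine and LWORK is a guess at the work-vector size:

#include <stdio.h>

#define NMAT  960
#define LWORK 128   /* hypothetical size of the library's integer work vector */

/* Stand-in for the third-party routine; it reads and writes
   intermediate results in iwork, which is why the vector must
   be private to each task. */
__attribute__((target(mic)))
static void compute_3x3(int idx, int *iwork, double m[9]) {
    for (int k = 0; k < 9; ++k) {
        iwork[k] = idx + k;            /* fake intermediate state */
        m[k] = (double)iwork[k];
    }
}

int main(void) {
    double sum[9] = { 0.0 };

    /* Offload the reduction; everything declared inside the
       parallel region (iwork, local, m) is private per thread. */
    #pragma offload target(mic:0) inout(sum)
    #pragma omp parallel
    {
        int    iwork[LWORK];
        double local[9] = { 0.0 }, m[9];

        #pragma omp for
        for (int i = 0; i < NMAT; ++i) {
            compute_3x3(i, iwork, m);
            for (int k = 0; k < 9; ++k) local[k] += m[k];
        }

        /* Combine the per-thread partial sums one thread at a time. */
        #pragma omp critical
        for (int k = 0; k < 9; ++k) sum[k] += local[k];
    }

    printf("J[0] = %g\n", sum[0]);
    return 0;
}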

mpirun with bad hostname hangs with [ssh] <defunct> until Enter is pressed

We have been experiencing hangs with our MPI-based application, and our investigation led us to observe the following behaviour of mpirun:

"mpirun -n 1 -host <good_hostname> hostname" works as expected.

"mpirun -n 1 -host <bad_hostname> hostname" hangs, during which ps shows:

Varying Intel MPI results using different topologies

Hello,

I am compiling and running a massive electronic structure program on an NSF supercomputer.  I am compiling with the intel/15.0.2 Fortran compiler and impi/5.0.2, the latest-installed Intel MPI library.

The program has hybrid parallelization (MPI and OpenMP).  When I run the program on a molecule using 4 MPI tasks on a single node (no OpenMP threading anywhere here), I obtain the correct result.

However, when I spread the 4 tasks across 2 nodes (still 4 total tasks, just 2 on each node), I get what seem to be numerical- or precision-related errors.
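
One way placement alone can move a result: floating-point addition is not associative, and MPI reductions are allowed to use a different operation order when the ranks are laid out differently. A tiny demonstration:

#include <stdio.h>

int main(void) {
    double a = 1.0e16, b = -1.0e16, c = 1.0;
    /* Same three summands, two legal reduction orders,
       two different answers. */
    printf("(a + b) + c = %g\n", (a + b) + c);   /* prints 1 */
    printf("a + (b + c) = %g\n", a + (b + c));   /* prints 0 */
    return 0;
}

I understand Intel MPI's I_MPI_ADJUST_* environment variables can pin the collective algorithms, which should let me test this hypothesis. If the differences I see are far larger than a few last bits, though, that would point at a real bug rather than rounding, so I am not sure this explains it.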

Debugging Fortran MPI codes in VS2012 and Intel MPI

Hi,

Before this I was using VS2008 with ifort 11 and MPICH.

I followed the first method (attaching to a currently running process, with one VS window for all selected MPI processes) from:

http://wiki.rac.manchester.ac.uk/community/MPI/VisualStudio_mpich2_howto

It worked, but fails for np >= 4; this seems to be an MPICH problem.

However, using the new setup, I can't get it to work, even with np = 1 or 2. Error is:
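
The fallback I may try is the usual attach-loop trick: park every rank until the debugger releases it, then attach from VS. A sketch in C (the same idea works from Fortran with a sleep loop on a flag):

#include <mpi.h>
#include <stdio.h>
#include <windows.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank prints its PID, then spins until a debugger that
       has attached sets `hold` to 0 (e.g. from the Watch window). */
    volatile int hold = 1;
    printf("rank %d: PID %lu waiting for debugger attach\n",
           rank, (unsigned long)GetCurrentProcessId());
    fflush(stdout);
    while (hold)
        Sleep(1000);

    /* ... the real program continues here once released ... */
    MPI_Finalize();
    return 0;
}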

MPICH 3.1.4 with Intel C++ compiler on OS X 10.10.3, error: dyld: Library not loaded: libiomp5.dylib

Hi friends:

I installed MPICH with the Intel C++ compiler on OS X. When I compile code linked against the MKL library and run it, I get this error:

mpicxx -mkl testcode.cpp -o testcode

mpiexec -n 3 ./testcode

dyld: Library not loaded: libiomp5.dylib

  Referenced from: /Users/.....

  Reason: image not found

 

How can I fix this?

thanks.

yongbei
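
The usual cause of this error is that the directory containing the Intel OpenMP runtime (libiomp5.dylib) is not on the dynamic linker's search path at run time. Assuming a default installation under /opt/intel (the path is an assumption), sourcing the compiler's environment script before running typically resolves it:

source /opt/intel/bin/compilervars.sh intel64

mpiexec -n 3 ./testcode

compilervars.sh extends DYLD_LIBRARY_PATH; the exact path depends on the installation.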

Seg fault when using a US NFS install of MPI 5.1.0.038 from a site in Russia

Hello,

One of my team members in Russia is accessing an NFS installation of MPI 5.1.0.038 located at a US site. When this team member runs the simple ring application test.c with four processes and one process per node, she encounters a segmentation fault. This does not happen for the team members based at US sites, and the seg fault does not occur when the application is executed on only a single node, the login node.

The test.c application was compiled by each team member in this way (in a user-specific scratch space in the US NFS allocation):

MPI_Comm_spawn strips empty strings from argv

Hi. I'm using Intel MPI 5 on Windows and have the following problem. I have a C++ application that spawns worker processes using MPI_Comm_spawn(), passing parameters via the argv argument. I have an array of strings that is passed this way; however, one of these strings is empty. When the worker process receives the argv array, the empty string has been removed from it.
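
A reduced reproducer looks like this; worker.exe and the argument values are placeholders:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* argv for the workers: note the empty string in the middle,
       which is gone by the time the workers read their argv. */
    char *spawn_argv[] = { "alpha", "", "gamma", NULL };
    MPI_Comm workers;

    MPI_Comm_spawn("worker.exe", spawn_argv, 2, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &workers, MPI_ERRCODES_IGNORE);

    MPI_Finalize();
    return 0;
}

A possible workaround would be to pass a sentinel token in place of the empty string and map it back inside the worker, but I would prefer the empty string to survive as-is.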
