I have a single-node cluster with 4 sockets and 32 cores in total. The system runs Red Hat 6.3 with Intel MPI 4 update 3, and I am using Slurm to start MPI jobs. Whenever I run multiple MPI jobs on the node, they all end up running on the same processors, and each job spreads over all the cores in the node. For example: I started the first MPI job through Slurm requesting 8 cores on the node, and I noticed that the first MPI task ran on CPUs 0-3, the second MPI task on CPUs 4-7, and so on, with the last task on CPUs 28-31. Each MPI task used 4 cores instead of 1. I then started a second job with 8 cores and saw the same behavior: it ran on the same 32 CPUs as the first job.
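For reference, this is roughly how each job is submitted (the script and application names below are only placeholders, not my real ones):

    #!/bin/bash
    #SBATCH --nodes=1       # everything runs on the single node
    #SBATCH --ntasks=8      # 8 MPI ranks, expecting one core each
    # ./my_app stands in for the actual MPI application
    mpirun -np 8 ./my_app

The CPU ranges mentioned above are what I see when I look at each rank's affinity mask (e.g. with taskset -cp on the rank's PID).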
Is there a way to tell mpirun, when launched through Slurm, to set the task affinity correctly for each run so that it uses only the processors that are idle according to Slurm (i.e., the ones Slurm has actually allocated to that job)?
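I am guessing the answer involves something like the Intel MPI pinning variables or letting Slurm do the binding itself, along these lines, but I have not confirmed that any of this does what I want (the values shown are only illustrative):

    # Intel MPI pinning controls: pin each rank to a single core
    export I_MPI_PIN=1
    export I_MPI_PIN_DOMAIN=core
    # ...or let Slurm bind the tasks when launching
    srun --ntasks=8 --cpu_bind=cores ./my_app

Any pointer to the right combination of Slurm options and Intel MPI settings would be appreciated.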