How to pin application on the second processor?

I've got a machine with 2 processors, 8 cores each, which gives me a total of 16 physical cores. I want to launch an application on the second processor, cores 8-15. The application uses one MPI process and 8 OpenMP threads.

The documentation suggests using I_MPI_PIN_DOMAIN to control thread distribution. The value omp:compact pinned all threads on the first processor, but I didn't manage to find a way to move them to the second.

I have also tried launching the program without any MPI pinning options, using numactl instead. I've tried numactl on both mpiexec.hydra and the application itself, but the threads seem to ignore numactl.

So is there a way to solve my problem? Also, is there a way to specify which cores can be used by each process?

Hi Pavel,

Have you tried using I_MPI_PIN_PROCESSOR_LIST?  This should allow you to specify by core number.  If your second processor is numbered with cores 8-15, then I_MPI_PIN_PROCESSOR_LIST=8-15 should pin the process to those cores.  I would recommend using cpuinfo to check the core numbering, as it isn't always sequential.
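A minimal sketch of this suggestion, assuming the executable is called ./app (a placeholder name) and that cpuinfo confirms cores 8-15 sit on the second package:

```shell
# Check the core-to-package mapping first; the numbering is not always sequential.
cpuinfo

# Pin the single MPI rank to cores 8-15 on the second processor.
mpiexec.hydra -n 1 -env I_MPI_PIN_PROCESSOR_LIST 8-15 ./app
```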

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools

Yeah, but it doesn't work for the OpenMP threads: no matter how many cores I specify, all the threads are pinned to the one core allocated for the process. Maybe there is an option which allows specifying how many cores are allocated per process, like --cpus-per-proc in OpenMPI?

I suspect that if you wish to use numactl or taskset, you must set I_MPI_PIN=off.  In the past, we used mpiexec .... taskset .... app.....
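A sketch of that taskset route, assuming ./app is the application (hypothetical name) and cores 8-15 are the second processor; I_MPI_PIN is disabled so Intel MPI's own pinning does not override the mask:

```shell
# Turn off Intel MPI's pinning, then restrict the process to cores 8-15.
mpiexec.hydra -n 1 -env I_MPI_PIN off taskset -c 8-15 ./app
```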

The suggestion about setting separate PROCESSOR_LIST strings should work with the -env option of mpiexec, or inside a script.

If you have a cluster resource manager which allows you to give a job a subset of the cores on a node, that would seem to be a solution.  As far as I know, after some debate, the choice of resource manager has been left entirely up to the cluster provider and sysadmins.

Hi Pavel,

Everything is controlled through the process pinning mechanism.  And yes, you will need to use I_MPI_PIN_DOMAIN, as this allows a process to be pinned to multiple cores.  Using the masklist option will let you specify which cores are available to each process.  In your case, try I_MPI_PIN_DOMAIN=[FF00].
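To see why [FF00] selects the second processor: the masklist value is a hexadecimal bitmask in which bit i enables logical core i, so 0xFF00 sets bits 8 through 15. A quick check in plain shell arithmetic:

```shell
# 0xFF00 = 65280; bits 8-15 are set, i.e. cores 8-15 are in the domain.
mask=$((0xFF00))
printf 'mask = %d\n' "$mask"

# List which core bits are set in the mask.
i=0
while [ "$i" -lt 16 ]; do
  if [ $(( (mask >> i) & 1 )) -eq 1 ]; then
    printf 'core %d\n' "$i"
  fi
  i=$((i + 1))
done
```

This prints "core 8" through "core 15", matching the second processor in your layout.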

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools

Yep, explicitly turning off I_MPI_PIN did the trick. I missed that the default was on. Thank you!

How should the PROCESSOR_LIST option work for my case? Could you give an example, please? As far as I understand, it requires I_MPI_PIN to be on, which excludes the usage of numactl/taskset, but to specify cores for threads I would need to use the I_MPI_PIN_DOMAIN option, which would ignore PROCESSOR_LIST...

And how should the script work? Something like this:

mpiexec script

script:
export PROCESSOR_LIST=$P
./app

right? But how would I determine P? It should depend on the rank. Are there any means to find out the rank?

Thank you, that mask worked!

As far as I understand, I can specify several masks, one for each subset of the cores on the node, plus the order in which processes are pinned to them, right? In my case [FF00,00FF] would give me two domains: the 2nd and the 1st processors.

And what should I do in the case of different nodes? For example, I have some nodes with 16 cores and some nodes with 8 cores, and I would like to make the domains on the 16-core nodes twice as big as on the 8-core nodes. Is there a way to say: 'if num_cores == 16: domain=[ff00,00ff]; else: domain=[f0,0f]'?

Hi Pavel,

That depends on whether you're on Windows* or Linux*.  On Linux* you can use something like this:

mpirun -n 2 -host 16corehost -env I_MPI_PIN_DOMAIN [FF00,00FF] ./a.out : -n 2 -host 8corehost -env I_MPI_PIN_DOMAIN [F0,0F] ./a.out

That should get the behavior you are seeking.  If you are on Windows* (and this also works on Linux*), you will need to use a configuration file with a similar setup, using a different I_MPI_PIN_DOMAIN for each host type.
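A sketch of that configuration-file variant: each line of the file holds the mpiexec arguments for one group of ranks, and the file is passed with -configfile. The host names (16corehost, 8corehost) and ./a.out are taken from the command line above; the file name config.file is a placeholder:

```shell
# config.file - one argument set per host type (hypothetical file name).
cat > config.file <<'EOF'
-n 2 -host 16corehost -env I_MPI_PIN_DOMAIN [FF00,00FF] ./a.out
-n 2 -host 8corehost -env I_MPI_PIN_DOMAIN [F0,0F] ./a.out
EOF

mpiexec.hydra -configfile config.file
```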

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools
