Dear Intel HPC community,
I am trying to pin 6 ranks on a dual-socket (12 cores per socket) node, with 4 OpenMP threads per MPI rank. I set I_MPI_PIN_DOMAIN=omp:compact, but I get this I_MPI_DEBUG output:
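(For reference, the setup I describe corresponds roughly to the launch sketched below; ./a.out and the export syntax are placeholders, and I_MPI_DEBUG is only raised so that the pinning map gets printed.)

    export OMP_NUM_THREADS=4
    export I_MPI_PIN_DOMAIN=omp:compact
    export I_MPI_DEBUG=4
    mpirun -n 6 ./a.out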
I have found that MPI_REDUCE does not correctly perform a sum reduction on real(16) variables.
Here is a simple code:
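(What follows is a minimal sketch of such a reproducer rather than my exact source; it assumes the mpi Fortran module and MPI_REAL16, the optional MPI datatype matching real(16).)

    program reduce_real16
      use mpi
      implicit none
      integer  :: ierr, rank, nranks
      real(16) :: x, total

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

      ! Each rank contributes 1.0 in quad precision; the reduced sum on
      ! rank 0 should therefore equal the number of ranks.
      x     = 1.0_16
      total = 0.0_16
      call MPI_Reduce(x, total, 1, MPI_REAL16, MPI_SUM, 0, MPI_COMM_WORLD, ierr)

      if (rank == 0) print *, 'expected ', real(nranks, 16), ' got ', total

      call MPI_Finalize(ierr)
    end program reduce_real16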
Can someone guide me on how to configure Open MPI with Omni-Path? We have a 24-port Intel Omni-Path managed switch.
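From the Open MPI documentation I gather that configure has a --with-psm2 option for the Omni-Path PSM2 library. Is something along these lines the right starting point (the install prefix, process count, and ./a.out are placeholders)?

    ./configure --prefix=/opt/openmpi --with-psm2
    make -j && make install
    # select the PSM2 path explicitly at run time
    mpirun --mca pml cm --mca mtl psm2 -np 24 ./a.out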
Dear Intel Team,
Intel Parallel Studio Cluster Edition, 2017 Update 5, on CentOS 7.3
Dear Intel MPI Gurus,
Here is a Friday post with a sufficient lack of information that it will probably be impossible to answer. I have some older Fortran code whose performance I'm trying to improve.