Intel MPI DAPL fabric error


Hi, I'm trying to run the HPL benchmark on an Ivy Bridge Xeon processor with two Xeon Phi 7120P MIC cards. I'm using the offload xhpl binary from Intel Linpack.

It throws the following error:

$ bash runme_offload_intel64
This is a SAMPLE run script.  Change it to reflect the correct number
of CPUs/threads, number of nodes, MPI processes per node, etc..

[1] MPI startup(): dapl fabric is not available and fallback fabric is not enabled
[0] MPI startup(): dapl fabric is not available and fallback fabric is not enabled

I searched for the same error on this forum and learned that I should unset the I_MPI_DEVICES variable. That let HPL run, but performance is very low: only 50% efficiency. On another node with the same hardware, HPL efficiency is 84%. Below is a short excerpt of the openibd status output from both systems, which shows the difference.

ON NODE with HPL 84%                          ON NODE with HPL 50%

Currently active MLX4_EN devices:             Currently active MLX4_EN devices:
                                              eth0

Can someone guide me on how to resolve this?
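As a hedged sketch of the workaround discussed here (variable names taken from Intel MPI's standard `I_MPI_*` environment set; exact behavior depends on your Intel MPI version): rather than unsetting variables, you can explicitly select a fabric that does not require DAPL, or permit fallback instead of aborting.

```shell
# Sketch, assuming Intel MPI's documented fabric-selection variables:
# use shared memory within a node and TCP between nodes, so the run
# does not depend on DAPL being available.
export I_MPI_FABRICS=shm:tcp

# Alternatively, keep the default fabric list but allow Intel MPI to
# fall back to another fabric instead of failing at startup.
export I_MPI_FALLBACK=1

echo "$I_MPI_FABRICS"
```

On a single node, `shm` alone is enough; the network fabric only matters once ranks span multiple nodes.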



From what I see, you are only running one rank, independently on each node.  Is this your intent?

What InfiniBand* devices do you have in your cluster?

James Tullos
Technical Consulting Engineer
Intel® Cluster Tools

My intent was to find out whether the DAPL fabric error is causing the low HPL performance. The same benchmark was run separately on two nodes with identical hardware and software configuration; one gives 84% efficiency and the other 50%.

First attempt: I executed the benchmark. It exited immediately without running, throwing the DAPL fabric error.

Second attempt: I ran "unset I_MPI_FABRIC; unset I_MPI_DEVICES". The benchmark then executed, but performance is only 50%.

My questions: Why is there a DAPL fabric error? What is causing the low performance?

The error you are getting indicates that you do not have DAPL* available on this system.  This will lower performance if you are using multiple nodes.  But from what you're saying, it sounds like you are only using one node.  If you are only using one node, the performance will be unaffected by the network.

Yes, this is a single system benchmark. 

May I know how to check whether Turbo mode is enabled or disabled on Linux, without rebooting or going into the BIOS?

I don't know how you could check Turbo Mode from inside of the operating system other than by attempting to activate it.
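For what it's worth, on kernels using the intel_pstate frequency driver there is a sysfs entry that reflects the Turbo Boost setting. A hedged sketch follows; the path is an assumption and differs on systems using other cpufreq drivers, so the script guards against its absence.

```shell
# Sketch: query Turbo Boost state without rebooting, assuming the
# intel_pstate driver is in use (path varies with other drivers).
f=/sys/devices/system/cpu/intel_pstate/no_turbo

if [ -r "$f" ]; then
    # In this file, 0 means Turbo is enabled and 1 means it is disabled.
    if [ "$(cat "$f")" -eq 0 ]; then
        turbo_state="turbo enabled"
    else
        turbo_state="turbo disabled"
    fi
else
    # Fall back gracefully on systems without intel_pstate.
    turbo_state="intel_pstate sysfs entry not found"
fi

echo "$turbo_state"
```

Tools such as `turbostat` can also report the frequencies actually reached, which indirectly confirms whether Turbo is active under load.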

Since you're only running on a single node with HPL, I'm going to move this thread to the Intel® Math Kernel Library forum.

Whenever I run the command below:


time mpiexec.hydra -machinefile hostfile2 -n 96 ./a.out >out.txt 


bash: /opt/intel//impi/ No such file or directory
[mpiexec@nits-hpc] HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:239): assert (!closed) failed
[mpiexec@nits-hpc] ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:127): unable to send SIGUSR1 downstream
[mpiexec@nits-hpc] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec@nits-hpc] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:435): error waiting for event
[mpiexec@nits-hpc] main (./ui/mpich/mpiexec.c:901): process manager error waiting for completion.


Please help.


Your problem seems to be related to MPI. Does your cluster have InfiniBand, and is it correctly installed and configured? First, try to run a simple MPI program on your cluster with the same configuration. Fix any MPI and/or InfiniBand issues before you try HPL again.
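As a sketch of the sanity checks suggested above (tool names assumed from a standard OFED install; adjust to whatever your distribution actually ships), it helps to first confirm the InfiniBand userspace tools are even present on each node before interpreting fabric errors:

```shell
# Check that common InfiniBand diagnostic tools exist before digging
# into fabric state; each lookup is guarded so the script degrades
# gracefully on nodes without OFED installed.
report=""
for cmd in ibstat ibv_devinfo ibv_rc_pingpong; do
    if command -v "$cmd" >/dev/null 2>&1; then
        report="$report found:$cmd"
        echo "found $cmd"    # tool available; run it to inspect port state
    else
        report="$report missing:$cmd"
        echo "missing $cmd"  # OFED userspace tools not installed here
    fi
done
```

If the tools exist, `ibstat` showing a port in the Active state (and `ibv_devinfo` listing the HCA) is a reasonable precondition before retrying an MPI run over InfiniBand.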

