pmi_proxy: Command not found

Hello, I am very new to this. I installed the MPI Fortran package and it was working properly, but all of a sudden I am now getting the following errors. Can anyone please help?

thank you

$ mpirun -np 30 ./a.out

/product/Fortran_MPI/intel64/bin/pmi_proxy: Command not found.

Ctrl-C caught... cleaning up processes

[press Ctrl-C again to force abort]

[mpiexec@l1aebshpc1.bank-banque-canada.ca] HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:101): one of the processes terminated badly; aborting

[mpiexec@l1aebshpc1.bank-banque-canada.ca] HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:18): bootstrap device returned error waiting for completion

[mpiexec@l1aebshpc1.bank-banque-canada.ca] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:521): bootstrap server returned error waiting for completion

[mpiexec@l1aebshpc1.bank-banque-canada.ca] main (./ui/mpich/mpiexec.c:548): process manager error waiting for completion

Steve Lionel (Intel):

I moved your thread to the clustering technology forum as it is not an Intel Fortran issue - this error comes from MPICH.

Steve
James Tullos (Intel):

Hi Mohab,

What is the value of the environment variable I_MPI_ROOT? Did you source the appropriate mpivars script (usually in /opt/intel/impi//intel64/bin/mpivars.[c]sh)?

Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
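
A sketch of the check James is asking for (sh syntax; the exact mpivars path is site-specific and is left out here): if I_MPI_ROOT is unset, the mpivars script has not been sourced in this shell.

```shell
# Was mpivars sourced? If not, I_MPI_ROOT will be unset.
echo "I_MPI_ROOT=${I_MPI_ROOT:-<unset>}"
```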

These are the values set in my /product/Fortran_MPI/intel64/bin/mpivars.csh:

setenv I_MPI_ROOT /product/Fortran_MPI

setenv PATH $I_MPI_ROOT/intel64/bin

setenv PATH $I_MPI_ROOT/intel64/bin:$PATH

setenv LD_LIBRARY_PATH $I_MPI_ROOT/intel64/lib

setenv LD_LIBRARY_PATH $I_MPI_ROOT/intel64/lib:$LD_LIBRARY_PATH

setenv MANPATH $I_MPI_ROOT/man:`manpath`

setenv MANPATH $I_MPI_ROOT/man:$MANPATH

[44] /product/Fortran_MPI :
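
A minimal sanity check, written in sh syntax (the file quoted above is csh), that the settings above make pmi_proxy resolvable from PATH; the path value is the one quoted in the thread.

```shell
# Reproduce the PATH setting from mpivars, then ask the shell to resolve pmi_proxy.
I_MPI_ROOT=/product/Fortran_MPI            # value from the thread
PATH="$I_MPI_ROOT/intel64/bin:$PATH"
if command -v pmi_proxy >/dev/null 2>&1; then
    echo "pmi_proxy found at: $(command -v pmi_proxy)"
else
    echo "pmi_proxy not found in PATH"
fi
```

If this prints "not found" on any node, that node will produce exactly the "pmi_proxy: Command not found" error seen above.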

James Tullos (Intel):

Hi Mohab,

What do you get from running the command:

mpirun -n 30 hostname

Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools

When I execute mpirun -n 30 hostname, it produces the output below and does not return me a prompt.

/product/Fortran_MPI/intel64/bin/pmi_proxy: Command not found.

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

l1aebshpc2.bank-banque-canada.ca

and it just hangs there; I do not get a prompt back.

I have also attached the verbose output in case it helps.

Attachment: mpirun_n_30_hostname.txt (41.98 KB)
James Tullos (Intel):

Hi Mohab,

It appears that l1aebshpc3 is not finding the MPI bin folder. What is the output from

ls /product/Fortran_MPI/intel64/bin

on that node?

Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools
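
The check James suggests can be scripted; a sketch that tests for the MPI bin directory before launching (on a cluster this would be run on each node, for example via ssh; the path is the one from the thread):

```shell
# Fail early if the MPI install directory is not visible on this node.
dir=/product/Fortran_MPI/intel64/bin
if [ -d "$dir" ]; then
    echo "present: $dir"
else
    echo "missing: $dir (is the /product mount up on this node?)"
fi
```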

Wow, now I feel silly: the mount was not present. I mounted /product, and mpirun -n 30 hostname now works. When I run my test, a.out seems to be working too.

thank you very much.

James Tullos (Intel):

Hi Mohab,

Good to hear everything is working now. Feel free to let us know if you have any further concerns.

Sincerely,
James Tullos
Technical Consulting Engineer
Intel Cluster Tools

Feedback on my support? https://premier.intel.com/premier/feedback.aspx
Online assistance: http://support.intel.com/support/performancetools/
User forums: http://softwareforums.intel.com/
Product support info: http://www.intel.com/software/products/support

Hello!

I have built Open MPI 1.6 with the Intel compilers (2013 versions). Compilation was smooth; however, when I try to execute the simple program hello.c, I get:

mpirun -np 4 ./hello_c.x
[mpiexec@claudio.ukzn] HYDU_create_process (./utils/launch/launch.c:102): execvp error on file /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/pmi_proxy (No such file or directory)
[mpiexec@claudio.ukzn] HYD_pmcd_pmiserv_proxy_init_cb (./pm/pmiserv/pmiserv_cb.c:1177): assert (!closed) failed
[mpiexec@claudio.ukzn] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec@claudio.ukzn] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:358): error waiting for event
[mpiexec@claudio.ukzn] main (./ui/mpich/mpiexec.c:689): process manager error waiting for completion

Before that there was an additional error, since even the file mpivars.sh was not present in /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/. I managed to create one and it worked, but I have no clue how to generate the file pmi_proxy.

Thank you in advance for your help!

Gergana Slavova (Intel):

Hey Giuseppe,

The Intel compilers use the Intel MPI runtimes for their Co-array Fortran implementation, so they bundle the Intel MPI runtimes with each installation. This is what you're seeing here. Because you've only sourced the compilervars.{csh,sh} file, the only MPI library you're picking up is the Intel MPI runtime (which requires the pmi_proxy script).

If you'd like to use Open MPI instead, you need to set your PATH and LD_LIBRARY_PATH environment variables so its binaries and libraries are picked up by default. We provide the compilervars and mpivars scripts for the Intel Compiler and Intel MPI Library respectively, but I don't believe Open MPI ships anything similar.

You simply have to make sure you add the path to the Open MPI 'mpirun' command to PATH, as follows:

$ export PATH=:$PATH

Let us know how that works.

Regards,
~Gergana

Gergana Slavova
Technical Consulting Engineer
Intel® Cluster Tools
E-mail: gergana.s.slavova_at_intel.com
Gergana Slavova (Intel):

Sorry, I think my command got broken when I posted. Here it is:

$ export PATH=path_to_OpenMPI_mpirun:$PATH

~Gergana

Gergana Slavova
Technical Consulting Engineer
Intel® Cluster Tools
E-mail: gergana.s.slavova_at_intel.com
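
The reason a plain prepend works: PATH is searched left to right, so the first directory containing a given command wins. A tiny illustration (the directory names here are hypothetical):

```shell
# Prepend two hypothetical MPI install dirs; the leftmost one shadows the rest.
PATH="/opt/openmpi/bin:/opt/intel/impi/bin:$PATH"
echo "first PATH entry: $(echo "$PATH" | cut -d: -f1)"
```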

Hi Gergana,

thank you for your help. Unfortunately, adding the location of mpirun to the PATH variable did not help; it was already in the PATH anyway, since mpirun was installed in /usr/local/bin.

It seems as if Open MPI compiled with the Intel compiler is still looking for the file pmi_proxy. As I mentioned before, it was also looking for mpivars.sh, so I added the following mpivars.sh to /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/:

#!/bin/bash

# Prepend /usr/local/bin to PATH if it is not already there
if [ -z "`echo $PATH | grep /usr/local/bin`" ]; then
    export PATH=/usr/local/bin:$PATH
fi

# Likewise add /usr/local/lib to LD_LIBRARY_PATH if missing
if [ -z "`echo $LD_LIBRARY_PATH | grep /usr/local/lib`" ]; then
    if [ -n "$LD_LIBRARY_PATH" ]; then
        export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
    else
        export LD_LIBRARY_PATH=/usr/local/lib
    fi
fi

I am also attaching the config.log file that was generated while configuring Open MPI.

Attachment: config.txt (3.82 MB)
