[mvapich-discuss] Problem: MPI process (rank: 0, pid: 3109) exited with status 1...

li.luo at siat.ac.cn
Mon Jul 22 09:08:00 EDT 2013


Hi,

I want to use MVAPICH2 for GPU-GPU communication. I have installed MVAPICH2 1.9 (as root) on my two nodes with the following configuration:

./configure --prefix=/opt/mvapich2-1.9-gnu --enable-shared --enable-cuda --with-cuda=/home/liluo/lib/cuda_5.0/ --disable-mcast


and built it with:

make -j4
make install
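
I assume both nodes also need the install and CUDA library directories in the environment so that the processes started by mpirun_rsh can find the shared libraries. A minimal sketch of what I put in ~/.bashrc on both nodes (paths follow the --prefix and --with-cuda values above; using lib64 for CUDA is my assumption):

export PATH=/opt/mvapich2-1.9-gnu/bin:$PATH
# MPI and CUDA shared libraries; lib64 vs lib is an assumption about the CUDA 5.0 layout
export LD_LIBRARY_PATH=/opt/mvapich2-1.9-gnu/lib:/home/liluo/lib/cuda_5.0/lib64:$LD_LIBRARY_PATH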

Now I want to run the example cpi with my personal account liluo.

For np=2 on a single node, it works.

But it does not work across 2 nodes with the following hostfile:

gpu1-ib
gpu2-ib

The error output is the following:


[liluo@gpu1 programs]$ mpirun_rsh -n 2 -hostfile hostfile ./cpi
[cli_0]: aborting job:
Fatal error in MPI_Init:
Other MPI error

[gpu1:mpispawn_0][child_handler] MPI process (rank: 0, pid: 3109) exited with status 1
[gpu1:mpispawn_0][readline] Unexpected End-Of-File on file descriptor 5. MPI process died?
[gpu1:mpispawn_0][mtpmi_processops] Error while reading PMI socket. MPI process died?
[cli_1]: aborting job:
Fatal error in MPI_Init:
Other MPI error

[gpu2:mpispawn_1][readline] Unexpected End-Of-File on file descriptor 5. MPI process died?
[gpu2:mpispawn_1][mtpmi_processops] Error while reading PMI socket. MPI process died?
[gpu2:mpispawn_1][child_handler] MPI process (rank: 1, pid: 3144) exited with status 1


//////////
I use node gpu2-ib as the host node.
I can successfully ping gpu1-ib from gpu2-ib.

The installation folder /opt/mvapich2-1.9-gnu and the current folder (where ./cpi is) on node gpu2-ib have both been exported to node gpu1-ib.
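
To check whether the remote process is failing to load its libraries, this is a minimal check I can run from gpu2-ib inside the folder that holds ./cpi (a sketch; it assumes the folder is exported to gpu1-ib under the same path and that password-less ssh works):

# confirm gpu1-ib sees the same MVAPICH2 install
ssh gpu1-ib /opt/mvapich2-1.9-gnu/bin/mpiname -a
# check that every shared library of cpi resolves on gpu1-ib; a "not found" line would explain MPI_Init dying
ssh gpu1-ib "ldd $(pwd)/cpi"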

What can I do?
--
Li Luo
Shenzhen Institutes of Advanced Technology
Address: 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, P.R.China
Tel: +86-755-86392312, +86-15899753087
Email: li.luo at siat.ac.cn


