[mvapich-discuss] problem with intra-node communications

Maya Khaliullina maya.usatu at gmail.com
Thu May 7 08:01:48 EDT 2009


Hello,
When we run any MPI program on 4, 6 or 8 processes using intra-node
communications only, the job hangs at the final stage (we believe this happens
during MPI_Finalize). With mvapich2-0.9.8 or Intel MPI 3.1.26 the same
programs work fine.
Do you have any idea about this problem?
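To illustrate, even a trivial program like the sketch below hangs at the very
end for us (the ring exchange is only an example; any MPI program shows the
same behaviour):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, sendbuf, recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One round of a ring exchange among the ranks on the node. */
    sendbuf = rank;
    MPI_Sendrecv(&sendbuf, 1, MPI_INT, (rank + 1) % size, 0,
                 &recvbuf, 1, MPI_INT, (rank + size - 1) % size, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d received %d\n", rank, size, recvbuf);

    /* This is where the job appears to hang. */
    MPI_Finalize();
    return 0;
}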
We compiled mvapich2-1.2 with the following parameters:
./configure --prefix=/gpfs/bos/mvapich2-1.2 --enable-romio \
  --disable-debuginfo --enable-sharedlibs=gcc \
  --enable-base-cache --with-rdma=gen2 --with-thread-package=pthreads \
  CC=icc CFLAGS=-O3
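For completeness, the intra-node runs are started by placing all processes on
a single node, for example with a command along these lines (the hostname,
binary name and choice of launcher are just placeholders):

mpirun_rsh -np 4 node01 node01 node01 node01 ./mpi_test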
Parameters of our HPC cluster:
Node: 2 x Quad-Core Intel Xeon 2.33 GHz
OS: RHEL 4.5
File System: GPFS

Thanks,
Maya