[mvapich-discuss] problem with intra-node communications

Jonathan Perkins perkinjo at cse.ohio-state.edu
Thu May 7 08:21:41 EDT 2009


On Thu, May 07, 2009 at 06:01:48PM +0600, Maya Khaliullina wrote:
> Hello,
> When we run any MPI program on 4, 6, or 8 processes using intra-node
> communications only, the job hangs at the end stage (we believe it occurs
> during MPI_Finalize). But if we use mvapich2-0.98 or Intel MPI 3.1.26 it
> works fine.
> Do you have any idea what might cause this problem?

Have you tried mvapich2-1.2p1?  This contains a patch that may resolve
your issue.  Please let us know if it does.
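
In case it helps isolate the problem, here is a minimal intra-node
reproducer along the lines described above (the ring exchange below is
only an illustrative sketch, not your original test program).  Launching
all ranks on a single node and checking whether it still stops before
returning from MPI_Finalize would tell us whether 1.2p1 fixes it:

/* Minimal sketch: run 4-8 ranks on one node so all traffic is intra-node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Simple ring exchange so every process performs some
     * intra-node point-to-point communication. */
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    MPI_Sendrecv(&rank, 1, MPI_INT, next, 0,
                 &token, 1, MPI_INT, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d received token %d\n", rank, size, token);

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();   /* the reported hang occurs around this call */
    return 0;
}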

> We compiled mvapich2-1.2 with the following parameters:
> ./configure --prefix=/gpfs/bos/mvapich2-1.2 --enable-romio \
>   --disable-debuginfo --enable-sharedlibs=gcc \
>   --enable-base-cache --with-rdma=gen2 --with-thread-package=pthreads \
>   CC=icc CFLAGS=-O3
> Parameters of our HPC cluster:
> Node: 2 x quad-core Intel Xeon 2.33 GHz
> O/S: RHEL 4.5
> File system: GPFS
> 
> Thanks,
> Maya



-- 
Jonathan Perkins
http://www.cse.ohio-state.edu/~perkinjo

