Fwd: [mvapich-discuss] problem with intra-node communications

Maya Khaliullina maya.usatu at gmail.com
Thu May 7 12:59:02 EDT 2009


---------- Forwarded message ----------
From: Maya Khaliullina <maya.usatu at gmail.com>
Date: 2009/5/7
Subject: Re: [mvapich-discuss] problem with intra-node communications
To: Matthew Koop <koop at cse.ohio-state.edu>


Thanks to all.
Disabling the SRQ feature really does solve our problem.
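For anyone else hitting this, the variable can be set at launch time roughly
like this. The hostfile and binary name are just placeholders, and I am going
from memory on the mpirun_rsh syntax (it accepts NAME=VALUE pairs placed
before the executable, as far as I know):

    # Pass the variable on the mpirun_rsh command line (placeholder names):
    mpirun_rsh -np 8 -hostfile ./hosts MV2_USE_SRQ=0 ./my_mpi_app

    # Or export it and let the launcher propagate the environment:
    export MV2_USE_SRQ=0
    mpirun_rsh -np 8 -hostfile ./hosts ./my_mpi_app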
Thanks again,
Maya

2009/5/7 Matthew Koop <koop at cse.ohio-state.edu>

Hi Maya,
>
> Sorry for the inconvenience. For now, setting MV2_USE_SRQ=0 when running
> an intra-node job should solve this issue. We already have a fix
> for this issue queued for the next release of MVAPICH2.
>
> Matt
>
> On Thu, 7 May 2009, Maya Khaliullina wrote:
>
> > Hello,
> > When we run any MPI program on 4, 6, or 8 processes using intra-node
> > communication only, the job hangs at the end of the run (we believe it
> > happens in MPI_Finalize). With mvapich2-0.98 or Intel MPI 3.1.26 the same
> > programs work fine.
> > Do you have any idea what causes this?
> > We compiled mvapich2-1.2 with the following parameters:
> > ./configure --prefix=/gpfs/bos/mvapich2-1.2 --enable-romio \
> >   --disable-debuginfo --enable-sharedlibs=gcc \
> >   --enable-base-cache --with-rdma=gen2 --with-thread-package=pthreads \
> >   CC=icc CFLAGS=-O3
> > Parameters of our HPC cluster:
> > Node: 2xQuad Core Intel Xeon 2.33 GHz
> > O/S: RHEL4.5
> > File System: GPFS
> >
> > Thanks,
> > Maya
> >
>
>
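Something like the following should confirm the workaround at the process
counts mentioned above on a single node (the hostfile and binary name are
placeholders; ./hosts would list the same node repeatedly and ./hello can be
any small MPI program):

    # Repeat the single-node run at 4, 6 and 8 processes with SRQ disabled.
    for np in 4 6 8; do
        mpirun_rsh -np $np -hostfile ./hosts MV2_USE_SRQ=0 ./hello
    done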