[mvapich-discuss] problem with intra-node communications

Matthew Koop koop at cse.ohio-state.edu
Thu May 7 12:43:52 EDT 2009


Hi Maya,

Sorry for the inconvenience. For now, setting MV2_USE_SRQ=0 when running an
intra-node job should work around the hang. A fix for this issue is already
queued for the next release of MVAPICH2.
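
For example, with the mpirun_rsh launcher that ships with MVAPICH2, the
variable can be passed on the command line (the host file and program name
below are placeholders, not from the original report):

  mpirun_rsh -np 4 -hostfile ./hosts MV2_USE_SRQ=0 ./your_mpi_app

Exporting MV2_USE_SRQ=0 in the shell before launching should also work,
provided the launcher forwards the environment to the MPI processes.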

Matt

On Thu, 7 May 2009, Maya Khaliullina wrote:

> Hello,
> When we run any MPI program on 4, 6, or 8 processes using intra-node
> communication only, the job hangs at the end stage (we believe it occurs
> during MPI_Finalize). With mvapich2-0.9.8 or Intel MPI 3.1.26 the same
> programs work fine.
> Do you have any idea about this problem?
> We compiled mvapich2-1.2 with the following parameters:
> ./configure --prefix=/gpfs/bos/mvapich2-1.2 --enable-romio \
>     --disable-debuginfo --enable-sharedlibs=gcc \
>     --enable-base-cache --with-rdma=gen2 --with-thread-package=pthreads \
>     CC=icc CFLAGS=-O3
> Our HPC cluster configuration:
> Node: 2 x quad-core Intel Xeon 2.33 GHz
> O/S: RHEL4.5
> File System: GPFS
>
> Thanks,
> Maya
>
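
For reference, here is a minimal test along the lines Maya describes (a
sketch for illustration, not a program from this thread). Launched with 4,
6, or 8 ranks on a single node it exercises the same intra-node shutdown
path, and with MV2_USE_SRQ=0 it should run to completion:

  /* minimal intra-node finalize test (illustrative sketch) */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int rank, size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      printf("rank %d of %d about to finalize\n", rank, size);

      MPI_Finalize();   /* the reported hang shows up here */
      return 0;
  }

Compile with mpicc and run on one node, e.g.
  mpicc repro.c -o repro
  mpirun_rsh -np 8 -hostfile ./hosts MV2_USE_SRQ=0 ./repro
(file names and the host file are placeholders).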


