[mvapich-discuss] [SPAM] Problem building mvapich2 1.4, IB
performance is slow/broken
Dhabaleswar Panda
panda at cse.ohio-state.edu
Tue Nov 24 16:29:55 EST 2009
Craig - This is very surprising. Are you carrying out inter-node or
intra-node experiments? What platform and IB cards are you using? I am not
sure whether the CPU mapping is playing a role here. Can you try a
different CPU mapping and see whether you get the expected performance numbers?
Thanks,
DK
On Tue, 24 Nov 2009, Craig Tierney wrote:
> I am trying to build the production release of mvapich2-1.4.
> Here are the system specs:
>
> Centos 5.3
> OFED-1.4.1
> Intel compilers 11.1
>
> When I try to run a code built with it (OMB), the transfer
> rates are very slow. Here is the output of the run:
>
> [ctierney at wfe7 ~/OMB-3.1.1]$ mpirun -np 2 ./osu_bw
> # OSU MPI Bandwidth Test v3.1.1
> # Size Bandwidth (MB/s)
> 1 0.00
> 2 0.01
> 4 0.02
> 8 0.03
> 16 0.07
> 32 0.10
> 64 0.23
> 128 0.39
> 256 0.91
> 512 1.42
> 1024 2.26
> 2048 2.30
>
> If I build and run this with mvapich2-1.2p1, I get the behavior
> I expect.
>
> I have built mvapich2 with:
>
> ./configure CC=icc CXX="icpc" F77="ifort" FC="ifort" F90="ifort" \
> --with-rdma=gen2 \
> --prefix=/opt/hjet/mvapich2/1.4-intel \
> --enable-romio=yes --with-file-system=lustre \
> --enable-g=dbg --enable-sharedlibs=gcc --enable-debuginfo \
> --enable-threads=multiple
>
> Am I missing something?
>
> Thanks,
> Craig
>
>
>
> --
> Craig Tierney (craig.tierney at noaa.gov)
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>