[mvapich-discuss] mvapich 1 vs. mvapich 2 performance
wei huang
huanwei at cse.ohio-state.edu
Wed Jul 16 12:26:07 EDT 2008
Hi Noam,
mvapich and mvapich2 should have very close performance; we have never
seen a difference in the peak bandwidth reported by the OSU benchmarks.
May I ask which HCA you are using on your systems? And are there
multiple HCAs on each node?
CPU affinity can also play a role here. Can you set the CPU mapping
manually? You can do that by setting environment variables:
mvapich1:
mpirun_rsh -np 2 h1 h2 VIADEV_CPU_MAPPING=0 ./a.out
(for details, see
http://mvapich.cse.ohio-state.edu/support/mvapich_user_guide.html#x1-1440009.6.6)
Note that mvapich2-1.0.3 does not support manual mapping. For mvapich1,
you can change VIADEV_CPU_MAPPING above between 0 and 1 and see whether
CPU mapping is playing a role here. However, we just released
mvapich2-1.2rc1, which does support CPU mapping; we suggest you try this
version as well. With mvapich2-1.2 (which supports mpirun_rsh startup
just like mvapich1), you can set the mapping with:
mpirun_rsh -np 2 h1 h2 MV2_CPU_MAPPING=0 ./a.out
(http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.2rc1.html#x1-320006.8)
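A quick way to confirm that a mapping setting actually took effect is to have
the process report its own CPU affinity. A minimal sketch (my illustration,
not part of MVAPICH; Linux-only, using os.sched_getaffinity — in a real MPI
job you would call this from each rank right after MPI_Init):

```python
# Sketch: print the set of CPUs the current process is allowed to run on.
# Run this from each rank to verify that VIADEV_CPU_MAPPING /
# MV2_CPU_MAPPING pinned the rank to the CPU you expected.
# os.sched_getaffinity is Linux-specific.
import os

def allowed_cpus():
    """Return the set of CPU ids this process may be scheduled on."""
    return os.sched_getaffinity(0)  # 0 = the calling process

if __name__ == "__main__":
    print("process may run on CPUs:", sorted(allowed_cpus()))
```

With a mapping of 0 in effect, each rank should report only CPU 0 on its node;
an unrestricted rank reports every CPU in the machine.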
Hope this helps.
-- Wei
> ---------- Forwarded message ----------
> Date: Wed, 16 Jul 2008 10:14:27 -0400
> From: Noam Bernstein <noam.bernstein at nrl.navy.mil>
> To: mvapich-discuss at cse.ohio-state.edu
> Subject: [mvapich-discuss] mvapich 1 vs. mvapich 2 performance
>
> Should I be surprised by this gap in bandwidth between mvapich 1 and
> mvapich 2 (OSU benchmarks 3.0, osu_bibw)? The mpi1 version is quite
> close to the expected maximum for IB (8 Gb/s each way), but mpi2 is
> 25% lower.
>
> Our cluster uses dual-processor, single-core Opterons and Mellanox
> InfiniBand HCAs with OFED 1.2.5.1; only one processor on each node is
> in use.
>
> Below, mpi1 is
> mvapich 1.0.1 compiled with make.mvapich.gen2
> mpi2 is
> mvapich2 1.0.3 compiled with make.mvapich2.ofa
>
> No other flags at compile or run time; everything was compiled with gcc.
>
> thanks,
> Noam
>
>
> bibw.mpi1.stdout:
> # OSU MPI Bi-Directional Bandwidth Test v3.0
> # Size      Bi-Bandwidth (MB/s)
> 1           1.51
> 2           3.12
> 4           6.15
> 8           11.83
> 16          23.46
> 32          41.45
> 64          81.72
> 128         156.60
> 256         264.41
> 512         423.20
> 1024        604.07
> 2048        772.51
> 4096        883.79
> 8192        1029.38
> 16384       1469.52
> 32768       1666.29
> 65536       1784.16
> 131072      1685.49
> 262144      1883.22
> 524288      1901.34
> 1048576     1910.08
> 2097152     1917.89
> 4194304     1919.68
>
> bibw.mpi2.stdout:
> # OSU MPI Bi-Directional Bandwidth Test v3.0
> # Size      Bi-Bandwidth (MB/s)
> 1           1.10
> 2           2.21
> 4           4.04
> 8           8.33
> 16          16.07
> 32          30.32
> 64          62.31
> 128         121.45
> 256         216.58
> 512         373.28
> 1024        568.49
> 2048        739.37
> 4096        878.16
> 8192        889.26
> 16384       1079.31
> 32768       1164.42
> 65536       1226.60
> 131072      1227.85
> 262144      1265.47
> 524288      1262.38
> 1048576     1747.40
> 2097152     1582.24
> 4194304     1543.45
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>