[mvapich-discuss] mvapich 1 vs. mvapich 2 performance

Christian Guggenberger christian.guggenberger at rzg.mpg.de
Tue Jul 22 14:55:13 EDT 2008


On Wed, Jul 16, 2008 at 10:14:27AM -0400, Noam Bernstein wrote:
> Should I be surprised by this gap in bandwidth between mvapich 1 and
> mvapich 2 (OSU benchmarks 3.0, osu_bibw)? The mpi1 version is quite
> close to the expected maximum for IB (8 Gb/s each way), but mpi2 is
> 25% lower.
>
> Our cluster uses dual-processor, single-core Opterons and Mellanox
> InfiniBand HCAs with OFED 1.2.5.1, with only one processor per node
> in use.
>

Just curious: in a different thread (about fork() et al.) you also
mentioned performance problems that were solvable by setting CPU
affinity/mappings. Is that the case here as well?
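One quick way to check is to have each rank print the CPU set it is
actually allowed to run on, under both stacks. A minimal sketch
(compile with each stack's mpicc; the file name and output format are
just for illustration):

    /* checkaff.c: each rank reports its CPU affinity mask, so one can
     * compare what mvapich and mvapich2 do with process placement. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, cpu, pos = 0;
        cpu_set_t mask;
        char buf[256];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf[0] = '\0';
        CPU_ZERO(&mask);
        if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
            for (cpu = 0; cpu < CPU_SETSIZE && pos < (int)sizeof(buf) - 8; cpu++)
                if (CPU_ISSET(cpu, &mask))
                    pos += snprintf(buf + pos, sizeof(buf) - pos, "%d ", cpu);
            printf("rank %d: allowed CPUs: %s\n", rank, buf);
        } else {
            perror("sched_getaffinity");
        }

        MPI_Finalize();
        return 0;
    }

If the masks differ between the two stacks, toggling mvapich2's
affinity support (MV2_ENABLE_AFFINITY, if I remember the parameter
name correctly) should show whether that alone explains the gap.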

(background: I am also looking into a real-application performance
drop with mvapich2 vs. mvapich. This particular code even shows the
degradation when run with only one MPI task, and in that case even
with mpich2. Furthermore, I cannot reproduce these results on
Core 2-based Xeons; so far I have only been able to reproduce it on
Opterons with PCI-X HCAs (Tavor-based). I would therefore suspect
NUMA vs. SMP effects here...)
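One way to test the NUMA theory independently of MPI is to time a
copy out of memory placed on the local node vs. the remote node. A
minimal sketch, assuming libnuma is available (link with -lnuma; the
64 MB buffer size is arbitrary):

    /* numatest.c: run on a CPU of node 0, then time copies from
     * buffers placed on node 0 (local) and node 1 (remote). A large
     * gap would support the NUMA-placement theory. */
    #define _GNU_SOURCE
    #include <numa.h>
    #include <string.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    #define SIZE (64 * 1024 * 1024)

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec * 1e-6;
    }

    int main(void)
    {
        int node;
        char *dst;

        if (numa_available() < 0 || numa_max_node() < 1) {
            fprintf(stderr, "need a NUMA machine with >= 2 nodes\n");
            return 1;
        }
        numa_run_on_node(0);        /* pin this thread to node 0 */

        dst = malloc(SIZE);
        memset(dst, 0, SIZE);       /* fault destination pages in locally */

        for (node = 0; node <= 1; node++) {
            char *src = numa_alloc_onnode(SIZE, node);
            if (!src) { perror("numa_alloc_onnode"); return 1; }
            memset(src, 1, SIZE);   /* fault source pages in on that node */
            double t0 = now();
            memcpy(dst, src, SIZE);
            printf("copy from node %d: %.1f MB/s\n",
                   node, SIZE / (1 << 20) / (now() - t0));
            numa_free(src, SIZE);
        }
        free(dst);
        return 0;
    }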

cheers.
 - Christian


