[mvapich-discuss] Performance Drop using MVAPICH-0.97-mlx2.1.0

Scott Weitzenkamp (sweitzen) sweitzen at cisco.com
Wed May 17 12:43:40 EDT 2006


This is an OFED packaging bug; see
http://openib.org/bugzilla/show_bug.cgi?id=81.  There is a workaround:
recompile MVAPICH using the MVAPICH script make.mvapich.gen2, and
throughput will be as expected.
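
For anyone hitting this, a minimal sketch of the rebuild, assuming the
MVAPICH source that ships with OFED is unpacked under /usr/src/mvapich
(the path is hypothetical; review the variables at the top of the script,
such as the install prefix and the path to your OpenIB installation,
before running it):

    # Rebuild MVAPICH against the gen2 (OpenIB verbs) interface using
    # the build script included at the top level of the MVAPICH source.
    cd /usr/src/mvapich
    ./make.mvapich.gen2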

Scott Weitzenkamp
SQA and Release Manager
Server Virtualization Business Unit
Cisco Systems
 

> -----Original Message-----
> From: mvapich-discuss-bounces at cse.ohio-state.edu
> [mailto:mvapich-discuss-bounces at cse.ohio-state.edu] On Behalf Of Alfred Torrez
> Sent: Wednesday, May 17, 2006 8:11 AM
> To: mvapich-discuss at cse.ohio-state.edu
> Subject: [mvapich-discuss] Performance Drop using MVAPICH-0.97-mlx2.1.0
> 
> Hi,
> 
> I installed OpenFabrics OFED-1.0-rc4 (with mvapich-0.97-mlx2.1.0) on a
> few nodes in our cluster.  Using the osu_bw benchmark, I noticed that
> peak bandwidth dropped by about 200 MB/sec versus the other nodes that
> have mvapich-gen2-1.0-105 installed.  In fact, this is the lowest
> performance I have ever seen using various versions of mvapich on this
> cluster.  IPoIB and verbs-level ping-pong performance did not seem to
> drop, so I am wondering whether this is related to a tuning parameter I
> need to adjust (I played with some of them).  I did have to upgrade the
> HCA firmware from 3.3.2 to 3.4 due to the "couldn't modify SRQ limit"
> error.
> 
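A minimal sketch of the two comparison runs described above, assuming
MVAPICH's mpirun_rsh is on the PATH and that node01/node02 are
hypothetical hostnames:

    # MPI-level bandwidth between two nodes (OSU osu_bw benchmark).
    mpirun_rsh -np 2 node01 node02 ./osu_bw

    # Verbs-level ping-pong for the same pair (ibv_rc_pingpong ships
    # with the libibverbs examples).
    ibv_rc_pingpong            # on node01 (server side)
    ibv_rc_pingpong node01     # on node02 (client side)
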
> Cluster specifics are:
> 
> Xeon 2.2 GHz
> FC3, kernel 2.6.14.4
> Mellanox MT23108-CE128 HCA, firmware 3.4.0
> 
> Thanks,
> 
> Alfred
> 
> 
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> 


