[mvapich-discuss] Performance Drop using MVAPICH-0.97-mlx2.1.0
Pavel Shamis (Pasha)
pasha at mellanox.co.il
Wed May 17 13:14:49 EDT 2006
Sorry, that was a typo in my previous email; you should use
VIADEV_DEFAULT_MTU=MTU1024
It is a run-time parameter, so just run:
mpirun_rsh -np X .... VIADEV_DEFAULT_MTU=MTU1024 ./your_test
Or you can add this line
VIADEV_DEFAULT_MTU=MTU1024
to the default configuration file $MPI_HOME/etc/mvapich.conf
(MPI_HOME should point to the mvapich installation).
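To make the two options concrete, here is a minimal sketch; the install prefix below is a throwaway temp directory standing in for a real MPI_HOME, and the mpirun_rsh line is shown only as a comment because it needs real hosts and a compiled binary:

```shell
# Sketch only: MPI_HOME here is a temp dir standing in for the real
# mvapich install prefix.
MPI_HOME=$(mktemp -d)
mkdir -p "$MPI_HOME/etc"

# Per-run: pass the parameter on the mpirun_rsh command line, e.g.
#   mpirun_rsh -np 2 host1 host2 VIADEV_DEFAULT_MTU=MTU1024 ./osu_bw

# Persistent: append the setting to the default configuration file
echo "VIADEV_DEFAULT_MTU=MTU1024" >> "$MPI_HOME/etc/mvapich.conf"
grep "VIADEV_DEFAULT_MTU" "$MPI_HOME/etc/mvapich.conf"
```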
Regards,
Pasha
Pavel Shamis (Pasha) wrote:
> Hi,
> By default, rc4 uses an MTU size of 2K, which is not optimal for this HCA.
> In rc5 it will be changed to 1K.
> So in rc4 you can use the configuration parameter VIADEV_DEFAULT_MTU=1024,
> which will fix the issue.
>
> Regards,
> Pasha
>
> Alfred Torrez wrote:
>> Hi,
>>
>> I installed OpenFabrics OFED-1.0-rc4 (with mvapich-0.97-mlx2.1.0) on a
>> few nodes in our cluster. Using the osu_bw benchmark, I noticed that
>> peak bandwidth dropped by about 200 MB/sec versus the other nodes that
>> have mvapich-gen2-1.0-105 installed. In fact, this is the lowest
>> performance I have ever seen with any version of mvapich on this
>> cluster. IPoIB and verbs-level ping-pong performance did not seem to
>> drop, so I am wondering if this is related to a tuning parameter that I
>> need to adjust (I played with some of them). I did have to upgrade the
>> HCA firmware from 3.3.2 to 3.4 because of the "couldn't modify SRQ limit"
>> error.
>>
>> Cluster specifics are:
>>
>> Xeon 2.2GHz
>> FC3 2.6.14.4
>> Mellanox MT23108-CE128 HCA fw. ver. 3.4.0
>>
>> Thanks,
>>
>> Alfred
>>
>>
>> _______________________________________________
>> mvapich-discuss mailing list
>> mvapich-discuss at cse.ohio-state.edu
>> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>>
>