[mvapich-discuss] High latency on IB?

Sangamesh B forum.san at gmail.com
Sat Jun 27 10:39:46 EDT 2009


Hi,

I had been using MVAPICH2 0.9.8 with a Voltaire switch and Mellanox cards.

The reason for the higher latency was that MVAPICH2 was not linked with the
OFED libraries. MVAPICH2 1.2rc1 is now installed (a sketch of the OFED-linked
build is shown after the table below), and it gives the following latency:

# OSU MPI Latency Test (Version 2.2)
# Size          Latency (us)
0               3.96
1               4.04
2               4.04
4               4.06
8               4.04
16              4.14
32              4.17
64              4.45
128             5.51
256             6.11
512             6.98
1024            8.50
2048            10.42
4096            14.26
8192            21.76
16384           39.10
32768           59.67
65536           101.40
131072          185.33
262144          351.24
524288          684.08
1048576         1350.54
2097152         2681.89
4194304         5341.90
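
For reference, a build linked against OFED's gen2 interface can be
configured roughly as follows (a minimal sketch; the install prefix and
library path are assumptions for illustration, and the exact options can
differ between releases, so check the MVAPICH2 user guide for your version):

    # build MVAPICH2 against the OFED verbs (gen2) device
    ./configure --prefix=/opt/mvapich2 --with-rdma=gen2 \
                --with-ib-libpath=/usr/lib64
    make && make install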

Another question: some of the CFD applications (distributed as binaries)
work with the VAPI interface. To build MVAPICH2 0.9.8 for VAPI (the newer
MVAPICH2 releases do not support VAPI), which drivers have to be used?

Is OFED sufficient? If it is not, please mention which package has to be
used, along with the URL to download it.

Please let us know all the possibilities for getting MVAPICH2 installed
with VAPI.
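
A quick way to tell which verbs stack a prebuilt application expects is to
inspect its shared-library dependencies (the binary name below is
hypothetical):

    # VAPI-era binaries pull in libvapi/libmosal; OFED builds use libibverbs
    ldd ./cfd_solver | egrep -i 'vapi|ibverbs'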

--
Thank you

On Fri, Jun 26, 2009 at 8:28 PM, Dhabaleswar Panda
<panda at cse.ohio-state.edu> wrote:
> These numbers are too high for MVAPICH2. Which version of MVAPICH2 and
> which interface (gen2, uDAPL, etc.) are you using? What computing
> platform, network adapter, and switch are you using? Check to see whether
> you are configuring the stack properly and whether your systems
> (platforms, adapters, switches, and cables) are stable.
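>
> For example, link state and speed can be verified on every node with the
> standard OFED diagnostics (a quick sanity check, assuming the OFED
> userspace tools are installed):
>
>     # each port should report PORT_ACTIVE with the expected width/speed
>     ibv_devinfo | grep -e state -e active_width -e active_speed
>     ibstat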
>
> DK
>
> On Fri, 26 Jun 2009, Sangamesh B wrote:
>
>> Dear MVAPICH2 team,
>>
>>       The following are the OSU latency tests taken with MVAPICH2 and MPICH2.
>>
>> On IB:
>>
>> [user at cluster IB_MVAPICH2]$ /opt/mvapich2/bin/mpirun -machinefile
>> ibmachines -np 2 ./osu_latency_MVAPICH2
>> # OSU MPI Latency Test (Version 2.2)
>> # Size          Latency (us)
>> 0               20.84
>> 1               21.74
>> 2               21.74
>> 4               21.69
>> 8               21.62
>> 16              21.67
>> 32              21.74
>> 64              21.75
>> 128             21.79
>> 256             22.65
>> 512             23.38
>> 1024            24.79
>> 2048            27.43
>> 4096            31.25
>> 8192            38.43
>> 16384           55.92
>> 32768           88.96
>> 65536           160.26
>> 131072          240.20
>> 262144          434.30
>> 524288          753.20
>> 1048576         1400.61
>> 2097152         2619.34
>> 4194304         5014.10
>> [user at cluster IB_MVAPICH2]$
>>
>> With MPICH2 on Ethernet:
>>
>> [user at cluster ETH_MPICH2]$ /opt/mpich2/bin/mpirun -machinefile
>> mpich2macfile -np 2 ./osu_latency_MPICH2
>> # OSU MPI Latency Test (Version 2.2)
>> # Size          Latency (us)
>> 0               62.47
>> 1               62.53
>> 2               62.49
>> 4               62.48
>> 8               62.45
>> 16              62.47
>> 32              62.89
>> 64              63.60
>> 128             123.22
>> 256             124.83
>> 512             124.91
>> 1024            124.98
>> 2048            124.92
>> 4096            124.97
>> 8192            187.37
>> 16384           201.73
>> 32768           374.72
>> 65536           685.11
>> 131072          1186.62
>> 262144          2435.70
>> 524288          4629.19
>> 1048576         9057.72
>> 2097152         17981.10
>> 4194304         35723.62
>> [user at cluster ETH_MPICH2]$
>>
>> The latency values are very high. Are these right? The OSU benchmarks
>> taken earlier on other clusters started from about 3 us.
>>
>> What could be the reason for this? Is there any way to improve it?
>>
>> I've taken care that mpdboot uses the proper interface (i.e., IB or
>> Ethernet) for MVAPICH2 and MPICH2, respectively (the same applies to
>> the machinefiles as well).
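>>
>> With MPICH2's mpd, the interface can also be pinned per host in the
>> hosts file via the ifhn field (a sketch; the hostnames below are
>> hypothetical and should resolve to each node's IB address):
>>
>>     # mpd.hosts: bind each mpd to the desired interface
>>     node1 ifhn=node1-ib0
>>     node2 ifhn=node2-ib0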
>>
>> Thank you
>> _______________________________________________
>> mvapich-discuss mailing list
>> mvapich-discuss at cse.ohio-state.edu
>> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>>
>
>


