[mvapich-discuss] High latency on IB?

Dhabaleswar Panda panda at cse.ohio-state.edu
Fri Jun 26 10:58:52 EDT 2009


These numbers are too high for MVAPICH2. Which version of MVAPICH2 and
which interface (gen2, uDAPL, etc.) are you using? What computing
platform, network adapter and switch are you using? Check whether the
stack is configured properly and whether your systems (platforms,
adapters, switches and cables) are stable.
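
For example, assuming the /opt/mvapich2 install prefix from your mail, a
standard OFED setup and a placeholder peer host "node2", the build and the
raw fabric state can be checked with something like:

  /opt/mvapich2/bin/mpiname -a   # MVAPICH2 version, device (gen2/uDAPL) and configure options
  ibstat                         # port state should be Active/LinkUp at the expected rate
  ibv_devinfo                    # adapter, firmware and active MTU details
  ib_write_lat                   # on node2 (server side), from the perftest package
  ib_write_lat node2             # on the other node (client side)

If the raw verbs latency is already in the 20 us range, the problem is below
MPI (fabric, drivers or cabling); if it is only a few microseconds, look at
the MVAPICH2 build and runtime configuration.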

DK

On Fri, 26 Jun 2009, Sangamesh B wrote:

> Dear MVAPICH2 team,
>
>       The following are the OSU latency results taken with MVAPICH2 (over IB)
> and MPICH2 (over Ethernet).
>
> On IB:
>
> [user at cluster IB_MVAPICH2]$ /opt/mvapich2/bin/mpirun -machinefile
> ibmachines -np 2 ./osu_latency_MVAPICH2
> # OSU MPI Latency Test (Version 2.2)
> # Size          Latency (us)
> 0               20.84
> 1               21.74
> 2               21.74
> 4               21.69
> 8               21.62
> 16              21.67
> 32              21.74
> 64              21.75
> 128             21.79
> 256             22.65
> 512             23.38
> 1024            24.79
> 2048            27.43
> 4096            31.25
> 8192            38.43
> 16384           55.92
> 32768           88.96
> 65536           160.26
> 131072          240.20
> 262144          434.30
> 524288          753.20
> 1048576         1400.61
> 2097152         2619.34
> 4194304         5014.10
> [locuz at cluster IB_MVAPICH2]$
>
> With MPICH2 on Ethernet:
>
> [user at cluster ETH_MPICH2]$ /opt/mpich2/bin/mpirun -machinefile
> mpich2macfile -np 2 ./osu_latency_MPICH2
> # OSU MPI Latency Test (Version 2.2)
> # Size          Latency (us)
> 0               62.47
> 1               62.53
> 2               62.49
> 4               62.48
> 8               62.45
> 16              62.47
> 32              62.89
> 64              63.60
> 128             123.22
> 256             124.83
> 512             124.91
> 1024            124.98
> 2048            124.92
> 4096            124.97
> 8192            187.37
> 16384           201.73
> 32768           374.72
> 65536           685.11
> 131072          1186.62
> 262144          2435.70
> 524288          4629.19
> 1048576         9057.72
> 2097152         17981.10
> 4194304         35723.62
> [user at cluster ETH_MPICH2]$
>
> The latency values are very high. Are these right? The OSU latency numbers I
> took earlier on other clusters started at around 3 us.
>
> What could be the reason for this? Is there any way to improve it?
>
> I've made sure mpdboot uses the proper interface (i.e. IB or Ethernet) for
> mvapich2 and mpich2 respectively, and the machinefiles list the matching
> hostnames (a rough sketch of what I mean is below).
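>
> For illustration, with placeholder IPoIB hostnames node1-ib/node2-ib, the
> MPD setup I mean is roughly:
>
>   mpd.hosts:     node1 ifhn=node1-ib
>                  node2 ifhn=node2-ib
>   mpdboot:       mpdboot -n 2 -f mpd.hosts --ifhn=node1-ib
>   machinefile:   node1-ib
>                  node2-ib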
>
> Thank you
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>


