[mvapich-discuss] Re: loopback with mvapich2-1.2p1

gossips J polk678 at gmail.com
Fri Aug 7 05:46:01 EDT 2009


Does higher latency indicate that loopback connections are being used?
I am referring to the ethtool statistics for my adapter interface, which
show a counter being incremented for every loopback connection made
during the test (either IMB-MPI1 or IMB-EXT).

I am able to see these loopback counters with Intel MPI when rdma is
selected as the device, but not when rdssm is the device.
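
For example, this is the kind of check I mean (a sketch; eth0 is a
placeholder interface name, and the exact counter labels vary by adapter
and driver):

  # dump all NIC statistics and filter for loopback-related counters
  ethtool -S eth0 | grep -i loopback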

I would expect to see the same thing with MVAPICH2.

Am I referring to something different from loopback?
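
Just to confirm the setup, below is how I understand the comparison
described in the reply should be run (a sketch, assuming the shared
memory variable in question is MV2_USE_SHARED_MEM; node01 is a
placeholder hostname):

  # two ranks on the same node, with shared memory on and then off
  mpirun_rsh -np 2 node01 node01 MV2_USE_SHARED_MEM=1 ./osu_latency
  mpirun_rsh -np 2 node01 node01 MV2_USE_SHARED_MEM=0 ./osu_latency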

Polk.

On Thu, Aug 6, 2009 at 10:44 PM, Dhabaleswar Panda
<panda at cse.ohio-state.edu> wrote:

> > This env variable does not result in loopback connections. I ran
> > with an adapter that supports loopback connections as well. It makes
> > non-loopback connections only.
>
> How are you detecting that it is not working? Have you run the OSU
> latency test on your adapter with the shared memory variable turned
> `on' and `off'? You should see higher latency numbers when the
> loopback connection is used (i.e., with shared memory turned `off').
>
> Here are some sample numbers with and without shared memory.
>
> When shared memory is on:
>
> # OSU MPI Latency Test v3.1.1
> # Size            Latency (us)
> 0                         0.79
> 1                         1.00
> 2                         0.99
> 4                         0.99
> 8                         0.99
> 16                        0.99
> 32                        1.01
> 64                        1.05
> 128                       1.14
> 256                       1.25
> 512                       1.39
> 1024                      1.63
> 2048                      1.83
> 4096                      2.48
> 8192                      3.86
> 16384                     7.08
> 32768                    13.79
> 65536                    27.05
> 131072                   44.38
> 262144                   77.17
> 524288                  137.95
> 1048576                 262.82
> 2097152                 499.93
> 4194304                1187.30
>
> When shared memory is off:
>
> # OSU MPI Latency Test v3.1.1
> # Size            Latency (us)
> 0                         1.57
> 1                         1.52
> 2                         1.52
> 4                         1.52
> 8                         1.52
> 16                        1.52
> 32                        1.58
> 64                        1.76
> 128                       2.92
> 256                       3.23
> 512                       3.63
> 1024                      4.39
> 2048                      6.02
> 4096                      7.20
> 8192                      9.76
> 16384                    15.77
> 32768                    21.92
> 65536                    33.20
> 131072                   54.12
> 262144                   99.65
> 524288                  187.62
> 1048576                 367.76
> 2097152                 728.32
> 4194304                1557.02
>
> DK
>
>