[mvapich-discuss] MCAST feature

Hari Subramoni subramoni.1 at osu.edu
Sun Jul 23 16:40:07 EDT 2017


Hi Vu,

MVAPICH2 uses mcast-based solutions for all message sizes. Due to
system-level limitations of IB multicast, we need to chunk large messages
into smaller ones, which may lead to performance issues at small scale.
Multicast-based solutions tend to perform better at large scale. MVAPICH2
provides an environment variable, MV2_MCAST_NUM_NODES_THRESHOLD, that
controls the number of nodes beyond which MVAPICH2 uses multicast.

The default value for this parameter was tuned for a Sandy Bridge + FDR
combination of processor and HCA. Depending on the type of system you
have, the best value may differ. Could you try setting it to different
values to see which performs best for your system configuration?
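
For example, you could do a quick sweep along these lines (the threshold
values below are only placeholders; pick ones that make sense for your
node count, and note this assumes srun forwards the environment to the
MPI processes, as in your runs):

  # enable multicast and sweep the node-count threshold,
  # re-running the broadcast benchmark each time
  export MV2_USE_MCAST=1
  for t in 2 4 8; do
      export MV2_MCAST_NUM_NODES_THRESHOLD=$t
      echo "MV2_MCAST_NUM_NODES_THRESHOLD=$t"
      srun -n 8 ./osu/osu_bcast
  done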

Thx,
Hari.

On Sun, Jul 23, 2017 at 4:28 PM, Hoang-Vu Dang <dang.hvu at gmail.com> wrote:

> Hi Hari,
>
> Thanks for the quick response,
> When I enable mcast (it’s not on by default), it’s slower in some cases.
> Is there anything else to tune? Is it using mcast when messages are
> larger?
>
> Below are the results without mcast (default) and with mcast:
>
> + srun -n 8 ./osu/osu_bcast
>
> # OSU MPI Broadcast Latency Test v5.3.2
> # Size       Avg Latency(us)
> 1                       2.14
> 2                       2.12
> 4                       2.11
> 8                       2.22
> 16                      2.22
> 32                      2.45
> 64                      2.51
> 128                     2.59
> 256                     2.75
> 512                     2.81
> 1024                    3.06
> 2048                    3.56
> 4096                    4.82
> 8192                    7.19
> 16384                   9.17
> 32768                  13.27
> 65536                  21.18
> 131072                 36.82
> 262144                 67.67
> 524288                128.95
> 1048576               200.37
>
> + export MV2_USE_MCAST=1
> + MV2_USE_MCAST=1
> + srun -n 8 ./osu/osu_bcast
>
> # OSU MPI Broadcast Latency Test v5.3.2
> # Size       Avg Latency(us)
> 1                       1.59
> 2                       1.53
> 4                       1.57
> 8                       1.65
> 16                      1.65
> 32                      1.67
> 64                      1.66
> 128                     1.73
> 256                     2.07
> 512                     2.15
> 1024                    2.42
> 2048                    3.08
> 4096                    3.95
> 8192                    5.54
> 16384                   8.13
> 32768                  13.74
> 65536                  25.09
> 131072                 54.59
> 262144                103.71
> 524288                206.69
> 1048576               414.35
>
> On Jul 23, 2017, at 3:16 PM, Hari Subramoni <subramoni.1 at osu.edu> wrote:
>
> Hi Vu,
>
> Yes - this support is still available in MVAPICH2. It is enabled by
> default at configure time if the system has the necessary driver-level
> support (i.e., the IB umad libraries). You can enable it at runtime using
> environment variables.
>
> Please refer to the following section of the user guide for more
> information:
>
> http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3a-userguide.html#x1-660006.8
>
> Please let us know if you run into any issues with it.
>
> Thx,
> Hari.
>
> On Sun, Jul 23, 2017 at 4:12 PM, Hoang-Vu Dang <dang.hvu at gmail.com> wrote:
>
>> Hi,
>>
>> I would like to test the hardware multicast feature of InfiniBand. Is it
>> still available inside MVAPICH2? If yes, is there any relevant environment
>> variable or anything else to look for?
>>
>> For example: how do I enable/disable it, and is there a threshold for
>> performance tuning? If I choose RC, is the mode still available, etc.?
>>
>> Thanks,
>> Vu
>>