[mvapich-discuss] RDMA FAST PATH

Hoang-Vu Dang dang.hvu at gmail.com
Thu Apr 21 14:39:58 EDT 2016


Hi Hari,

I really do not see any difference in performance with either osu_latency
or osu_bw. Moreover, when I set both MV2_RDMA_USE_FAST_PATH=0 and
MV2_RDMA_FAST_PATH_BUF_SIZE=0, I get an error, which suggests that
FAST PATH is still being used.
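
For concreteness, a minimal launch sketch for these runs (assuming
mpirun_rsh; the node names and the benchmark path are placeholders, and
the MV2_* variables are passed on the command line):

    $ mpirun_rsh -np 2 node1 node2 \
          MV2_USE_SHARED_MEM=0 MV2_RDMA_USE_FAST_PATH=0 ./osu_bw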

Here are some results:

MV2_USE_SHARED_MEM=0
MV2_RDMA_USE_FAST_PATH=0

osu_bw
# OSU MPI Bandwidth Test v5.3
# Size      Bandwidth (MB/s)
1                       3.09
2                       6.05
4                      12.29
8                      24.48
16                     48.67
32                     96.33
64                    184.83
128                   395.76
256                   768.50
512                  1446.73
1024                 2548.23
2048                 4003.17
4096                 5156.22
8192                 5665.54
16384                5974.79
32768                5963.05
65536                6140.06
131072               6252.00
262144               6312.20
524288               6345.87
1048576              6362.50
2097152              6327.31
4194304              6330.48

MV2_USE_SHARED_MEM=0
MV2_RDMA_USE_FAST_PATH=1

# OSU MPI Bandwidth Test v5.3
# Size      Bandwidth (MB/s)
1                       3.00
2                       5.94
4                      11.90
8                      23.67
16                     47.08
32                     93.82
64                    183.04
128                   380.13
256                   745.09
512                  1386.17
1024                 2483.31
2048                 3967.06
4096                 5150.18
8192                 5670.36
16384                5981.30
32768                5958.61
65536                6140.91
131072               6251.72
262144               6312.80
524288               6345.70
1048576              6361.93
2097152              6333.33
4194304              6335.22

MV2_RDMA_USE_FAST_PATH=0
MV2_RDMA_FAST_PATH_BUF_SIZE=0
MV2_USE_SHARED_MEM=0
# OSU MPI Bandwidth Test v5.3
# Size      Bandwidth (MB/s)
Assertion failed in file src/mpid/ch3/channels/mrail/src/gen2/ibv_priv.c at
line 118: (rdma_fp_buffer_size * num_rdma_buffer)>0
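
The assertion message itself points at the cause: the check requires
rdma_fp_buffer_size * num_rdma_buffer to be positive, so a buffer size of
0 is rejected while the fast-path buffers are still being allocated. If
the goal were only to shrink the fast-path footprint rather than disable
it, a sketch that keeps the product positive would look like this
(assuming MV2_NUM_RDMA_BUFFER is the knob behind num_rdma_buffer; the
values are illustrative, not tuned):

    $ mpirun_rsh -np 2 node1 node2 \
          MV2_RDMA_FAST_PATH_BUF_SIZE=4096 MV2_NUM_RDMA_BUFFER=16 ./osu_bw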



On Thu, Apr 21, 2016 at 1:21 PM, Hari Subramoni <subramoni.1 at osu.edu> wrote:

> Hello,
>
> As the userguide suggests, the environment variable to use is
> "MV2_RDMA_USE_FAST_PATH=0".
>
>
> http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.2rc1-userguide.html#x1-24600011.86
>
> You should see slight differences in latency (2% to 7%), and bigger
> differences in bandwidth (osu_bw), message rate (osu_mbw_mr), and
> bidirectional bandwidth (osu_bibw).
>
> Thx,
> Hari.
>
> On Thu, Apr 21, 2016 at 2:14 PM, Hoang-Vu Dang <dang.hvu at gmail.com> wrote:
>
>> I'm testing mvapich2 version 2.1 with and without fast path, but I do not
>> see any difference in performance.
>>
>> The documentation says the environment variable is MV2_RDMA_USE_FAST_PATH.
>>
>> I set it to 0 and the performance remains identical (osu_latency).
>>
>> On the other hand, when I set MV2_RDMA_FAST_PATH_BUF_SIZE=0 I get an
>> error:
>>
>> src/mpid/ch3/channels/mrail/src/gen2/ibv_priv.c at line 118:
>> (rdma_fp_buffer_size * num_rdma_buffer)>0
>>
>> When I set MV2_RDMA_FAST_PATH_BUF_SIZE=16 it works, but the performance is
>> significantly slower.
>>
>> My question is: what is the proper way to disable RDMA FAST PATH and use
>> the send/recv eager protocol?
>>
>> Vu
>>
