[mvapich-discuss] Question with bandwidth tests of OSU Microbenchmarks

Subramoni, Hari subramoni.1 at osu.edu
Fri Dec 15 18:36:51 EST 2017


Hello,

The current benchmark uses a single buffer to determine the maximum performance. It can easily be modified to use different buffers; if you do this, you should no longer see the cache effects that lead to such high numbers.
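
For illustration, here is a minimal sketch of that kind of modification. This is not the actual osu_mbw_mr.c source; the names NUM_BUFS, WINDOW_SIZE, and run_window are illustrative, and the pool size is an assumption. The idea is to rotate through a pool of distinct buffers inside the window loop so successive messages touch different memory:

/* Hedged sketch, not the actual osu_mbw_mr.c code: rotate through a
 * pool of buffers so messages cannot be served from cache.
 * NUM_BUFS, WINDOW_SIZE, and run_window are illustrative names. */
#include <mpi.h>
#include <stdlib.h>

#define NUM_BUFS    64   /* pool sized to defeat cache reuse (assumption) */
#define WINDOW_SIZE 64   /* messages in flight per iteration */

/* Post one window of sends or receives, using a different buffer for
 * each message instead of reusing a single one. */
static void run_window(int size, int peer, int is_sender)
{
    char       *bufs[NUM_BUFS];
    MPI_Request reqs[WINDOW_SIZE];

    for (int i = 0; i < NUM_BUFS; i++)
        bufs[i] = malloc(size);          /* distinct allocation per message */

    for (int w = 0; w < WINDOW_SIZE; w++) {
        char *buf = bufs[w % NUM_BUFS];  /* cycle through the pool */
        if (is_sender)
            MPI_Isend(buf, size, MPI_CHAR, peer, 100, MPI_COMM_WORLD, &reqs[w]);
        else
            MPI_Irecv(buf, size, MPI_CHAR, peer, 100, MPI_COMM_WORLD, &reqs[w]);
    }
    MPI_Waitall(WINDOW_SIZE, reqs, MPI_STATUSES_IGNORE);

    for (int i = 0; i < NUM_BUFS; i++)
        free(bufs[i]);
}

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    run_window(65536, 1 - rank, rank == 0);  /* two ranks: 0 sends, 1 receives */
    MPI_Finalize();
    return 0;
}

With a pool larger than the last-level cache, medium-sized transfers have to go through memory and across QPI, so the measured numbers should fall back under those hardware limits.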

Regards,
Hari.

From: mvapich-discuss-bounces at cse.ohio-state.edu On Behalf Of Junchao Zhang
Sent: Wednesday, December 13, 2017 10:53 AM
To: mvapich-discuss at cse.ohio-state.edu
Subject: [mvapich-discuss] Question with bandwidth tests of OSU Microbenchmarks

Hello,
   I used the latest osu_mbw_mr to measure the bandwidth between the two sockets of a NUMA node. I found that the bandwidth for medium-sized messages was very large (up to ~136 GB/s below), even higher than the memory bandwidth of the node, and I don't understand why. Shouldn't it be bounded by the node's QPI bandwidth?

   Also, I see in osu_mbw_mr.c that a receiver issues multiple MPI_Irecv's, but they all use the same r_buf, which means the received data overlaps. Is this a bug, or is it intentional? The pattern I am asking about looks roughly like the sketch below (a paraphrase, not the literal osu_mbw_mr.c code; WINDOW_SIZE, the tag, and post_window_recvs are placeholders):
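
/* Hedged paraphrase of the receive side in question, not the literal
 * osu_mbw_mr.c source. Every Irecv in the window targets the same
 * r_buf, so completing messages overwrite one another; the benchmark
 * only times the transfers and never inspects the payload. */
#include <mpi.h>

#define WINDOW_SIZE 64

static void post_window_recvs(char *r_buf, int size, int peer)
{
    MPI_Request reqs[WINDOW_SIZE];

    for (int w = 0; w < WINDOW_SIZE; w++)
        MPI_Irecv(r_buf, size, MPI_CHAR, peer, 100, MPI_COMM_WORLD, &reqs[w]);

    MPI_Waitall(WINDOW_SIZE, reqs, MPI_STATUSES_IGNORE);
}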

  Here are my test results on an Intel Xeon Haswell node with 2 sockets and 32 cores. The QPI bandwidth is 38.4 GB/s and the STREAM copy bandwidth is 110 GB/s. I used mvapich2-2.1-intel-13.1. When running the code, I put the first 16 ranks on socket 0 and the next 16 on socket 1. Could you shed some light on this? Thank you.


# OSU MPI Multiple Bandwidth / Message Rate Test v5.4.0
# [ pairs: 16 ] [ window size: 64 ]
# Size                  MB/s        Messages/s
1                      67.89       67888521.24
2                     140.70       70348753.88
4                     273.76       68440922.18
8                     558.12       69764549.69
16                    984.26       61516508.73
32                   1959.77       61242938.77
64                   2901.28       45332460.41
128                  5830.29       45549103.35
256                 12216.54       47720864.66
512                 25322.88       49458754.12
1024                40257.18       39313651.61
2048                68684.93       33537564.96
4096                79012.29       19290110.86
8192                88479.92       10800771.52
16384              112174.38        6846581.09
32768              125500.76        3829979.11
65536              134492.98        2052200.00
131072             136100.17        1038361.91
262144             132753.53         506414.51
524288             125918.77         240171.00
1048576             74015.27          70586.46
2097152             30469.80          14529.13
4194304             27742.19           6614.25


--Junchao Zhang