[mvapich-discuss] Multi bandwidth message rate benchmark results (osu_mbw_mr)
Johnny Devaprasad
johnnydevaprasad at gmail.com
Thu Apr 7 04:50:26 EDT 2011
Hi all,
My run of osu_mbw_mr produced the following results.
$ mpirun_rsh -np 5280 -machinefile /tmp/382.1.big/machines
/home/cmsupport/mvapich2/osu_benchmarks/osu_mbw_mr
# OSU MPI Multiple Bandwidth / Message Rate Test v3.3
# [ pairs: 2640 ] [ window size: 64 ]
# Size          MB/s           Messages/s
1               41.34          41339222.14
2               78.96          39479870.86
4               156.71         39176501.04
8               323.37         40421330.96
16              613.74         38358861.20
32              1269.45        39670442.09
64              2388.31        37317373.14
128             4254.61        33239163.57
256             7978.03        31164175.18
512             15421.78       30120662.24
1024            29769.91       29072181.60
2048            47861.16       23369709.15
4096            65596.75       16014831.90
8192            70488.55       8604559.21
16384           70629.90       4310907.20
32768           71233.64       2173878.15
65536           71501.51       1091026.41
131072          71659.65       546719.78
262144          71741.03       273670.31
524288          71765.61       136882.04
1048576         -39592.43      -37758.29
2097152         16099.89       7677.02
4194304         -11749.33      -2801.26
Is there an explanation for why the data rate has negative values at some of
the larger message sizes?
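For scale: at the 1 MiB point, one window of the test moves 2640 pairs x 64
messages x 1,048,576 bytes, roughly 177 GB in aggregate, which is far beyond
what a signed 32-bit integer can hold (INT_MAX is 2,147,483,647). I have not
dug through osu_mbw_mr.c, so this is only a guess, but an overflow in some
intermediate byte count would produce exactly this kind of nonsense value. A
minimal sketch of that failure mode (the variable names below are mine, not
the benchmark's):

#include <stdio.h>

int main(void)
{
    /* Numbers taken from the run above; the variable names are
     * illustrative only and do not come from osu_mbw_mr.c.      */
    int num_pairs   = 2640;      /* [ pairs: 2640 ]       */
    int window_size = 64;        /* [ window size: 64 ]   */
    int msg_size    = 1048576;   /* 1 MiB message         */

    /* All operands are int, so the product is evaluated in 32 bits:
     * 2640 * 64 * 1048576 = 177,167,400,960, which overflows.
     * Signed overflow is undefined behaviour in C; in practice the
     * result silently wraps to something meaningless.             */
    int bytes_32 = num_pairs * window_size * msg_size;

    /* Promoting to 64 bits before multiplying gives the real total. */
    long long bytes_64 = (long long)num_pairs * window_size * msg_size;

    printf("32-bit product: %d (garbage after overflow)\n", bytes_32);
    printf("64-bit product: %lld bytes\n", bytes_64);
    return 0;
}

If something like this is what happens inside the benchmark, the fix would be
to do the byte accounting in a 64-bit (or floating-point) type before
dividing by the elapsed time.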
MVAPICH2 version: 1.6
Number of nodes: 110
Number of cores: 5280 (48 cores per node)
IB Information:
--------------------
InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB
QDR / 10GigE] (rev b0)
$ ibstat
CA 'mlx4_0'
CA type: MT26428
Number of ports: 1
Firmware version: 2.7.626
Hardware version: b0
Regards,
Johnny