[mvapich-discuss] Benchmark results
admin at genome.arizona.edu
Wed Jan 24 19:07:11 EST 2018
I tried running some of the benchmarks and was getting speeds much
faster than our network (40 Gb/s InfiniBand) should allow; for example,
osu_put_bw reported speeds over 10,000 MB/s, so I wanted to verify that
things are working correctly. The hostfile is configured like this:
n001:1
n002:1
On n001 I used the command "watch -d -n 1 perfquery -x" to monitor
InfiniBand traffic, but the counters barely moved; I was expecting to
see gigabytes of data. The mpirun command was run from our headnode
(pac). The output below is from some of the tests.
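For what it's worth, one way to confirm where the two ranks actually land (a sketch; assumes MVAPICH2's Hydra mpirun and that the hostfile above is saved as `hosts`) is to launch a trivial non-MPI command under the same launcher:

```shell
# Print the host each rank runs on. If both lines say "pac",
# the benchmarks measured shared-memory bandwidth on the headnode,
# not the InfiniBand link between n001 and n002.
mpirun -n 2 -f hosts hostname
```

If neither `-f` nor `-hostfile` is given, the launcher may simply start both ranks locally, which would match what perfquery is showing.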
I tried running some of the tests again with MPICH, which wasn't
configured to use InfiniBand, and the reported speeds were similar. I
would think the maximum bandwidth for 1 Gb/s Ethernet would be around
125 MB/s.
Why do the reported speeds surpass the network's capability, and why
doesn't perfquery show more traffic?
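For reference, the back-of-the-envelope line-rate limits (assuming 8 bits per byte, 1 MB = 1e6 bytes as in the OSU output, and ignoring protocol overhead):

```shell
# Theoretical payload ceilings for the two links in question.
# Anything sustained well above ~5000 MB/s between two nodes would
# exceed even the 40 Gb/s InfiniBand link, which suggests the ranks
# are talking over intra-node shared memory instead.
awk 'BEGIN { printf "1 GbE:    %d MB/s\n", 1e9/8/1e6;
             printf "40Gb IB:  %d MB/s\n", 40e9/8/1e6 }'
```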
Thanks
$ mpirun -n 2 ./osu_mbw_mr
[1516836664.543562] [pac:19995:0] sys.c:744 MXM WARN
Conflicting CPU frequencies detected, using: 2101.00
[1516836664.551337] [pac:19994:0] sys.c:744 MXM WARN
Conflicting CPU frequencies detected, using: 2101.00
[1516836664.615265] [pac:19994:0] proto_ep.c:179 MXM WARN tl dc is
requested but not supported
[1516836664.615341] [pac:19995:0] proto_ep.c:179 MXM WARN tl dc is
requested but not supported
# OSU MPI Multiple Bandwidth / Message Rate Test v5.4.0
# [ pairs: 1 ] [ window size: 64 ]
# Size MB/s Messages/s
1 1.57 1568238.92
2 3.47 1734415.30
4 10.98 2744177.63
8 22.03 2754031.56
16 43.63 2726894.11
32 87.33 2729112.00
64 167.59 2618627.02
128 317.88 2483443.94
256 615.49 2404258.45
512 991.41 1936344.63
1024 2020.12 1972774.72
2048 3648.01 1781257.17
4096 5575.05 1361096.52
8192 7640.54 932682.87
16384 8120.92 495661.61
32768 7306.94 222990.08
65536 6115.58 93316.27
131072 7068.91 53931.51
262144 6255.16 23861.54
524288 4799.04 9153.45
1048576 4149.86 3957.62
2097152 4241.41 2022.46
4194304 4421.88 1054.26
$ mpirun -n 2 ./osu_bibw
[1516836740.325164] [pac:20357:0] sys.c:744 MXM WARN
Conflicting CPU frequencies detected, using: 2101.00
[1516836740.326007] [pac:20356:0] sys.c:744 MXM WARN
Conflicting CPU frequencies detected, using: 2101.00
[1516836740.375677] [pac:20357:0] proto_ep.c:179 MXM WARN tl dc is
requested but not supported
[1516836740.375846] [pac:20356:0] proto_ep.c:179 MXM WARN tl dc is
requested but not supported
# OSU MPI Bi-Directional Bandwidth Test v5.4.0
# Size Bandwidth (MB/s)
1 1.87
2 6.12
4 13.12
8 26.15
16 52.54
32 103.86
64 192.62
128 375.00
256 725.77
512 1356.62
1024 2454.05
2048 3824.12
4096 5817.68
8192 7810.70
16384 8115.97
32768 7596.26
65536 5697.60
131072 6471.82
262144 5800.20
524288 6305.01
1048576 7838.19
2097152 4161.10
4194304 4180.82
$ mpirun -n 2 ./osu_put_bw
[1516837452.743091] [pac:23619:0] sys.c:744 MXM WARN
Conflicting CPU frequencies detected, using: 2101.00
[1516837452.743084] [pac:23620:0] sys.c:744 MXM WARN
Conflicting CPU frequencies detected, using: 2101.00
[1516837454.525230] [pac:23620:0] proto_ep.c:179 MXM WARN tl dc is
requested but not supported
[1516837454.846800] [pac:23619:0] proto_ep.c:179 MXM WARN tl dc is
requested but not supported
# OSU MPI_Put Bandwidth Test v5.4.0
# Window creation: MPI_Win_allocate
# Synchronization: MPI_Win_flush
# Size Bandwidth (MB/s)
1 23.88
2 46.52
4 91.77
8 170.57
16 297.64
32 564.01
64 1128.03
128 2150.17
256 4035.20
512 5799.11
1024 7603.81
2048 10214.71
4096 10141.23
8192 7411.61
16384 10592.60
32768 10640.01
65536 10043.78
131072 4290.18
262144 4808.87
524288 4469.42
1048576 3548.38
2097152 3009.57
4194304 2755.88
$ mpirun -n 2 ./osu_latency_mt
[1516836724.398940] [pac:20275:0] sys.c:744 MXM WARN
Conflicting CPU frequencies detected, using: 2101.00
[1516836724.398949] [pac:20276:0] sys.c:744 MXM WARN
Conflicting CPU frequencies detected, using: 2101.00
[1516836724.459278] [pac:20276:0] proto_ep.c:179 MXM WARN tl dc is
requested but not supported
[1516836724.459351] [pac:20275:0] proto_ep.c:179 MXM WARN tl dc is
requested but not supported
# OSU MPI Multi-threaded Latency Test v5.4.0
# Size Latency (us)
0 4.95
1 5.56
2 5.55
4 5.56
8 5.52
16 5.62
32 5.97
64 5.88
128 6.35
256 6.34
512 6.47
1024 6.57
2048 5.12
4096 5.36
8192 10.59
16384 14.14
32768 23.21
65536 21.84
131072 32.38
262144 56.23
524288 118.70
1048576 264.23
2097152 520.68
4194304 1015.48