[mvapich-discuss] MVAPICH2 Performance Evaluation

Amit H Kumar AHKumar at odu.edu
Tue Oct 10 23:09:19 EDT 2006


Hi MVAPICH2 team,

Finally able to run some benchmarks!

Now I would like to understand these results. The results are MPI-level
performance numbers.

I am running through a PCI-X interface to a Mellanox MT23108 HCA [4X
ports, only one active, the other unused].

I am not sure if this question belongs to you, but if you could comment on
it, that would be of great help.
Just curious: the switch that we have is an InfinIO 3000 (link speed of
10 Gb/s = 1.16 GBytes/s).

How do I interpret these results against the specification of peak system
bandwidth, 640 Gb/s = approx. 74 GBytes/s? I am nowhere close to that.
(Specification for the switch:
http://www.xsi.com.au/products/pdf/3000.pdf#search=%22InfinIO%203000%22)
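
For reference, here is the back-of-the-envelope arithmetic I am using (a
quick Python sketch; treating the 640 Gb/s figure as an aggregate over all
switch ports in both directions is my own assumption, not something the
datasheet states explicitly):

# Rough unit conversions for the numbers quoted above.
link_gbps = 10.0                              # 4X link signaling rate, per port
link_gib_s = link_gbps * 1e9 / 8 / 2**30      # ~1.16 GiB/s, the 1.16 GBytes/s above

switch_gbps = 640.0                           # "peak system bandwidth" in the datasheet
# Assumption: aggregate across all ports and both directions, so no single
# host on a single port would ever see this number.
switch_gib_s = switch_gbps * 1e9 / 8 / 2**30  # ~74.5 GiB/s

print("per port :", round(link_gib_s, 2), "GiB/s")
print("aggregate:", round(switch_gib_s, 1), "GiB/s")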

I also happened to read your paper "Performance Evaluation of InfiniBand
with PCI Express". Though the speed and bandwidth specification of the
switch you use is even higher, your throughput is only a little over twice
the bandwidth achieved with PCI-X on the MT23108, while using only one
port.

Does this mean that, though our switches are capable of delivering higher
performance and bandwidth, the local I/O interface is not capable enough? In
that case, how are these vendors able to put such numbers in their
specifications? Is the figure purely theoretical?
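
To sanity-check the local-I/O-bottleneck theory before sending this, I
compared the per-host peak rates I am aware of (another quick sketch; it
assumes the HCA sits in a 64-bit/133 MHz PCI-X slot and drives a 4X SDR link
with 8b/10b encoding, which I have not verified on this machine):

# Per-host peak rates in MB/s (10^6 bytes/s), assumptions in the comments.
pcix_peak  = 133.3e6 * 8 / 1e6        # 64-bit @ 133 MHz PCI-X  -> ~1066 MB/s
ib_4x_data = 10e9 * 0.8 / 8 / 1e6     # 4X SDR after 8b/10b     -> ~1000 MB/s payload
observed   = 655.4                    # best osu_bw result below (131072-byte messages)

limit = min(pcix_peak, ib_4x_data)
print("PCI-X peak      :", round(pcix_peak), "MB/s")
print("IB 4X data rate :", round(ib_4x_data), "MB/s")
print("observed osu_bw :", observed, "MB/s,",
      round(100 * observed / limit), "% of the tighter limit")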

# ../../bin/mpiexec -n 31 ./osu_bw
# OSU MPI Bandwidth Test (Version 2.2)
# Size          Bandwidth (MB/s)
1               0.188841
2               0.381317
4               0.763903
8               1.521096
16              3.044716
32              6.116575
64              12.843706
128             25.303399
256             59.826298
512             153.530483
1024            244.118923
2048            323.132969
4096            385.670410
8192            422.220031
16384           577.216908
32768           568.217557
65536           642.628717
131072          655.408724
262144          653.600712
524288          623.277755
1048576         613.431301
2097152         612.731342
4194304         613.440342


# ../../bin/mpiexec -n 31 ./osu_bibw
# OSU MPI Bidirectional Bandwidth Test (Version 2.2)
# Size          Bi-Bandwidth (MB/s)
1               0.487657
2               0.979861
4               1.942791
8               3.879511
16              7.716088
32              15.603878
64              30.833054
128             60.809929
256             115.833659
512             222.330171
1024            341.777419
2048            421.365689
4096            517.335680
8192            586.964209
16384           699.259729
32768           741.447900
65536           761.773427
131072          772.179365
262144          776.115725
524288          806.153594
1048576         810.225630
2097152         805.788961
4194304         812.346079


# ../../bin/mpiexec -n 31 ./osu_latency
# OSU MPI Latency Test (Version 2.2)
# Size          Latency (us)
0               5.12
1               5.12
2               5.47
4               5.17
8               5.19
16              5.20
32              5.31
64              5.48
128             5.95
256             6.79
512             9.27
1024            10.75
2048            13.49
4096            20.05
8192            33.73
16384           56.19
32768           90.65
65536           159.31
131072          294.61
262144          567.17
524288          1114.59
1048576         2226.43
2097152         4423.12
4194304         8784.33


Thank you for your feedback,
-Amit



