[mvapich-discuss] OSU benchmarks interpretation

Nikita Andreev nik at kemsu.ru
Wed Mar 2 02:03:42 EST 2011


I'm benchmarking bandwidth between two compute nodes equipped with
dual-port Mellanox ConnectX DDR InfiniBand HCAs. I run the benchmarks
under Open MPI, which supports dual-rail configurations.
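
For reference, I launch the tests with something like the command below
(the hostfile and the device/port names mlx4_0:1,mlx4_0:2 are just what
my setup uses; btl_openib_if_include is the Open MPI parameter that
selects which HCA ports the openib BTL stripes across):

mpirun -np 2 --hostfile hosts \
    --mca btl openib,self,sm \
    --mca btl_openib_if_include mlx4_0:1,mlx4_0:2 \
    ./osu_bw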

Results for message size 4194304:

osu_bw         4917.75 MB/s
osu_bibw       5007.49 MB/s
osu_put_bw     3489.35 MB/s
osu_put_bibw   3876.96 MB/s
osu_get_bw     3482.18 MB/s

I have several questions:

1. As far as I understand, a DDR IB link signals at 20 Gb/s, which
after 8b/10b encoding gives a 16 Gb/s data rate. Hence dual-rail has a
32 Gb/s, i.e. 4 GB/s, theoretical peak throughput. But osu_bw shows a
data rate higher than that theoretical peak. How is that possible?
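
Spelling out my arithmetic (if I read the benchmarks' definition of
MB/s correctly, 1 MB/s = 10^6 bytes/s, which I use here as well):

    per rail:  20 Gb/s * 8/10 = 16 Gb/s = 2 GB/s = 2000 MB/s
    dual rail: 2 * 2000 MB/s  = 4000 MB/s

yet osu_bw reports 4917.75 MB/s, about 23% above that peak.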

2. osu_bw is a unidirectional test and osu_bibw is bidirectional, so
I would expect the bidirectional test to show roughly twice the
throughput. Instead it is almost the same as the unidirectional
number. Why?
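
To make sure we are talking about the same thing, here is a minimal
sketch of the pattern I understand osu_bibw to use (simplified: fixed
message size and window, no warmup iterations, run with exactly two
ranks; this is not the actual benchmark source):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SIZE   (4*1024*1024)   /* 4 MB messages, as in my results  */
#define WINDOW 64              /* messages in flight per direction */

int main(int argc, char **argv)
{
    int rank, peer, i;
    char *sbuf, *rbuf;
    MPI_Request sreq[WINDOW], rreq[WINDOW];
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;
    sbuf = malloc(SIZE);
    rbuf = malloc(SIZE);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    /* both ranks send and receive at the same time */
    for (i = 0; i < WINDOW; i++) {
        MPI_Irecv(rbuf, SIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD, &rreq[i]);
        MPI_Isend(sbuf, SIZE, MPI_CHAR, peer, 0, MPI_COMM_WORLD, &sreq[i]);
    }
    MPI_Waitall(WINDOW, rreq, MPI_STATUSES_IGNORE);
    MPI_Waitall(WINDOW, sreq, MPI_STATUSES_IGNORE);
    t1 = MPI_Wtime();

    if (rank == 0)  /* aggregate bandwidth over both directions */
        printf("%.2f MB/s\n", 2.0 * SIZE * WINDOW / (t1 - t0) / 1e6);

    free(sbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}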

3. RDMA put/get operations do not involve the target node's CPU in
moving the data, so I would expect them to be faster than ordinary
send/recv. Why are they slower instead?
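
Again, a minimal sketch of what I mean by put here (MPI-2 one-sided
with fence synchronization, one put per fence epoch; this is not the
actual osu_put_bw loop, which I assume batches more operations):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SIZE (4*1024*1024)
#define LOOP 100

int main(int argc, char **argv)
{
    int rank, i;
    char *buf;
    MPI_Win win;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(SIZE);
    /* both ranks expose a window; rank 0 writes into rank 1's memory */
    MPI_Win_create(buf, SIZE, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    t0 = MPI_Wtime();
    for (i = 0; i < LOOP; i++) {
        if (rank == 0)
            MPI_Put(buf, SIZE, MPI_CHAR, 1, 0, SIZE, MPI_CHAR, win);
        MPI_Win_fence(0, win);   /* completes this iteration's put */
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%.2f MB/s\n", (double)SIZE * LOOP / (t1 - t0) / 1e6);

    MPI_Win_free(&win);
    free(buf);
    MPI_Finalize();
    return 0;
}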

Regards,

Nikita