[mvapich-discuss] Announcing the release of MVAPICH2 0.9.5 with SRQ, integrated multi-rail and TotalView support

Pavel Shamis (Pasha) pasha at mellanox.co.il
Thu Sep 7 03:57:31 EDT 2006


Sayantan Sur wrote:
> Pasha,
> 
> Pavel Shamis (Pasha) wrote:
> 
>> Your measurements are absolutely correct: the difference in IB send/recv 
>> latency is much bigger. But in most cases in MVAPICH, small messages 
>> will be sent via the fast_path, which is not affected by SRQ performance, 
>> isn't it?
> 
> That is correct: small messages will be sent over RDMA, but only as long 
> as RDMA buffers are available. I don't expect to see any impact in the 
> latency numbers (since it's ping-pong), but the OSU bandwidth numbers 
> will be adversely affected. This is because the OSU bandwidth test 
> window is larger than the number of available RDMA buffers. Increasing 
> the number of RDMA buffers per connection is not an option, since it will 
> hurt scalability. This is the reason we do not recommend the use of SRQ 
> on PCI-X HCAs.
OK, I understand your point. BTW, with the latest MVAPICH changes (I mean 
patches 08-19) you will not see a difference in the benchmarks.
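
To make the fast-path vs. SRQ distinction above concrete, here is a minimal C
sketch of the eager-send decision being discussed. It is not MVAPICH source;
the buffer count, threshold, and helper names (RDMA_FP_BUFFERS, EAGER_THRESHOLD,
rdma_write_to_peer_buffer, post_send_over_srq_channel) are hypothetical
stand-ins for the per-connection RDMA credits and the verbs-level operations.

#include <stdbool.h>
#include <stddef.h>

#define RDMA_FP_BUFFERS 32    /* hypothetical per-connection fast-path pool size */
#define EAGER_THRESHOLD 8192  /* hypothetical small-message cutoff in bytes */

struct connection {
    int fp_credits;           /* free RDMA fast-path slots left on the peer */
};

/* Stand-ins for the verbs-level operations; real code would post an RDMA
 * write or a send work request here. */
static bool rdma_write_to_peer_buffer(struct connection *c, const void *buf, size_t len)
{
    (void)c; (void)buf; (void)len;
    return true;
}

static bool post_send_over_srq_channel(struct connection *c, const void *buf, size_t len)
{
    (void)c; (void)buf; (void)len;
    return true;
}

static bool eager_send(struct connection *c, const void *buf, size_t len)
{
    if (len <= EAGER_THRESHOLD && c->fp_credits > 0) {
        /* Fast path: RDMA-write into a pre-registered slot on the peer.
         * Ping-pong latency is unaffected by SRQ because no receive
         * buffer from the shared queue is consumed. */
        c->fp_credits--;
        return rdma_write_to_peer_buffer(c, buf, len);
    }
    /* Fallback: a normal send, matched on the receiver by a buffer taken
     * from the shared receive queue. A bandwidth test whose window exceeds
     * RDMA_FP_BUFFERS spends most of its time on this path, which is why
     * SRQ overhead shows up in bandwidth but not in ping-pong latency. */
    return post_send_over_srq_channel(c, buf, len);
}

int main(void)
{
    struct connection c = { .fp_credits = RDMA_FP_BUFFERS };
    char msg[64] = "hello";
    return eager_send(&c, msg, sizeof msg) ? 0 : 1;
}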

