[mvapich-discuss] Questions about collectives on shared memory

Shigang Li shigangli.cs at gmail.com
Wed Oct 17 15:20:45 EDT 2012


Hi all,

I'm a graduate student, and I have recently been running some micro-benchmarks
with the MVAPICH2 library on a Xeon X5650 cluster. From the manual, I know:

In MVAPICH2, support for shared memory based collectives has been enabled
for MPI applications running over the OFA-IB-CH3, OFA-iWARP-CH3, uDAPL-CH3
and PSM-CH3 interfaces. Currently, this support is available for the
following collective operations:

• MPI_Allreduce
• MPI_Reduce
• MPI_Barrier
• MPI_Bcast

My question is whether the collectives above are implemented and optimized
on top of point-to-point communication, or whether they use shared memory
directly through a separate implementation.
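For context, the kind of micro-benchmark I am running looks roughly like
the sketch below: it just times MPI_Allreduce on MPI_COMM_WORLD. The message
size and iteration counts are arbitrary choices of mine, not anything taken
from the MVAPICH2 manual.

/* Minimal sketch of my micro-benchmark: time repeated MPI_Allreduce
 * calls and report the average latency on rank 0. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT  4096   /* doubles per MPI_Allreduce */
#define WARMUP 100    /* untimed iterations, so setup cost is excluded */
#define ITERS  1000   /* timed iterations */

int main(int argc, char **argv)
{
    int rank, i;
    double *in, *out, t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    in  = malloc(COUNT * sizeof(double));
    out = malloc(COUNT * sizeof(double));
    for (i = 0; i < COUNT; i++)
        in[i] = (double) i;

    /* Warm-up phase so first-use setup is not included in the timing. */
    for (i = 0; i < WARMUP; i++)
        MPI_Allreduce(in, out, COUNT, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++)
        MPI_Allreduce(in, out, COUNT, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("MPI_Allreduce: %.2f us average over %d iterations\n",
               (t1 - t0) * 1e6 / ITERS, ITERS);

    free(in);
    free(out);
    MPI_Finalize();
    return 0;
}

If I read the manual correctly, the run-time parameter MV2_USE_SHMEM_COLL
should toggle the shared-memory collective path, so comparing runs of the
same binary with MV2_USE_SHMEM_COLL=0 versus MV2_USE_SHMEM_COLL=1, e.g.

  mpirun_rsh -np 12 -hostfile hosts MV2_USE_SHMEM_COLL=1 ./allreduce_bench

would let me measure the difference. Please correct me if that parameter
does not control what I think it does.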


Best Regards,

Shigang Li.


More information about the mvapich-discuss mailing list