[mvapich-discuss] Question about collective communication
optimization for shared memory
Shigang Li
shigangli.cs at gmail.com
Sun Oct 14 23:39:27 EDT 2012
Dear Sir or Madam,
I'm running an application on SMP clusters and want to get good performance
for collective communications by utilizing the shared-memory feature. I
browsed the MVAPICH2 1.8 manual, and it has the following statement:
"In MVAPICH2, support for shared memory based collectives has been enabled
for MPI applications running over OFA-IB-CH3, OFA-iWARP-CH3, uDAPL-CH3 and
PSM-CH3 interfaces. Currently, this support is available for the following
collective operations:

• MPI_Allreduce
• MPI_Reduce
• MPI_Barrier
• MPI_Bcast"
I want to know: is the optimization for collective communication built on
top of point-to-point communication, or is it a separate part? Could you
give me some details about the optimization for shared-memory collectives?
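For reference, here is how I have been experimenting so far. This is only a
sketch based on the runtime parameter names I found in the user guide
(MV2_USE_SHMEM_COLL and the per-operation variants are assumed from that
documentation), toggling the shared-memory path on and off to compare it
against the point-to-point based algorithms:

```shell
# Launch with shared-memory collectives enabled (reportedly the default):
mpirun_rsh -np 16 -hostfile hosts MV2_USE_SHMEM_COLL=1 ./my_app

# Launch with shared-memory collectives disabled, so the collectives fall
# back to the point-to-point based implementations:
mpirun_rsh -np 16 -hostfile hosts MV2_USE_SHMEM_COLL=0 ./my_app

# The user guide also documents per-operation controls, e.g.:
#   MV2_USE_SHMEM_ALLREDUCE=0
#   MV2_USE_SHMEM_BARRIER=0
#   MV2_USE_SHMEM_BCAST=0
#   MV2_USE_SHMEM_REDUCE=0
```

Timing the same application under both settings shows a difference, but I
would like to understand what the shared-memory path actually does
internally.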
Best Regards,
Shigang