[mvapich-discuss] mvapich2/2.2 optimization for Intel omni-path?

Jung chulwoo at quark.phy.bnl.gov
Thu Nov 10 16:26:07 EST 2016


Thanks for the reply. What we are ultimately after is inter-node 
bandwidth, so, for example, it is fine if we have to run multiple 
asynchronous MPI communications on different threads within a single 
MPI process. So far we have not seen a bandwidth increase from 
multithreading.
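For concreteness, the pattern we are trying looks roughly like the 
sketch below. This is a minimal illustration only; the thread count, 
message size, and two-rank one-per-node layout are placeholders of 
mine, and it assumes the library provides MPI_THREAD_MULTIPLE (built 
with mpicc -fopenmp):

    /* Sketch: concurrent nonblocking transfers, one per thread, from
     * a single MPI process. Run with exactly 2 ranks, one per node.
     * Thread count and message size are illustrative placeholders. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdlib.h>

    #define NTHREADS  4               /* placeholder thread count */
    #define MSG_BYTES (4*1024*1024)   /* placeholder message size */

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            MPI_Abort(MPI_COMM_WORLD, 1);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int peer = 1 - rank;          /* two ranks: 0 <-> 1 */

    #pragma omp parallel num_threads(NTHREADS)
        {
            /* Contents are irrelevant for a bandwidth probe. */
            char *buf = malloc(MSG_BYTES);
            MPI_Request req;
            int tag = omp_get_thread_num();  /* distinct tag per thread */

            if (rank == 0)
                MPI_Isend(buf, MSG_BYTES, MPI_CHAR, peer, tag,
                          MPI_COMM_WORLD, &req);
            else
                MPI_Irecv(buf, MSG_BYTES, MPI_CHAR, peer, tag,
                          MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }

Each thread drives its own nonblocking transfer with a distinct tag, 
yet the aggregate bandwidth we measure stays close to the 
single-thread number.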

We can run multiple MPI processes per node if that's necessary. 
However, breaking up our application in this manner introduces 
intra-node communication between the MPI processes on a node, and the 
intra-node bandwidth we observe is only about the same as the 
inter-node bandwidth. This is preventing us from saturating the 
inter-node bandwidth available from dual-rail Omni-Path.
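In case it helps with diagnosis, the kind of dual-rail run we are 
attempting looks roughly like the command below. PSM2_MULTIRAIL is, as 
I understand it, the PSM2-level switch for striping traffic across 
both rails, and osu_bw is from the bundled OSU micro-benchmarks; 
whether MVAPICH2's PSM channel honors this variable is exactly the 
part I am unsure about (node0/node1 are placeholders for our two KNL 
hosts):

    mpirun_rsh -np 2 node0 node1 PSM2_MULTIRAIL=1 ./osu_bw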

It is quite possible that I am simply not building MVAPICH2 correctly. 
Any help would be much appreciated.
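For reference, my understanding is that Omni-Path should use the PSM 
device channel, so our build looks roughly like this (prefix path and 
make flags are illustrative; please correct me if a different channel 
or extra configure options are recommended for dual-rail Omni-Path):

    ./configure --prefix=$HOME/sw/mvapich2-2.2 --with-device=ch3:psm
    make -j 8 && make install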

Chulwoo

  On Wed, 9 Nov 2016, Hari Subramoni wrote:

> Hello Dr. Jung,
> Apologies about the delay. Unfortunately, we do not have a dual-rail
> Omni-Path based cluster locally. We are in touch with Intel folks regarding
> this and will get back to you shortly. In the meantime, could you please let
> us know whether it is the intra-node bandwidth or the inter-node bandwidth
> you are looking to saturate?
> 
> Regards,
> Hari.
> 
> On Tue, Nov 8, 2016 at 2:03 PM, Jung <chulwoo at quark.phy.bnl.gov> wrote:
>       We recently acquired a KNL system with dual-rail Omni-Path and
>       are trying to find a way to saturate the network bandwidth. The
>       usual, although somewhat suboptimal, strategy of running
>       multiple MPI ranks per node doesn't seem to work, as the
>       intra-node bandwidth is only about the same as the inter-node
>       bandwidth for MPI_Isend/MPI_Irecv.
>
>       The MVAPICH2 2.2 feature list includes optimized inter- and
>       intra-node communication for KNL. Can someone comment on the
>       performance of Omni-Path with KNL and/or offer pointers for
>       optimizing it?
>
>       Best,
>
>       Chulwoo Jung
>       Physics Department
>       Brookhaven National Laboratory
>       U.S.A.
>       chulwoo at bnl.gov
>       1-631-344-5254

Chulwoo Jung
Physics Department
Brookhaven National Laboratory
U.S.A.
chulwoo at bnl.gov
1-631-344-5254

