[mvapich-discuss] What's the equivalent option to "--mca btl" of openmpi

Hari Subramoni subramoni.1 at osu.edu
Fri Mar 14 12:13:40 EDT 2014


Hello Jianyu,

Your understanding is correct on Points #1 & #2.

If you configure the library for Shared-Memory-Nemesis, it will be enabled
by default at run time. Please note that you don't have to do a separate
build for this; the shared memory channel is actually the same as the
nemesis tcp channel.
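
For example (a sketch; the hostfile and application name are placeholders),
with a nemesis build intra-node messages always go through shared memory,
and the inter-node netmod can be selected at run time:

    # use the TCP netmod between nodes (intra-node traffic stays on shared memory)
    mpirun_rsh -np 4 -hostfile hosts MPICH_NEMESIS_NETMOD=tcp ./app

    # use the ib netmod (the default for an ib,tcp build)
    mpirun_rsh -np 4 -hostfile hosts MPICH_NEMESIS_NETMOD=ib ./app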

Regards,
Hari.


On Fri, Mar 14, 2014 at 9:24 AM, Jianyu Liu <jerry_leo at msn.com> wrote:

> Hi Hari,
>
> Thank you very much for reply.
>
> Please see if I have understood correctly
>
> 1. If multiple network modules are NOT specified, only the default
> interface detected by the configure script is built into the library.
>
>      For example, with the configure line below it can only run on
> OpenFabrics and cannot select TCP at run time:
>
>       ./configure --prefix=/opt/mvapich2 --with-ib-libpath=/usr/lib64
> --with-ib-include=/usr/include --enable-f77 --enable-fc
>
> 2.  If configured with "--with-device=ch3:nemesis:ib,tcp",
>           it enables ib, tcp, and shared memory
>           it will run the app on the OpenFabrics interface in RDMA mode by
> default if no MPICH_NEMESIS_NETMOD option is specified
>           it will run the app on the OpenFabrics interface in IP-over-IB
> mode if MPICH_NEMESIS_NETMOD=tcp is specified (which will be slower than
> RDMA mode)
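>
>      If I understand correctly, switching the netmod at run time would
> look something like this (just a sketch; the hostfile and binary are
> placeholders):
>
>      mpirun_rsh -np 4 -hostfile hosts MPICH_NEMESIS_NETMOD=tcp ./app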
>
> And a few further questions
>
> 3.  If configured with explicit selection of Shared-Memory-Nemesis,
>         is it enabled at run time by default?
>         How can it be disabled/enabled at run time?
>
> 4.  Are there equivalent options to btl_openib_ib_min_rnr_timer and
> btl_openib_ib_timeout of OpenMPI for optimizing failover timing at run
> time?
>
> Thank you for your kind help
>
> Jianyu
>
>
> ------------------------------
> From: subramoni.1 at osu.edu
> Date: Thu, 13 Mar 2014 09:28:49 -0400
>
> Subject: Re: [mvapich-discuss] What's the equivalent option to "--mca btl"
> of openmpi
> To: jerry_leo at msn.com
> CC: mvapich-discuss at cse.ohio-state.edu
>
>
> Hello Jianyu,
>
> To configure MVAPICH2 to use the TCP/IP interface, please follow the
> instructions available at the following link
>
>
> http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-2.0b.html#x1-190004.11
>
> To run applications using the TCP/IP interface, please follow the
> instructions available at the following link
>
>
> http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-2.0b.html#x1-400005.2.9
>
> If you configure the MVAPICH2 library to use the TCP/IP interface, it will
> automatically disable the use of OpenFabrics (IB/iWARP) features. You can
> still use the OpenFabrics interface in IP over IB mode (just like a high
> performance GigE adapter).
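>
> For reference, a minimal TCP/IP-only (Nemesis) build would look like this
> (a sketch; the install prefix is a placeholder):
>
>     ./configure --prefix=/opt/mvapich2 --with-device=ch3:nemesis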
>
> Please let us know if you have any further questions.
>
> Regards,
> Hari.
>
>
> On Thu, Mar 13, 2014 at 5:17 AM, Jianyu Liu <jerry_leo at msn.com> wrote:
>
> Dear Sir/Madam,
>
> I'd like to set certain run-time characteristics of MPI  communications.
>
> What are the equivalent MVAPICH2 options to these "--mca btl" options of
> OpenMPI?
>
> #  Specify to use the TCP network for MPI messages
> 1 ) mpirun  --mca btl tcp,self ...
>
> #  To explicitly disable the OpenFabrics network
> 2) mpirun --mca btl ^openib  ...
>
> #  Optimizing Failover Timing
> 3)  mpirun --mca btl_openib_ib_min_rnr_timer 25 --mca
> btl_openib_ib_timeout 20 ...
>
>
> Thanks for your time
>
> Regards
>
> Jianyu
>