[mvapich-discuss] use of parallel ib and ethernet interfaces

Michael E. Thomadakis miket at tamu.edu
Tue Feb 1 13:32:00 EST 2011


Thanks for the quick reply.

Michael

On 02/01/11 12:24, Sayantan Sur wrote:
> Hi Michael,
>
> On Tue, Feb 1, 2011 at 12:32 PM, Michael E. Thomadakis<miket at tamu.edu>  wrote:
>> On 02/01/11 10:53, Sayantan Sur wrote:
>>> Hi David,
>>>
>>> On Tue, Feb 1, 2011 at 10:35 AM, David Minor<David.Minor at orbotech.com>
>>>   wrote:
>>>> Hi,
>>>>
>>>> I have a cluster with parallel 1G ethernet and IB interfaces.  How can I
>>>> configure/run mvapich2 so that I can switch between them for testing
>>>> purposes?  Also, how do you configure/run on a cluster with mixed eth/ib
>>>> nodes?
>>>>
>>>> I'm a newbie to mvapich2; I've been using Intel MPI, which does the
>>>> interface selection automatically or lets you specify which interface
>>>> to use with an environment variable. I'm still using mpdboot and
>>>> mpiexec.
>>>>
>>>> I already have mvapich2 up and running over IB only.
>>>>
>>> Currently, we do not support multiple network interfaces within the
>>> same binary. Please refer to the user guide to configure your
>>> installation to use TCP/IP. You will also need to compile the
>>> application binary separately (i.e. have two binaries - one for 1G
>>> ethernet and another for IB).
>>>
>>>
>>> http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.6rc2.html#x1-150004.9
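>>>
>>> As a rough sketch of what the two builds could look like (the install
>>> prefixes here are placeholders, and the TCP/IP device flag is the
>>> ch3:sock channel described in section 4.9 of the guide):
>>>
>>>     # IB build -- OFA-IB-CH3 is the default device
>>>     ./configure --prefix=/opt/mvapich2-ib
>>>     make && make install
>>>
>>>     # TCP/IP build, per section 4.9
>>>     ./configure --prefix=/opt/mvapich2-tcp --with-device=ch3:sock
>>>     make && make install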
>>>
>>> Thanks.
>>>
>> Can the MVAPICH2 stack be built to support ALL of the available
>> networking I/Fs, so that we can build a different MPI binary for each
>> I/F?
>>
>> If this is possible, which options should I enable at configure time
>> to do that? Also, when I build an MPI binary with MVAPICH2, how do I
>> select which I/F the underlying MPI transport uses?
>>
>>
> I think you can currently achieve this by writing a little wrapper
> script around the MPI build process. For each interface that you have
> on the machine, you can build a different version of MVAPICH2. Please
> see sections 4.4 - 4.10 of the user guide for how to configure
> MVAPICH2 for each supported configuration.
>
> http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.6rc2.html#x1-50004
>
> Then, you can keep these MPI builds in different directories and use
> the right mpicc for the target interface.
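>
> Something along these lines, with the install prefixes above purely
> illustrative:
>
>     # compile one binary per interface with the matching mpicc
>     /opt/mvapich2-ib/bin/mpicc  -o app.ib  app.c
>     /opt/mvapich2-tcp/bin/mpicc -o app.tcp app.c
>
>     # launch each binary with the mpiexec from the same install
>     /opt/mvapich2-ib/bin/mpiexec  -n 4 ./app.ib
>     /opt/mvapich2-tcp/bin/mpiexec -n 4 ./app.tcp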
>
> In the future, we may look into providing support for multiple
> interfaces simultaneously (through the same binary). However, the
> MPICH2 framework does not currently support this.
>
> Thanks.
>
>>>> Regards,
>>>>
>>>> David
>>>>
>> thanks ...
>>
>> Michael
>>
>>>
>>
>> --
>> % -------------------------------------------------------------------- \
>> % Michael E. Thomadakis, Ph.D.  Senior Lead Supercomputer Engineer/Res \
>> % E-mail: miket AT tamu DOT edu                   Texas A&M University \
>> % web:    http://alphamike.tamu.edu              Supercomputing Center \
>> % Voice:  979-862-3931                    Teague Research Center, 104B \
>> % FAX:    979-847-8643                  College Station, TX 77843, USA \
>> % -------------------------------------------------------------------- \
>>


