[mvapich-discuss] regarding hybrid MPI/pthread and RDMA

Krishna Kandalla kandalla at cse.ohio-state.edu
Tue Mar 27 13:10:58 EDT 2012


Hi Sandeep,
       Please find my responses below:

On Mon, Mar 26, 2012 at 4:52 PM, Sandeep Gupta <gupta.sandeep at gmail.com> wrote:

> Hi Krishna,
>  I am using mvapich2/1.8a. I believe that on our machine MPI_THREAD_MULTIPLE
> is not enabled. I guess we would have to recompile to enable multi-threading
> support.
>

     We have verified that MPI_Init_thread(.., MPI_THREAD_MULTIPLE) works
across various systems. We only require affinity to be disabled when this
option is being used. You do not have to re-configure the library with any
other multi-threading related config options.  The default build
(./configure --prefix=.. ; make ; make install) should suffice.



> I am a little confused about RDMA. If it's not too much trouble, could
> you please clarify. Consider two threads A and B (in different MPI
> processes). Thread A posts an asynchronous send (MPI_Isend) while
> thread B receives the message using MPI_Irecv. Is RDMA invoked in this
> scenario? This may be a trivial question, but what are the benefits (in
> terms of number of copies) and trade-offs of using RDMA in mvapich2?
>

         Yes. MVAPICH2 will automatically take care of this, while also
ensuring thread-safety. RDMA lets large messages move directly between the
sender's and receiver's buffers without intermediate copies, so it improves
communication performance.

Thanks,
Krishna


>
> You have already answered the important ones. These are just out of
> curiosity and helping me understand and design my algorithm.
>
>
> Thanks
> Sandeep
>
>
> On Mon, Mar 26, 2012 at 12:48 PM, Krishna Kandalla <
> kandalla at cse.ohio-state.edu> wrote:
>
>> Hi Sandeep,
>>         If you are not issuing MPI operations from inside the OpenMP
>> regions, the thread level should not matter. However, if the OpenMP threads
>> are calling MPI functions, you can request MPI_THREAD_MULTIPLE
>> through the MPI_Init_thread call. You do not have to use any config-time
>> options to enable multi-threading support.
>>         Could you please let us know more about the MVAPICH2 version
>> you are currently using?  We generally recommend that users disable CPU
>> affinity for MPI + OpenMP applications (
>> http://mvapich/support/user_guide_mvapich2-1.8rc1.html#x1-1040009.1.3).
>> However, if you can upgrade to our latest MVAPICH2 1.8rc1 release, it
>> improves CPU affinity support by introducing different binding levels
>> that are specifically geared toward hybrid cases (
>> http://mvapich/support/user_guide_mvapich2-1.8rc1.html#x1-530006.3.1).
>>
>> > And lastly, with respect to mvapich, how do I go about performing
>> > RDMA-style communication? Is it the MPI_Put and MPI_Get set of
>> > collectives?
>>
>>          MVAPICH2 already uses RDMA-based communication internally for
>> regular point-to-point operations, so you do not need to call MPI_Put or
>> MPI_Get (which are one-sided RMA operations, not collectives) to benefit
>> from RDMA. Please let us know if you have any other questions.
>>
>> Thanks,
>> Krishna
>> (http://www.cse.ohio-state.edu/~kandalla)
>>
>>
>>
>>
>> On Mon, Mar 26, 2012 at 1:48 PM, Sandeep Gupta <gupta.sandeep at gmail.com>
>> wrote:
>> > Hi,
>> >  I am exploring the hybrid MPI/pthread programming model and had a
>> > couple of questions/clarifications. I wanted to know what options
>> > exist with mvapich2 for programming in the hybrid model. Currently I
>> > am using MPI with MPI_THREAD_MULTIPLE.
>> > My first question: do I have to check for the support level
>> > MPI_THREAD_MULTIPLE? If the library does not provide it, does that
>> > mean the program won't work in hybrid mode, irrespective of the
>> > mvapich version?
>> > My second question: are there any alternative ways of achieving the
>> > hybrid computation model?
>> > And lastly, with respect to mvapich, how do I go about performing
>> > RDMA-style communication? Is it the MPI_Put and MPI_Get set of
>> > collectives?
>> >
>> > Thanks for taking a look.
>> > Best,
>> > Sandeep
>> >
>> > _______________________________________________
>> > mvapich-discuss mailing list
>> > mvapich-discuss at cse.ohio-state.edu
>> > http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>> >
>>
>>
>