[mvapich-discuss] Is MPI_Alltoallv a GPU aware routine?

Devendar Bureddy bureddy at cse.ohio-state.edu
Thu Jul 18 22:08:58 EDT 2013


The other parameters (sendcnts, sdispls) are host arrays.
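
For reference, a minimal sketch along these lines (assuming a CUDA-aware
MVAPICH2 build, that each rank has already selected its GPU, and with error
checking omitted; the counts and buffer sizes here are illustrative):

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Count/displacement arrays are ordinary host arrays:
       here, one double exchanged with every rank. */
    int *sendcnts = malloc(size * sizeof(int));
    int *sdispls  = malloc(size * sizeof(int));
    int *recvcnts = malloc(size * sizeof(int));
    int *rdispls  = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) {
        sendcnts[i] = 1;  sdispls[i] = i;
        recvcnts[i] = 1;  rdispls[i] = i;
    }

    /* Data buffers live on the device and are passed
       directly to the collective. */
    double *s_device, *r_device;
    cudaMalloc((void **)&s_device, size * sizeof(double));
    cudaMalloc((void **)&r_device, size * sizeof(double));

    MPI_Alltoallv(s_device, sendcnts, sdispls, MPI_DOUBLE,
                  r_device, recvcnts, rdispls, MPI_DOUBLE,
                  MPI_COMM_WORLD);

    cudaFree(s_device);
    cudaFree(r_device);
    free(sendcnts); free(sdispls); free(recvcnts); free(rdispls);
    MPI_Finalize();
    return 0;
}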

-Devendar


On Thu, Jul 18, 2013 at 9:56 PM, <li.luo at siat.ac.cn> wrote:

> Thanks. When using this routine to send a device buffer, I wonder whether
> the other parameters, such as int *sendcnts and int *sdispls, should be
> host arrays (from 'malloc') or device arrays (from 'cudaMalloc')?
>
> At 2013-07-19 03:11:48, li.luo at siat.ac.cn wrote:
>
> Hi Li Luo
>
> You can use device buffers with all the collectives, including MPI_Alltoallv.
>
> -Devendar
>
>
> On Thu, Jul 18, 2013 at 1:21 AM, <li.luo at siat.ac.cn> wrote:
>
>>
>> Hi,
>>
>> We know that MVAPICH is a GPU-aware MPI implementation: device buffers can
>> be sent and received directly via
>> MPI_Send(s_device,size,…);
>> MPI_Recv(r_device,size,…);
>>
>> What about other routines such as MPI_Alltoallv? Can it handle device
>> buffers directly, and if so, should the integer arrays be device
>> pointers as well, such as int *sendcnts and int *sdispls?
>>
>>
>>
>> --
>> Li Luo
>> Shenzhen Institutes of Advanced Technology
>> Address: 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen,
>> P.R.China
>> Tel: +86-755-86392312, +86-15899753087
>> Email: li.luo at siat.ac.cn
>>
>>
>>
>>
>> _______________________________________________
>> mvapich-discuss mailing list
>> mvapich-discuss at cse.ohio-state.edu
>> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>>
>>
>
>
> --
> Devendar
>


-- 
Devendar