[mvapich-discuss] Can displs in Scatterv/Gatherv/etc be a GPU array for CUDA-aware MPI?

Chu, Ching-Hsiang chu.368 at buckeyemail.osu.edu
Tue Jun 11 14:58:18 EDT 2019


Hi, Leo,

Currently, MVAPICH2 does not support the case you described. It should work if 'displs' is allocated in CUDA unified (managed) memory, e.g., using cudaMallocManaged, so that it can be accessed by both the CPU and the GPU. In general, only sendbuf and/or recvbuf can be GPU-resident data, i.e., allocated using cudaMalloc.
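A minimal sketch of the managed-memory approach mentioned above (the function name and parameters here are illustrative assumptions, not MVAPICH2 API):

```c
#include <cuda_runtime.h>
#include <stddef.h>

/* Allocate 'displs' in CUDA unified (managed) memory so the same
 * pointer is addressable both from GPU kernels that compute the
 * displacements and from the CPU-side MPI runtime that reads them. */
int *alloc_managed_displs(int nranks)
{
    int *displs = NULL;
    /* cudaMemAttachGlobal: accessible from any stream on any device */
    cudaMallocManaged((void **)&displs, (size_t)nranks * sizeof(int),
                      cudaMemAttachGlobal);
    return displs;  /* can then be passed as 'displs' to MPI_Scatterv/MPI_Gatherv */
}
```

Free the array with cudaFree once the collective has completed.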

One workaround for you is to explicitly copy 'displs' from GPU memory to system memory and pass the host copy to the MPI runtime.
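That workaround might look like the following sketch (function and variable names are hypothetical; only sendbuf/recvbuf stay on the GPU):

```c
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

/* d_displs holds displacements computed on the GPU (device memory).
 * Copy them to host memory first, since MVAPICH2 expects the
 * 'displs' and 'sendcounts' arrays to be CPU-accessible. */
void scatterv_with_device_displs(const float *d_sendbuf, const int *sendcounts,
                                 const int *d_displs, int nranks,
                                 float *d_recvbuf, int recvcount,
                                 int root, MPI_Comm comm)
{
    int *h_displs = malloc((size_t)nranks * sizeof(int));
    cudaMemcpy(h_displs, d_displs, (size_t)nranks * sizeof(int),
               cudaMemcpyDeviceToHost);

    /* With CUDA-aware MVAPICH2, sendbuf/recvbuf may remain GPU-resident;
     * only the metadata arrays must live in system memory. */
    MPI_Scatterv(d_sendbuf, sendcounts, h_displs, MPI_FLOAT,
                 d_recvbuf, recvcount, MPI_FLOAT, root, comm);

    free(h_displs);
}
```

The cudaMemcpy here is synchronous, so h_displs is guaranteed to be populated before MPI_Scatterv reads it.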

Please feel free to contact us if you have any further questions.
Thanks,

Ching-Hsiang Chu

________________________________
From: mvapich-discuss <mvapich-discuss-bounces at cse.ohio-state.edu> on behalf of Fang, Leo <leofang at bnl.gov>
Sent: Tuesday, June 11, 2019 11:58 AM
To: mvapich-discuss at cse.ohio-state.edu
Subject: [mvapich-discuss] Can displs in Scatterv/Gatherv/etc be a GPU array for CUDA-aware MPI?

Hello,


I understand that once MVAPICH is built against CUDA, sendbuf/recvbuf can be pointers to GPU memory. I wonder whether, for MVAPICH, the "displs" argument of the collective calls on variable data (Scatterv/Gatherv/etc.) can also live on the GPU. CUDA awareness isn't part of the MPI standard (yet), so I suppose it's worth asking, or even documenting.

Thank you.


ps. The same question was cross-posted to the Open MPI mailing list earlier. While it was answered there, MVAPICH might implement things differently. Please don’t be mad at me if you are receiving an email influx from me this morning :P


Sincerely,
Leo

---
Yao-Lung Leo Fang
Assistant Computational Scientist
Computational Science Initiative
Brookhaven National Laboratory
Bldg. 725, Room 2-169
P.O. Box 5000, Upton, NY 11973-5000
Office: (631) 344-3265
Email: leofang at bnl.gov<mailto:leofang at bnl.gov>
Website: https://leofang.github.io/

