[mvapich-discuss] Problem with multiple GPUs' send/recv task
feiyulv at mail.ustc.edu.cn
Sun Jan 25 03:43:08 EST 2015
All
I'm trying to use two MPI processes to drive send/recv across 4 GPUs on two servers. The setup is as follows:
Server1                                     Server2
(GPU1) tid1 - MPI_Send  ----------------->  (GPU2) tid2 - MPI_Recv
(GPU2) tid2 - MPI_Recv  <-----------------  (GPU1) tid1 - MPI_Send
However, I get CUDA error 33 when running this on MVAPICH2-GDR 2.0. Can one MPI process only drive a single GPU for send/recv, or is something else causing this problem?
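For reference, here is a minimal sketch of what I believe the intended setup looks like: one MPI process per server, two pthreads per process, each thread bound to its own GPU with cudaSetDevice before touching device buffers. Ranks, tags, and the buffer size are illustrative, and this assumes a CUDA-aware MPI build (e.g. MVAPICH2-GDR with MV2_USE_CUDA=1); it is not a tested reproduction of my code.

```c
/* Sketch (assumptions noted above): one MPI process per server, two
 * threads, each driving its own GPU. Needs MPI_THREAD_MULTIPLE because
 * both threads make MPI calls concurrently. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <pthread.h>
#include <stdio.h>

#define N (1 << 20)

static int rank;  /* 0 = Server1, 1 = Server2 */

static void *worker(void *arg) {
    int tid = (int)(long)arg;   /* 0 -> tid1, 1 -> tid2 */
    cudaSetDevice(tid);         /* bind this thread to its GPU */

    float *buf;
    cudaMalloc(&buf, N * sizeof(float));

    /* Matches the diagram above: tid1 sends from GPU1 on one server,
     * tid2 receives into GPU2 on the other, in both directions. */
    if (tid == 0)
        MPI_Send(buf, N, MPI_FLOAT, 1 - rank, /*tag=*/0, MPI_COMM_WORLD);
    else
        MPI_Recv(buf, N, MPI_FLOAT, 1 - rank, /*tag=*/0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(buf);
    return NULL;
}

int main(int argc, char **argv) {
    int provided;
    /* Request full thread support; abort if the library can't provide it. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    MPI_Finalize();
    return 0;
}
```

If error 33 here is cudaErrorInvalidResourceHandle, one possible cause I can imagine is an MPI call touching a device buffer from a thread whose current CUDA device differs from the one where the buffer was allocated, but I'd appreciate confirmation of what MVAPICH2-GDR actually supports here.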
Any suggestions will be appreciated.
--feiyu