[mvapich-discuss] Re: Interleaved Isend/Irecv with CUDA vs. asynchronous

sreeram potluri potluri at cse.ohio-state.edu
Wed Sep 26 09:58:11 EDT 2012


Jens,

Sorry about the delayed response. Good to know that it works for you. Do
let us know if you encounter any further issues.

Sreeram Potluri

On Wed, Sep 26, 2012 at 4:04 AM, Jens Glaser <jglaser at umn.edu> wrote:

> Replying to my own email: the error message was caused by the compute
> mode of the CUDA device being set to exclusive_thread. Setting it to
> exclusive_process resolved the error.
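>
> For anyone who wants to check this from code: below is a minimal sketch
> (not taken from my application; device 0 is assumed) that queries the
> compute mode via the CUDA runtime. The mode itself is changed
> administratively, e.g. with "nvidia-smi -c EXCLUSIVE_PROCESS".
>
>     /* sketch: report the compute mode of device 0 */
>     #include <stdio.h>
>     #include <cuda_runtime.h>
>
>     int main(void)
>     {
>         cudaDeviceProp prop;
>         cudaGetDeviceProperties(&prop, 0);
>
>         if (prop.computeMode == cudaComputeModeExclusive)   /* exclusive_thread */
>             printf("EXCLUSIVE_THREAD: switch to EXCLUSIVE_PROCESS\n");
>         else if (prop.computeMode == cudaComputeModeExclusiveProcess)
>             printf("EXCLUSIVE_PROCESS\n");
>         else
>             printf("compute mode %d\n", prop.computeMode);
>         return 0;
>     }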
>
> Jens
>
> On Sep 25, 2012, at 9:09 PM, Jens Glaser wrote:
>
> > Hi all,
> >
> > I am using MVAPICH2 1.9a. In my MPI CUDA application, I have pairs of
> > non-blocking Isend/Irecv calls. When visualizing the profiling output
> > (using VampirTrace), it appears that during the call to MPI_Waitall()
> > the send/recv operations still seem to be interleaved, e.g. process 0
> > waits to receive data from process 1 before it fills its send buffer
> > using a cudaMemcpyAsync.
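> >
> > For context, here is a minimal sketch of the kind of exchange I mean
> > (buffer and function names are placeholders, not my actual code); the
> > device pointers are passed directly to MPI, which MVAPICH2 stages
> > internally when run with MV2_USE_CUDA=1:
> >
> >     #include <mpi.h>
> >
> >     /* d_send/d_recv are device (GPU) buffers of 'count' bytes */
> >     static void exchange(void *d_send, void *d_recv, int count, int peer)
> >     {
> >         MPI_Request reqs[2];
> >
> >         MPI_Irecv(d_recv, count, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
> >         MPI_Isend(d_send, count, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
> >
> >         /* the internal device-to-host staging and the transfers are
> >          * driven from inside MPI_Waitall, which is where the profile
> >          * shows the send and receive being serialized */
> >         MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
> >     }
> >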
> > I tried to set MPICH_ASYNC_PROGRESS=1 and MV2_ENABLE_AFFINITY=0, but
> > then I am getting the following error:
> >
> > [2] Abort: cudaMemcpyAsync from device to host failed
> > at line 2244 in file ch3_smp_progress.c
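> >
> > (A side note in case it is relevant: the asynchronous progress thread
> > in MPICH-derived libraries generally requires MPI_THREAD_MULTIPLE; a
> > minimal initialization sketch, not my actual code, would look like
> > this.)
> >
> >     #include <stdio.h>
> >     #include <mpi.h>
> >
> >     int main(int argc, char **argv)
> >     {
> >         int provided;
> >         /* MPICH_ASYNC_PROGRESS=1 needs full thread support */
> >         MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
> >         if (provided < MPI_THREAD_MULTIPLE)
> >             fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n",
> >                     provided);
> >         /* ... rest of the application ... */
> >         MPI_Finalize();
> >         return 0;
> >     }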
> >
> > Is asynchronous Isend/Irecv not completely supported by MVAPICH2?
> >
> > Jens
>
>