[mvapich-discuss] GPU to GPU messages in python

Devendar Bureddy bureddy at cse.ohio-state.edu
Sun Mar 31 18:26:34 EDT 2013


Hi Brody

In MVAPICH2, GPU support is enabled with the MV2_USE_CUDA=1 run-time flag.
Did you specify this run-time flag?  If you are already running with this
flag, can you build with the "--enable-g=dbg --enable-fast=none" configure
options and run with the MV2_DEBUG_SHOW_BACKTRACE=1 flag to get the
backtrace?
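For reference, a rebuild and launch along those lines might look like the
following sketch. The configure and run-time flags are the ones mentioned
above; the install path, host names, and launcher invocation are
placeholders you would adapt to your site:

```shell
# Reconfigure MVAPICH2 with debug symbols and no optimization
# (CUDA path taken from the original build; adjust as needed)
./configure --with-cuda=/usr/local/cuda-5.0 --enable-cuda \
            --enable-g=dbg --enable-fast=none
make && make install

# Launch two ranks with CUDA support and backtraces enabled
# (node0/node1 are placeholder host names; mpirun_rsh accepts
# environment variables after the host list)
mpirun_rsh -np 2 node0 node1 \
    MV2_USE_CUDA=1 MV2_DEBUG_SHOW_BACKTRACE=1 ./testmpi.py
```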


-Devendar



On Sun, Mar 31, 2013 at 6:03 AM, Brody Huval <brodyh at stanford.edu> wrote:

> Hi,
>
> I am trying to wrap the MVAPICH2 libraries so I can use them from Python.
> To wrap MVAPICH2, I am using Boost.Python. I would simply use mpi4py;
> however, it currently does not support GPU to GPU message passing.
>
> Once I wrap the functions and call them from Python, they always
> segfault when sending from GPU memory, but not from host memory. For
> example...
>
>
>
> ////** boost python **////
> ...
> int send() {
>   void* ptr;
>   int messageSize = 16, dest = 1, tag = 0;
>   // Allocate on the device; keep the call outside assert() so it
>   // is not compiled away when NDEBUG is defined.
>   cudaError_t err = cudaMalloc(&ptr, messageSize);
>   assert(err == cudaSuccess);
>   MPI_Send(ptr, messageSize, MPI_BYTE, dest, tag, MPI_COMM_WORLD);
>   return 0;
> }
>
> int recv() {
>   void* ptr;
>   int messageSize = 16, source = 0, tag = 0;
>   cudaError_t err = cudaMalloc(&ptr, messageSize);
>   assert(err == cudaSuccess);
>   MPI_Recv(ptr, messageSize, MPI_BYTE, source, tag, MPI_COMM_WORLD,
>            MPI_STATUS_IGNORE);
>   return 0;
> }
>
> BOOST_PYTHON_MODULE(libdmpi)
> {
>   using namespace boost::python;
>   ...
>   def("send", send);
>   def("recv", recv);
> }
> /////////////////////////
>
>
> ////** testmpi.py **////
> #!/usr/bin/env python
>
> import libdmpi
>
> libdmpi.MPI_Init()
> rank = libdmpi.rank()
>
> if rank == 0:
>     libdmpi.send()
> elif rank == 1:
>     libdmpi.recv()
>
> libdmpi.MPI_Finalize()
> ///////////////////////
>
>
> This will segfault. However, if I change ptr to host memory, it works
> fine. Any idea why this might happen? Here are my build configurations:
>
>
> $ mpiname -a
> MVAPICH2 1.9a2 Thu Nov  8 11:43:52 EST 2012 ch3:mrail
>
> Compilation
> CC: gcc -fPIC   -DNDEBUG -DNVALGRIND -O2
> CXX: c++   -DNDEBUG -DNVALGRIND -O2
> F77: gfortran -L/lib -L/lib -fPIC  -O2
> FC: gfortran   -O2
>
> Configuration
> --with-cuda=/usr/local/cuda-5.0 --enable-cuda
>
>
>
>
> Thanks in advance for any help.
>
> Best,
> Brody
>
>
>
>
>
>
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>



-- 
Devendar