[mvapich-discuss] GPU to GPU messages in python

Devendar Bureddy bureddy at cse.ohio-state.edu
Sun Mar 31 18:39:46 EDT 2013


Hi Brody

Good to know that you already figured it out. Thanks for the note
regarding PyCUDA gpuarrays.
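
For anyone else following this thread, here is a rough sketch of what the
gpuarray path can look like. The send_ptr/recv_ptr calls below are
hypothetical names for pointer-accepting wrappers in the spirit of Brody's
libdmpi module; they are not part of MVAPICH2 or mpi4py.

#!/usr/bin/env python
import numpy as np
import pycuda.autoinit              # importing this creates a CUDA context
import pycuda.gpuarray as gpuarray
import libdmpi                      # Brody's boost python wrapper

libdmpi.MPI_Init()
rank = libdmpi.rank()

if rank == 0:
    # gpuarray .ptr is the integer device address, .nbytes the byte count;
    # with MV2_USE_CUDA=1 MVAPICH2 can send directly from this device buffer.
    a = gpuarray.to_gpu(np.arange(4, dtype=np.float32))
    libdmpi.send_ptr(a.ptr, a.nbytes, 1, 0)      # dest=1, tag=0
elif rank == 1:
    a = gpuarray.empty(4, dtype=np.float32)
    libdmpi.recv_ptr(a.ptr, a.nbytes, 0, 0)      # source=0, tag=0
    print(a.get())                               # copy back to host

libdmpi.MPI_Finalize()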

-Devendar

On Sun, Mar 31, 2013 at 6:32 PM, Brody Huval <brodyh at stanford.edu> wrote:

> Hi Devendar,
>
> Yes, I realized just recently that I had been forgetting the
> MV2_USE_CUDA=1 flag. Sorry about that.
>
> Small side note. If anyone is interested in a small modification to mpi4py
> so it can be used with PyCUDA gpuarrays, this thread will tell you how:
> https://groups.google.com/forum/#!topic/mpi4py/qpM-ZcAtA_Y
>
> Best,
> Brody
>
>
> On Mar 31, 2013, at 3:26 PM, Devendar Bureddy <bureddy at cse.ohio-state.edu>
> wrote:
>
> Hi Brody
>
> In MVAPICH2, GPU support is enabled with the MV2_USE_CUDA=1 run-time flag.
> Did you specify this run-time flag?  If you are already running with this
> flag, can you build with the "--enable-g=dbg --enable-fast=none" configure
> options and run with the MV2_DEBUG_SHOW_BACKTRACE=1 flag to get the
> backtrace?
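>
> For example, something along these lines (the hostnames below are just
> placeholders for your setup):
>
> $ ./configure --with-cuda=/usr/local/cuda-5.0 --enable-cuda \
>       --enable-g=dbg --enable-fast=none
> $ mpirun_rsh -np 2 node1 node2 MV2_USE_CUDA=1 \
>       MV2_DEBUG_SHOW_BACKTRACE=1 python testmpi.py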
>
>
> -Devendar
>
>
>
> On Sun, Mar 31, 2013 at 6:03 AM, Brody Huval <brodyh at stanford.edu> wrote:
>
>> Hi,
>>
>> I am trying to wrap the MVAPICH2 libraries so I can use them from Python.
>> To wrap MVAPICH2, I am using Boost.Python. I would simply use mpi4py, but
>> it currently does not support GPU-to-GPU message passing.
>>
>> Once I wrap the functions and call them from Python, I always get a
>> segfault when sending from GPU memory, but not from host memory. For
>> example:
>>
>>
>>
>> ////** boost python **////
>> #include <mpi.h>
>> #include <cuda_runtime.h>
>> #include <cassert>
>> #include <boost/python.hpp>
>> ...
>> int send() {
>>   void* ptr;
>>   int messageSize = 16, dest = 1, tag = 0;
>>   // allocate the 16-byte send buffer in GPU device memory
>>   assert(cudaMalloc(&ptr, messageSize) == cudaSuccess);
>>   MPI_Send(ptr, messageSize, MPI_BYTE, dest, tag, MPI_COMM_WORLD);
>>   return 0;
>> }
>>
>> int recv() {
>>   void* ptr;
>>   int messageSize = 16, source = 0, tag = 0;
>>   assert(cudaMalloc(&ptr, messageSize) == cudaSuccess);
>>   MPI_Recv(ptr, messageSize, MPI_BYTE, source, tag, MPI_COMM_WORLD,
>>            MPI_STATUS_IGNORE);
>>   return 0;
>> }
>>
>> BOOST_PYTHON_MODULE(libdmpi)
>> {
>>   using namespace boost::python;
>>   ...
>>   def("send", send);
>>   def("recv", recv);
>> }
>> /////////////////////////
>>
>>
>> ////** testmpi.py **////
>> #!/usr/bin/env python
>>
>> import libdmpi
>>
>> libdmpi.MPI_Init()
>> rank = libdmpi.rank()
>>
>> if rank == 0:
>>     libdmpi.send()
>> elif rank == 1:
>>     libdmpi.recv()
>>
>> libdmpi.MPI_Finalize()
>> ///////////////////////
>>
>>
>> This will segfault. However, if I change ptr to host memory, it works
>> fine. Any idea why this might happen? Here are my build configurations:
>>
>>
>> $ mpiname -a
>> MVAPICH2 1.9a2 Thu Nov  8 11:43:52 EST 2012 ch3:mrail
>>
>> Compilation
>> CC: gcc -fPIC   -DNDEBUG -DNVALGRIND -O2
>> CXX: c++   -DNDEBUG -DNVALGRIND -O2
>> F77: gfortran -L/lib -L/lib -fPIC  -O2
>> FC: gfortran   -O2
>>
>> Configuration
>> --with-cuda=/usr/local/cuda-5.0 --enable-cuda
>>
>> Thanks in advance for any help.
>>
>> Best,
>> Brody
>>
>>
>
> --
> Devendar
>


-- 
Devendar