[mvapich-discuss] GPU to GPU messages in python
Brody Huval
brodyh at stanford.edu
Sun Mar 31 06:03:53 EDT 2013
Hi,
I am trying to wrap the MVAPICH2 libraries so that I can use them from Python. To wrap MVAPICH2, I am using Boost.Python. I would simply use mpi4py, but it currently does not support GPU-to-GPU message passing.
Once I wrap the functions and call them from Python, I always get a segfault when sending from GPU memory, but not when sending from host memory. For example...
////** boost python **////
...
// Allocate a 16-byte buffer in GPU memory and send it to rank 1.
int send() {
    void* ptr;
    int messageSize = 16, dest = 1, tag = 0;
    assert(cudaMalloc(&ptr, messageSize) == cudaSuccess);
    MPI_Send(ptr, messageSize, MPI_BYTE, dest, tag, MPI_COMM_WORLD);
    return 0;
}

// Allocate a 16-byte buffer in GPU memory and receive into it from rank 0.
int recv() {
    void* ptr;
    int messageSize = 16, source = 0, tag = 0;
    assert(cudaMalloc(&ptr, messageSize) == cudaSuccess);
    MPI_Recv(ptr, messageSize, MPI_BYTE, source, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return 0;
}

BOOST_PYTHON_MODULE(libdmpi)
{
    using namespace boost::python;
    ...
    def("send", send);
    def("recv", recv);
}
/////////////////////////
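The other wrappers called from testmpi.py (MPI_Init, rank, MPI_Finalize) are elided above; they are plain pass-throughs, roughly along the lines of the following sketch (the names and error handling here are only illustrative):

int init() {
    // MPI allows passing NULL for argc/argv.
    return MPI_Init(NULL, NULL);
}

int rank() {
    int r;
    MPI_Comm_rank(MPI_COMM_WORLD, &r);
    return r;
}

int finalize() {
    return MPI_Finalize();
}

// and, inside BOOST_PYTHON_MODULE(libdmpi):
//     def("MPI_Init", init);
//     def("rank", rank);
//     def("MPI_Finalize", finalize);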
////** testmpi.py **////
#!/usr/bin/env python
import libdmpi

libdmpi.MPI_Init()
rank = libdmpi.rank()
if rank == 0:
    libdmpi.send()
elif rank == 1:
    libdmpi.recv()
libdmpi.MPI_Finalize()
///////////////////////
This segfaults. However, if I change ptr to host memory, it works fine. Any idea why this might happen?
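For reference, the working host-memory variant of send() looks roughly like this (recv() is changed the same way; send_host is just an illustrative name):

int send_host() {
    void* ptr;
    int messageSize = 16, dest = 1, tag = 0;
    // Same MPI call as above, but the buffer lives in host memory.
    ptr = malloc(messageSize);
    assert(ptr != NULL);
    MPI_Send(ptr, messageSize, MPI_BYTE, dest, tag, MPI_COMM_WORLD);
    free(ptr);
    return 0;
}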
Here are my build configurations:
$ mpiname -a
MVAPICH2 1.9a2 Thu Nov 8 11:43:52 EST 2012 ch3:mrail
Compilation
CC: gcc -fPIC -DNDEBUG -DNVALGRIND -O2
CXX: c++ -DNDEBUG -DNVALGRIND -O2
F77: gfortran -L/lib -L/lib -fPIC -O2
FC: gfortran -O2
Configuration
--with-cuda=/usr/local/cuda-5.0 --enable-cuda
Thanks in advance for any help.
Best,
Brody