[mvapich-discuss] Equivalence of OMPI's mpi_leave_pinned parameter

Jens Glaser jglaser at umn.edu
Sat Nov 3 23:51:34 EDT 2012


Hi,

I am developing an MPI/CUDA application which uses both page-locked host buffers and CUDA buffers for communication.
Using CUDA buffers directly with MVAPICH2 works great; however, I had a terrible time tracking down bad
program behavior that appears when page-locked host memory (allocated with cudaHostAlloc) is also used. It seems MVAPICH2's internal
handling of these buffers interferes with device-host RDMA (GPUDirect) if the latter is not left entirely to the library.
The same is true for OpenMPI, but fortunately OpenMPI has an MCA parameter, mpi_leave_pinned, which can be turned off, and doing so prevents the crashes.
The OpenMPI docs also state that "other" MPI libraries have this behavior turned on by default (including MVAPICH2, I suspect).
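For reference, here is a minimal sketch of the mixed-buffer pattern I am describing (buffer names and sizes are only
illustrative): one page-locked host buffer from cudaHostAlloc and one device buffer, both handed directly to MPI
send/recv calls of a CUDA-aware build.

    /* sketch of the mixed pinned-host / device buffer usage; not my actual code */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t n = 1 << 20;

        /* page-locked host buffer (the allocation that triggers the problem) */
        float *host_buf;
        cudaHostAlloc((void **)&host_buf, n * sizeof(float), cudaHostAllocDefault);

        /* device buffer, passed directly to MPI (GPUDirect path) */
        float *dev_buf;
        cudaMalloc((void **)&dev_buf, n * sizeof(float));

        if (rank == 0) {
            MPI_Send(dev_buf,  n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);  /* device buffer  */
            MPI_Send(host_buf, n, MPI_FLOAT, 1, 1, MPI_COMM_WORLD);  /* pinned host buffer */
        } else if (rank == 1) {
            MPI_Recv(dev_buf,  n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(host_buf, n, MPI_FLOAT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        cudaFree(dev_buf);
        cudaFreeHost(host_buf);
        MPI_Finalize();
        return 0;
    }

With OpenMPI, disabling the registration cache at launch, e.g. "mpirun --mca mpi_leave_pinned 0 -np 2 ./a.out",
is what prevents the crashes for me.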

Since I appreciate MVAPICH2's CUDA capabilities, I wonder whether the library offers a similar way to turn off the
optimizations that cause my application to fail.

Thanks
Jens



