[mvapich-discuss] MVAPICH 2.0a

Oliver Fuhrer (MeteoSwiss) oliver.fuhrer at ginko.ch
Thu Sep 12 08:54:09 EDT 2013


Dear MVAPICH2 team,

I am interested in the following two features of MVAPICH2 2.0a:
- (NEW) Dynamic CUDA initialization. Support GPU device selection after MPI_Init
- (NEW) Support for running on heterogeneous clusters with GPU and non-GPU nodes

My questions are the following:

1) In our Fortran code we currently do the following:

! get total number of ranks and current rank from environment variables
call getenv("MV2_COMM_WORLD_SIZE", snumid)
call getenv("MV2_COMM_WORLD_RANK", smyid)
read(snumid, *) numid
read(smyid, *) myid

! ... derive mydev from myid (e.g. round-robin over the devices on a node)

! set device for CUDA runtime
ierr = cudaSetDevice(mydev)

! set device for OpenACC runtime
call acc_set_device_num(mydev, acc_device_nvidia)
call acc_init(acc_device_nvidia)

! initialize MPI library
call MPI_Init(ierr)

Can we simply swap the order of the MPI_Init call and the device-initialization statements, and avoid the environment variables by querying MPI_Comm_size and MPI_Comm_rank instead?
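
In other words, would something like the following sketch be valid with 2.0a? The mod-based device selection is just an example, and I assume here that every node has at least one GPU:

program init_after_mpi
  use mpi
  use cudafor
  use openacc
  implicit none
  integer :: ierr, myid, numid, ndev, mydev

  ! initialize MPI library first
  call MPI_Init(ierr)

  ! query rank and size instead of the MV2_COMM_WORLD_* environment variables
  call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, numid, ierr)

  ! pick a device for this rank, e.g. round-robin over the devices on the node
  ierr = cudaGetDeviceCount(ndev)
  mydev = mod(myid, ndev)

  ! set device for the CUDA and OpenACC runtimes, after MPI_Init
  ierr = cudaSetDevice(mydev)
  call acc_set_device_num(mydev, acc_device_nvidia)
  call acc_init(acc_device_nvidia)

  call MPI_Finalize(ierr)
end program init_after_mpi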

2) How can we avoid device initialization on a specific MPI rank (for example, a rank placed on a non-GPU node)?
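
For example, continuing the snippet above, I would like to be able to write something along these lines (a sketch; I am assuming that a non-GPU node simply reports an error or zero devices):

! detect whether this rank actually has a GPU at its disposal
ierr = cudaGetDeviceCount(ndev)
rank_has_gpu = (ierr == cudaSuccess .and. ndev > 0)

if (rank_has_gpu) then
   ! only ranks on GPU nodes touch the CUDA/OpenACC runtimes
   ierr = cudaSetDevice(mydev)
   call acc_set_device_num(mydev, acc_device_nvidia)
   call acc_init(acc_device_nvidia)
end if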

3) What exactly does the second bullet mean? I thought that, for example, sending messages from a GPU sender buffer to a CPU receiver buffer on another rank was already possible.
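
For illustration, here is the kind of thing I thought already worked, assuming MVAPICH2 is built with CUDA support and run with MV2_USE_CUDA=1 (the buffer names are mine):

program gpu_to_cpu
  use cudafor
  implicit none
  include 'mpif.h'
  integer, parameter :: n = 1024
  real, device, allocatable :: d_buf(:)   ! GPU sender buffer on rank 0
  real, allocatable :: h_buf(:)           ! CPU receiver buffer on rank 1
  integer :: ierr, myid, status(MPI_STATUS_SIZE)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)

  if (myid == 0) then
     allocate(d_buf(n))
     d_buf = 1.0
     ! the device pointer is passed directly to MPI
     call MPI_Send(d_buf, n, MPI_REAL, 1, 0, MPI_COMM_WORLD, ierr)
  else if (myid == 1) then
     allocate(h_buf(n))
     call MPI_Recv(h_buf, n, MPI_REAL, 0, 0, MPI_COMM_WORLD, status, ierr)
  end if

  call MPI_Finalize(ierr)
end program gpu_to_cpu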

Kind regards,
Oli

_________

Oliver Fuhrer
Numerical Models

Federal Department of Home Affairs FDHA
Federal Office of Meteorology and Climatology MeteoSwiss

Kraehbuehlstrasse 58, P.O. Box 514, CH-8044 Zurich, Switzerland

Tel. +41 44 256 93 59
Fax +41 44 256 92 78
oliver.fuhrer at meteoswiss.ch
www.meteoswiss.ch - First-hand information