[mvapich-discuss] MPI+OpenMP configuration

Bruno Mundim bruno.mundim at aei.mpg.de
Wed Jan 9 08:51:51 EST 2013


Hi,

I am having trouble setting NUMA affinity for a hybrid
MPI+OpenMP code on one of the clusters I use, and I was
wondering whether I could set it differently, using MVAPICH2
directly. I have read the manual section on this issue,

http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.9a2.html#x1-730006.16

and that is what I use on other clusters. However, I would
like to know: if I set the following options for mpirun_rsh,

mpirun_rsh -np $NP -hostfile ${MPI_NODEFILE} /bin/env \
    MV2_ENABLE_AFFINITY=1 MV2_USE_AFFINITY=1 MV2_USE_SHARED_MEM=1 \
    MV2_CPU_BINDING_LEVEL=numanode MV2_CPU_BINDING_POLICY=scatter \
    OMP_NUM_THREADS=8 ./executable

then will MVAPICH2 prevent the OpenMP threads from running
on the other cores of the SAME NUMA node the MPI task is bound to?
I am setting these options on a cluster where each compute node
has two NUMA nodes (i.e., two sockets), each with an eight-core
processor. In my submission script I fix the number of MPI tasks
per compute node at 2 and use 8 threads per MPI task. From what
I read in the manual, I am not sure whether those 8 threads will
be allowed to run on all 8 cores, even when binding the MPI task
to the NUMA node (or to the socket) instead of to a single core.
I would appreciate a comment on this issue.
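
To check this myself, I put together the small test program
below (just a sketch; the file name check_binding.c is my own
choice, and sched_getcpu() is glibc-specific, so Linux only).
Each OpenMP thread prints the core it is currently running on,
so if the whole task is pinned to one core all 8 threads report
the same cpu id, while a binding to the NUMA node should spread
the ids over its 8 cores:

#define _GNU_SOURCE   /* needed for sched_getcpu() */
#include <sched.h>
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every thread reports its rank, thread id, and current core. */
    #pragma omp parallel
    printf("rank %d thread %d on cpu %d\n",
           rank, omp_get_thread_num(), sched_getcpu());

    MPI_Finalize();
    return 0;
}

I compile it with "mpicc -fopenmp check_binding.c -o check_binding"
(assuming a gcc-based mpicc) and launch it with the same
mpirun_rsh line as above.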

Thanks,
Bruno.
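
P.S. I also noticed MV2_SHOW_CPU_BINDING in the manual; if I
read it correctly, setting it to 1 makes MVAPICH2 print the CPU
mapping of every rank at startup, which should show directly
what the binding level does. I plan to try something like:

mpirun_rsh -np $NP -hostfile ${MPI_NODEFILE} \
    MV2_SHOW_CPU_BINDING=1 MV2_CPU_BINDING_LEVEL=numanode \
    MV2_CPU_BINDING_POLICY=scatter OMP_NUM_THREADS=8 ./executable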

