[mvapich-discuss] Non-MPI_THREAD_SINGLE mode with enabled MV2 affinity?

Thiago Quirino - NOAA Federal thiago.quirino at noaa.gov
Fri Nov 8 18:41:06 EST 2013


Hi, folks. Quick question about MVAPICH2 and affinity support.

Is it possible to invoke MPI_Init_thread with any mode other than
"MPI_THREAD_SINGLE" and still use "MV2_ENABLE_AFFINITY=1"? In my hybrid
application I mix MPI with raw Pthreads (not OpenMP). I start 4 MPI tasks
on each 16-core node, where each node has 2 sockets with 8 Sandy Bridge
cores each. Each of the 4 MPI tasks then spawns 4 pthreads, for a total of
16 pthreads per node, or 1 pthread per core. Within each MPI task the MPI
calls are serialized among the 4 pthreads, so I can use any MPI_THREAD_*
mode, but I don't know which mode will work best. I want to assign each of
the 4 MPI tasks on a node a set of 4 cores using MV2_CPU_MAPPING (e.g.
export MV2_CPU_MAPPING=0,1,2,3:4,5,6,7:8,9,10,11:12,13,14,15) so that the
4 pthreads spawned by each MPI task can migrate to any core within that
task's exclusive 4-core set.

Is that possible with modes other than MPI_THREAD_SINGLE? If not, do you
foresee any issues with using MPI_THREAD_SINGLE while serializing the MPI
calls among the 4 pthreads of each MPI task? That is, is there any
advantage to using MPI_THREAD_FUNNELED or MPI_THREAD_SERIALIZED over
MPI_THREAD_SINGLE when the calls are already serialized among the pthreads?

Thank you so much, folks. Any help is much appreciated.

Best,
Thiago.


---------------------------------------------------
Thiago Quirino, Ph.D.
NOAA Hurricane Research Division
4350 Rickenbacker Cswy.
Miami, FL 33139
P: 305-361-4503
E: Thiago.Quirino at noaa.gov

