[mvapich-discuss] MPI_THREAD_MULTIPLE with PThread serializing MPI calls: Impact of Pthread core affinity on MPI over Infiniband

Thiago Quirino - NOAA Federal thiago.quirino at noaa.gov
Fri Dec 20 09:31:40 EST 2013


Hello, folks. Quick question.

In my WRF application, each MPI task spawns 4 Pthreads to parallelize the
execution of numerical integration tasks. The Pthreads perform the same
task in parallel and serialize their access to MPI calls throughout the
code. I invoke MPI_Init_thread requesting MPI_THREAD_MULTIPLE because it
offers the best performance in this scenario. I set MV2_ENABLE_AFFINITY=0
in my Linux environment to enable the use of MPI_THREAD_MULTIPLE.

I am modifying my code to pin each of the 4 Pthreads spawned by each MPI
task to a specific CPU core. My cluster has 12-core Westmere nodes with 2
sockets (6 cores per socket). So far I have allowed the OS to decide the
thread placement.

My question is: if I pin the Pthreads to specific cores, how would that
impact the performance of MVAPICH2 in MPI_THREAD_MULTIPLE mode? I don't
know how many threads MVAPICH2 spawns per MPI task, whether those threads
will migrate across cores, or how they will interact with my Pthreads to
affect my application's performance through issues like cache misses.

Any input is greatly appreciated.

Thank you so much,
Thiago.

---------------------------------------------------
Thiago Quirino, Ph.D.
NOAA Hurricane Research Division
4350 Rickenbacker Cswy.
Miami, FL 33139
P: 305-409-9587
E: Thiago.Quirino at noaa.gov
