Will evaluate this patch and feedback [mvapich-discuss] mvapich2 1.5.1p1 doesn't honour MV2_CPU_MAPPING when giving a cpu list

Hao Wang wangh at cse.ohio-state.edu
Wed Oct 6 23:13:13 EDT 2010


Hi Gilles,

I'm Hao Wang from the MVAPICH2 group. Thanks for your patch for MV2_CPU_MAPPING. We will evaluate it and then get back to you with our feedback.

Thanks

- Hao Wang


From: Gilles Civario <gilles.civario at ichec.ie>
Date: 2010/10/5
Subject: [mvapich-discuss] mvapich2 1.5.1p1 doesn't honour
MV2_CPU_MAPPING when giving a cpu list
To: mvapich-discuss at cse.ohio-state.edu


Hi,

To run codes parallelised with a mixed MPI/OpenMP model efficiently,
I usually attach each single MPI process to a set of cores. The OpenMP
threads thereafter inherit this attachment and, provided I have chosen
my cpu set wisely, run in an effective way.
I used to use MV2_CPU_MAPPING to specify the list of cores to attach
to, but following the move from PLPA to HWLOC, this environment
variable is no longer properly honoured starting from mvapich2 1.5.
More precisely, only the first core given for each MPI process is used
to perform the process-to-core attachment. As I don't think this is
the intended behaviour, and as I really miss this feature, I propose
here a patch that allows specifying lists of cores in MV2_CPU_MAPPING.
The lists use decimal numbering of the cores. The per-process lists
are separated by colons ':', and the processes on the node are
attached to them sequentially, following their ranks.
Within a list, cores are specified individually, separated by commas
',', or as ranges whose extent is given by a dash '-'.
E.g. MV2_CPU_MAPPING="0,2,5-8:1,3:4" will attach the MPI process of
lowest rank to cores {0,2,5,6,7,8}, the MPI process of the following
rank to cores {1,3}, and the last one to core 4.
Please review the patch and tell me if it could be integrated.
Cheers.

Gilles



