[mvapich-discuss] MPI+OpenMP affinity across multiple sockets

Filippo SPIGA fs395 at cam.ac.uk
Tue Mar 10 18:21:01 EDT 2015


Dear MVAPICH2 experts,

I know it may not sound like an optimal solution, but for the sake of benchmarking and performance exploration I need to understand the correct affinity settings for MVAPICH2. I want to run 1 MPI process per node with 12 OpenMP threads split across the two sockets. What are the correct settings for MV2_CPU_BINDING_LEVEL & MV2_CPU_BINDING_POLICY? What about MV2_CPU_BINDING_LEVEL & MV2_CPU_MAPPING?
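
To make it easier to see what any suggested combination of these variables actually does, I use a small hybrid probe like the sketch below; it prints which logical CPU each OpenMP thread of each rank lands on. The mpicc wrapper name and the launch line in the comments are only my assumptions about a typical MVAPICH2 setup, not something taken from the documentation.

/* affinity_probe.c: report where each OpenMP thread of each MPI rank runs.
 * Build (assumed typical wrapper): mpicc -fopenmp affinity_probe.c -o affinity_probe
 * Run (assumed launch line):       OMP_NUM_THREADS=12 mpirun -np 1 ./affinity_probe
 */
#define _GNU_SOURCE
#include <mpi.h>
#include <omp.h>
#include <sched.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, hostlen;
    char host[MPI_MAX_PROCESSOR_NAME];

    /* Threads only call MPI from the master, so FUNNELED is enough. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &hostlen);

    #pragma omp parallel
    {
        /* sched_getcpu() reports the logical CPU the calling thread is
           currently executing on; with a stable binding in place the
           12 reported CPUs should cover both sockets. */
        printf("host %s rank %d thread %d/%d on cpu %d\n",
               host, rank, omp_get_thread_num(),
               omp_get_num_threads(), sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}

If the 12 threads report CPU IDs belonging to both sockets (the exact numbering is of course machine dependent), the binding is what I am after.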

Thanks in advance

Cheers,
Filippo

--
Mr. Filippo SPIGA, M.Sc. - HPC Application Specialist
High Performance Computing Service, University of Cambridge (UK)
http://www.hpc.cam.ac.uk/ ~ http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert
