[mvapich-discuss] non-consecutive rank to host mapping?

Pawel Dziekonski dzieko at wcss.pl
Mon Apr 18 12:55:31 EDT 2011


Hello,

We are using MVAPICH2 1.5.1p1. This is a pretty standard compilation
using intel composerxe-2011.1.107 compilers and OFED-1.5.2 on
Scientific Linux x86_64.

I have just compiled MOLCAS 7.6 and it complains about:

It appears that tasks allocated on the same host machine do not have
consecutive message-passing IDs/numbers. This is not acceptable 
to the ARMCI library as it prevents SMP optimizations and would
lead to poor resource utilization.
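
If I understand the requirement correctly, ARMCI wants all ranks that
share a host to form one contiguous block of rank numbers. A quick
sketch of that check (my own illustration, not ARMCI's actual code):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    char host[MPI_MAX_PROCESSOR_NAME];
    char *all = NULL;
    int rank, size, len, i, j, ok = 1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    if (rank == 0)
        all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
    MPI_Gather(host, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
               all,  MPI_MAX_PROCESSOR_NAME, MPI_CHAR, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* the mapping is consecutive iff a host name never reappears
         * after the rank sequence has moved on to another host */
        for (i = 1; i < size && ok; i++) {
            char *cur = all + i * MPI_MAX_PROCESSOR_NAME;
            if (strcmp(cur, cur - MPI_MAX_PROCESSOR_NAME) == 0)
                continue;                  /* same host as rank i-1 */
            for (j = 0; j < i - 1; j++)    /* host changed: must be new */
                if (strcmp(cur, all + j * MPI_MAX_PROCESSOR_NAME) == 0)
                    ok = 0;
        }
        printf(ok ? "mapping is consecutive (block)\n"
                  : "mapping is NOT consecutive\n");
        free(all);
    }
    MPI_Finalize();
    return 0;
}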

Since MVAPICH2 uses HWLOC by default, I have tried setting variables
like:
MV2_ENABLE_AFFINITY=1
MV2_CPU_BINDING_POLICY=bunch

but the ranks are always distributed round-robin across the two hosts
(even ranks on one host, odd ranks on the other). A small test:

mpirun_rsh -np 8 -hostfile $PBS_NODEFILE env MV2_ENABLE_AFFINITY=1 MV2_CPU_BINDING_POLICY=bunch ~/soft/mpi/hostname.mpi.x
Hello world!  I am MPI process number: 4 on host wn472
Hello world!  I am MPI process number: 0 on host wn472
Hello world!  I am MPI process number: 6 on host wn472
Hello world!  I am MPI process number: 2 on host wn472
Hello world!  I am MPI process number: 7 on host wn480
Hello world!  I am MPI process number: 1 on host wn480
Hello world!  I am MPI process number: 3 on host wn480
Hello world!  I am MPI process number: 5 on host wn480
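
(For reference, hostname.mpi.x is just a trivial MPI hello world,
roughly:)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char host[MPI_MAX_PROCESSOR_NAME];
    int rank, len;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);
    printf("Hello world!  I am MPI process number: %d on host %s\n",
           rank, host);
    MPI_Finalize();
    return 0;
}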


Using MV2_USE_SHARED_MEM=0 slows down MOLCAS a lot.

Any hints?

Thanks in advance, Pawel



