[mvapich-discuss] non-consecutive rank to host mapping?

Jonathan Perkins perkinjo at cse.ohio-state.edu
Mon Apr 18 13:50:47 EDT 2011


Pawel:
Vaibhav is correct.  However, the hostname:numprocs syntax is not
currently supported by mpirun_rsh.
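
If your nodefile lists the hosts in round-robin (cyclic) order, one
workaround is to regroup it so that each host's entries are consecutive
and pass that file to mpirun_rsh.  A rough sketch only (the file name
blocked_hosts is just a placeholder, and it assumes the same 2x4-core
layout as your test below):

    # group identical host names so each host's slots are consecutive
    sort $PBS_NODEFILE > blocked_hosts
    # same test command as in your message, only the hostfile differs
    mpirun_rsh -np 8 -hostfile blocked_hosts env MV2_ENABLE_AFFINITY=1 \
        MV2_CPU_BINDING_POLICY=bunch ~/soft/mpi/hostname.mpi.x

With a block-ordered hostfile, mpirun_rsh should assign consecutive
ranks to each host, which is what ARMCI expects.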

On Mon, Apr 18, 2011 at 1:14 PM, vaibhav dutt
<vaibhavsupersaiyan9 at gmail.com> wrote:
> Hi,
>
> Can you provide the contents of your hostfile? It seems you are
> using cyclic rank placement.
> For block rank placement, you should write the hostname as many
> times as there are cores in the host, i.e.
>
> host1
> host1
> host1
> host1
> host2
> host2
> host2
> host2
>
> will put ranks 0,1,2,3 on host1 and 4,5,6,7 on host2.
> Whereas
>
> host1:4
> host2:4
>
> will put ranks 0,2,4,6 on host1 and 1,3,5,7 on host2.
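>
> A quick way to check which layout your hostfile has (an illustrative
> command, not something from your message; it assumes $PBS_NODEFILE is
> the file you pass to -hostfile) is:
>
>     uniq -c $PBS_NODEFILE
>
> A block-ordered file prints each host once with a count equal to its
> core count, while a cyclic file prints each host several times with a
> count of 1.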
>
> On Mon, Apr 18, 2011 at 11:55 AM, Pawel Dziekonski <dzieko at wcss.pl> wrote:
>>
>> Hello,
>>
>> we are using MVAPICH2 1.5.1 p1. This is a pretty standard compilation
>> using intel composerxe-2011.1.107 compilers and OFED-1.5.2 on
>> Scientific Linux x86_64.
>>
>> I have just compiled MOLCAS 7.6 and it complains about:
>>
>> It appears that tasks allocated on the same host machine do not have
>> consecutive message-passing IDs/numbers. This is not acceptable
>> to the ARMCI library as it prevents SMP optimizations and would
>> lead to poor resource utilization.
>>
>> Since MVAPICH2 uses HWLOC by default, I have tried setting variables
>> like:
>> MV2_ENABLE_AFFINITY=1
>> MV2_CPU_BINDING_POLICY=bunch
>>
>> but I always get odd ranks on one host and even ranks on the other.
>> Small test:
>>
>> mpirun_rsh -np 8 -hostfile $PBS_NODEFILE env MV2_ENABLE_AFFINITY=1
>> MV2_CPU_BINDING_POLICY=bunch ~/soft/mpi/hostname.mpi.x
>> Hello world!  I am MPI process number: 4 on host wn472
>> Hello world!  I am MPI process number: 0 on host wn472
>> Hello world!  I am MPI process number: 6 on host wn472
>> Hello world!  I am MPI process number: 2 on host wn472
>> Hello world!  I am MPI process number: 7 on host wn480
>> Hello world!  I am MPI process number: 1 on host wn480
>> Hello world!  I am MPI process number: 3 on host wn480
>> Hello world!  I am MPI process number: 5 on host wn480
>>
>>
>> Using MV2_USE_SHARED_MEM=0 slows down MOLCAS a lot.
>>
>> any hints?
>>
>> thanks in advance, Pawel
>>



-- 
Jonathan Perkins
http://www.cse.ohio-state.edu/~perkinjo


