[mvapich-discuss] mpiexec/mvapich places processes on same cpu

LEI CHAI chai.15 at osu.edu
Tue Mar 4 10:52:38 EST 2008


Hi,

We are glad to inform you that MVAPICH-1.0, released last week, provides flexible user-defined CPU mapping. Using your example below, if you want to distribute the processes on cores 1, 2, 5, and 6, you can run the program like this:

$ mpirun_rsh -np N -hostfile hosts VIADEV_CPU_MAPPING=1,2,5,6 ./a.out
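To confirm the mapping took effect, each rank can print its own affinity mask. A minimal sketch using Python's os.sched_getaffinity (Linux-only, independent of MVAPICH itself; run it as the MPI program, or call the same function from any rank):

```python
import os

# Query the CPU affinity mask of the current process (Linux-specific).
# Launched under VIADEV_CPU_MAPPING=1,2,5,6, each MPI rank should report
# one of those cores; run standalone, it reports all cores available.
allowed = os.sched_getaffinity(0)
print(f"pid {os.getpid()} may run on cores: {sorted(allowed)}")
```

The same check can be done from C with sched_getaffinity(2) if Python is not available on the compute nodes.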

More information can be found in section 5.11 of the user guide:

http://mvapich/support/mvapich_user_guide.html

Lei


----- Original Message -----
From: LEI CHAI <chai.15 at osu.edu>
Date: Sunday, January 20, 2008 8:20 pm
Subject: Re: [mvapich-discuss] mpiexec/mvapich places processes on same cpu

> Hi Joseph and Pasha,
> 
> Thank you for the suggestions. We already plan to add this feature 
> to MVAPICH. It will be available in upcoming releases.
> 
> Lei
> 
> 
> ----- Original Message -----
> From: "Pavel Shamis (Pasha)" <pasha at dev.mellanox.co.il>
> Date: Sunday, January 20, 2008 4:45 am
> Subject: Re: [mvapich-discuss] mpiexec/mvapich places processes on 
> same cpu
> 
> > 
> > > While we are at it - is there an actual way, with either 
> > > version, to assign to specific cores? We have a few large 
> > > multinode jobs that can only use 4 out of 8 cores per node due 
> > > to bus and memory limitations. Is there a way to distribute the 
> > > processes to cores 1,2 and 5,6? I.e. to skip 0, and split the 
> > > other 4 to different chips? 
> > >   
> > Sounds good. As we have an option to specify the HCA/port per 
> > rank, it would be nice to have an option to specify the core as 
> > well. It should not be very complicated to implement. (I'm 
> > talking about mvapich1.)
> > MVAPICH team, what do you think?
> > 
> > Pasha
> > > j
> > >
> > > ----- Original Message -----
> > > From: LEI CHAI <chai.15 at osu.edu>
> > > Date: Friday, January 18, 2008 7:00 pm
> > > Subject: Re: [mvapich-discuss] mpiexec/mvapich places processes 
> > on same cpu
> > >
> > >   
> > >> Hi Joseph,
> > >>
> > >> Could you try disabling the CPU affinity feature in 
> > >> mvapich/mvapich2, e.g.
> > >>
> > >> mvapich2:
> > >> $ mpiexec -n 4 -env MV2_ENABLE_AFFINITY 0 ./a.out
> > >>
> > >> or mvapich:
> > >> $ mpirun_rsh -np 4 VIADEV_ENABLE_AFFINITY=0 ./a.out
> > >>
> > >> Thanks,
> > >> Lei
> > >>
> > >>
> > >> ----- Original Message -----
> > >> From: Joseph Hargitai <joseph.hargitai at nyu.edu>
> > >> Date: Friday, January 18, 2008 6:05 pm
> > >> Subject: [mvapich-discuss] mpiexec/mvapich places processes on 
> > same cpu
> > >>
> > >>     
> > >>> hi all:
> > >>>
> > >>> While submitting two identical mpi jobs (-np 4) with 
> > >>> different datasets for a dual-socket quad-core node, using 
> > >>> two distinct pbs/mpiexec submissions, both jobs end up on the 
> > >>> first processor, such that they use 4 cores of the first cpu 
> > >>> and none of the second. This results in 8 processes on cpu 1, 
> > >>> with a load of about 8-9. Both jobs produce output okay, but 
> > >>> obviously the choice would be to have them on distinct cpus. 
> > >>>
> > >>> When one of these mpi jobs meets 4 other regular serial jobs 
> > >>> submitted without mpiexec, all 8 cores are populated. 
> > >>>
> > >>> I did read on the group list about the first mpiexec session 
> > >>> being the master, but it does not reserve the first 4 cores, 
> > >>> thus allowing the next mpi job to end up on the same cpu. 
> > >>>
> > >>> best,   
> > >>> joseph
> > >>> _______________________________________________
> > >>> mvapich-discuss mailing list
> > >>> mvapich-discuss at cse.ohio-state.edu
> > >>> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> > >>>
> > >>>       
> > 
> > 
> > -- 
> > Pavel Shamis (Pasha)
> > Mellanox Technologies
> > 
> > 
> 
> 


