[mvapich-discuss] Re: MVAPICH and SLURM affinity settings
stephen mulcahy
smulcahy at atlanticlinux.ie
Thu Oct 28 09:26:39 EDT 2010
Hi,
If I modify my script as follows
....
export VIADEV_USE_AFFINITY=1
export VIADEV_CPU_MAPPING=0:2:3:4:1:5:6:7
srun --mpi=mvapich ${ROMS_DIR}/runme.csh ${ROMS_BIN} ${INFILE}
....
then the affinity settings work as expected (which makes sense on
reflection -- only exported variables are inherited by the tasks that
srun launches, so without export the settings never reached MVAPICH).
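For anyone hitting the same thing, the difference can be seen without
SLURM at all. This is just a minimal sketch of plain POSIX shell
behaviour (the variable names are reused from the script above for
illustration): a child process only inherits variables that were
exported.

```shell
#!/bin/sh
# An assignment without export stays local to this shell;
# an exported assignment is placed in the child's environment.
VIADEV_USE_AFFINITY=1                       # shell-local: invisible to children
export VIADEV_CPU_MAPPING=0:2:3:4:1:5:6:7   # exported: inherited by children
sh -c 'echo "AFFINITY=${VIADEV_USE_AFFINITY:-unset} MAPPING=${VIADEV_CPU_MAPPING:-unset}"'
# prints: AFFINITY=unset MAPPING=0:2:3:4:1:5:6:7
```

srun behaves like the child `sh` here, which is why adding export to
both lines in run-model.sh made the affinity settings take effect.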
Sorry for wasting anyone's time; maybe this will be of use to someone
else in the future.
-stephen
stephen mulcahy wrote:
> Hi,
>
> I've just started testing MVAPICH in our environment as an alternative
> to our current MPI library.
>
> We would like to use MVAPICH's affinity settings but are using MVAPICH
> via SLURM so we normally submit jobs in the following way
>
> We have a shell script like the following (run-model.sh)
>
> .....
>
> VIADEV_USE_AFFINITY=1
> VIADEV_CPU_MAPPING=0:2:3:4:1:5:6:7
> srun --mpi=mvapich ${ROMS_DIR}/runme.csh ${ROMS_BIN} ${INFILE}
>
> .....
>
> which we submit to SLURM with sbatch run-model.sh
>
> As you can see above, I've tried to pass the affinity settings to
> MVAPICH by setting them in the script.
>
> But when I monitor the running jobs with htop, I can see the model tasks
> moving between different processor cores.
>
> We're using MVAPICH 1.2 RC1
>
> Is this the correct way of configuring MVAPICH to use the affinity
> settings or am I missing something?
>
> Thanks,
>
>
--
Stephen Mulcahy Atlantic Linux http://www.atlanticlinux.ie
Registered in Ireland, no. 376591 (144 Ros Caoin, Roscam, Galway)