[mvapich-discuss] How to Specify Processes Per Node With mvapich

Stephen Cousins steve.cousins at maine.edu
Tue Apr 22 16:45:28 EDT 2014


Sorry about my last email. It turns out that we are using mpiexec.hydra
rather than mpirun_rsh; our mpirun command is a symlink to mpiexec.hydra,
and the -ppn option works with that.
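
As a sketch, an invocation along these lines places 8 processes per node
with mpiexec.hydra (the hostfile "hosts", the counts, and the program name
are placeholders; "hosts" is assumed to list one node per line):

$ mpiexec.hydra -f hosts -n 32 -ppn 8 ./mpi_program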


On Tue, Apr 22, 2014 at 2:51 PM, Jonathan Perkins <
perkinjo at cse.ohio-state.edu> wrote:

> Yes, Susan, you are correct.  We'll look into adding this option to
> mpirun_rsh.
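>
> In the meantime, a one-liner along these lines can generate that hostfile
> from $PBS_NODEFILE (an untested sketch; it assumes Torque lists each node
> once per allocated core, so the count becomes the per-node process count):
>
> $ sort $PBS_NODEFILE | uniq -c | awk '{print $2":"$1}' > hostfile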
>
> On Tue, Apr 22, 2014 at 2:06 PM, Susan A. Schwarz
> <Susan.A.Schwarz at dartmouth.edu> wrote:
> > Hi Jonathan,
> >
> > Thanks for your email. So you are saying that I need to make my own
> > hostfile, and since I am using Torque, my submit script will have to
> > read $PBS_NODEFILE and then generate a hostfile to use with the
> > mpirun_rsh command. Is that correct?
> >
> > It would be nice to have a "ppn" command-line option in a future version!
> >
> > Susan
> >
> >
> >
> > On 04/22/2014 01:30 PM, Jonathan Perkins wrote:
> >>
> >> Hello Susan.
> >>
> >> To control the number of processes per node with mpirun_rsh, you will
> >> need to provide a hostfile that uses the extended syntax for each
> >> hostname.
> >>
> >> For example:
> >> $ cat hostfile
> >> node1:8
> >> node2:8
> >> node3:8
> >> node4:8
> >>
> >> When using this hostfile, mpirun_rsh will place 8 processes on each
> >> node.  Please see section 5.2.1 of our user guide for more information
> >> on how to use this
> >> (http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.9.html#x1-250005.2.1).
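> >>
> >> With a hostfile like the one above, a launch command along these lines
> >> should place 8 processes per node (the total of 32 here is just 4 nodes
> >> x 8 processes from the example; the program name is a placeholder):
> >>
> >> $ mpirun_rsh -np 32 -hostfile hostfile ./a.out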
> >>
> >> I think that you will also be happy to find that mvapich2 provides the
> >> same hydra launcher (mpiexec) as mpich, so you can run your mpich and
> >> mvapich2 jobs the same way.  Section 5.2.2 of our user guide provides a
> >> brief overview of this as well
> >> (http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.9.html#x1-330005.2.2).
> >>
> >> On Tue, Apr 22, 2014 at 12:26 PM, Susan A. Schwarz
> >> <Susan.A.Schwarz at dartmouth.edu> wrote:
> >>>
> >>> I am running mvapich2 v1.9 on a CentOS 6.5 cluster with Torque
> >>> 4.2.6.1. I need to be able to specify the number of processes per node
> >>> when I start my mvapich program. If I were using mpich and starting my
> >>> program with mpirun or mpiexec, I would use the "-ppn" option. I don't
> >>> see any arguments similar to that when I use mpirun_rsh to start my
> >>> mvapich program.
> >>>
> >>> We used to use the OSU mpiexec v0.84 before we upgraded our cluster
> >>> to CentOS 6.5 and Torque 4.2.6.1, but now I can't get my mvapich2
> >>> programs to launch with mpiexec, so I can't take advantage of the
> >>> "-npernode" option. I get a fatal error in MPI_Init when I launch my
> >>> mvapich2 program with mpiexec.
> >>>
> >>> Is there a way to start my mvapich program and specify the number of
> >>> processes per node?
> >>>
> >>> Thank you,
> >>> Susan Schwarz
> >>> Research Computing
> >>> Dartmouth College
> >>>
>
>
> --
> Jonathan Perkins
> http://www.cse.ohio-state.edu/~perkinjo



-- 
________________________________________________________________
 Steve Cousins             Supercomputer Engineer/Administrator
 Advanced Computing Group            University of Maine System
 244 Neville Hall (UMS Data Center)              (207) 561-3574
 Orono ME 04469                      steve.cousins at maine.edu

