[mvapich-discuss] different mpiexec options

nilesh awate nilesh_awate at yahoo.com
Wed Jan 2 03:50:02 EST 2008


Hi Lei,
Thanks a bunch, my problem has been solved.
Actually, I had seen the machinefile option in the help output, but I didn't find much about how to specify it in man mpiexec.
Thanks & regards,
Nilesh Awate

C-DAC R&D



----- Original Message ----
From: LEI CHAI <chai.15 at osu.edu>
To: nilesh awate <nilesh_awate at yahoo.com>
Cc: mvapich-discuss at cse.ohio-state.edu
Sent: Tuesday, 1 January, 2008 12:37:56 AM
Subject: Re: [mvapich-discuss] different mpiexec options

Hi Nilesh,

You can map processes to machines with the -machinefile option. For
example, suppose you have four nodes, m[1-4], and you want to run 4
processes on a single node, say m1. Without modifying mpd.hosts, you
can run the program like this:

$ mpiexec -machinefile ./mf -n 4 ./a.out

where mf is a file containing the machine mapping, e.g.

$ cat mf
m1
m1
m1
m1

And if you want to run 4 processes on 2 nodes, then mf may look like
 this:

$ cat mf
m1
m2
m1
m2
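
The same launch command is used with this file; as a sketch (assuming
the usual MPD behavior of assigning ranks in the order the hosts are
listed), ranks 0 and 2 would land on m1 and ranks 1 and 3 on m2:

$ mpiexec -machinefile ./mf -n 4 ./a.out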

More information about running mvapich2 can be found in the mvapich2
 user guide:

http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2.html
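
To check where each rank actually runs, a minimal MPI program that
prints its host name can be launched the same way (a sketch only; the
file name whereami.c is hypothetical and is not the original ./mpitst):

#include <mpi.h>
#include <stdio.h>

/* Print each rank and the host it is running on. */
int main(int argc, char **argv)
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);
    printf("rank %d is running on %s\n", rank, host);
    MPI_Finalize();
    return 0;
}

$ mpicc -o whereami whereami.c
$ mpiexec -machinefile ./mf -n 4 ./whereami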

Thanks,
Lei

----- Original Message ----

Hi all,

I'm using mvapich2-1.0.1 with the OFED 1.2 uDAPL stack and have set up
4 nodes. But when I run the following command

mpiexec -n 4 ./mpitst

mpitst gets executed on all 4 nodes. Can I restrict its execution to
only 2 nodes (without reducing the number of nodes in mpd.hosts) by
specifying an option when running mpiexec? Which different options can
we give to mpiexec? Suppose I want to run 4 instances of the executable
on a single node with a quad-core CPU; how can I tell mpiexec to run
them on a single node and let the other nodes remain idle?

Waiting for a reply.
Regards,
Nilesh Awate
C-DAC R&D

