[mvapich-discuss] MVAPICH batch Integration Qs

Sayantan Sur surs at cse.ohio-state.edu
Fri May 21 15:45:12 EDT 2010


Hi Michael,


Thanks for your questions. Our answers are below.


> 1) MVAPICH2 and PBS Integration
>
> We are currently using the TORQUE/Maui scheduler for batch MPI jobs. I was
> wondering whether MVAPICH2 integrates well with TORQUE/Maui so that a job's
> resources (tasks, memory, etc.) can be tracked and monitored for batch
> resource limit enforcement. TORQUE uses the Task Management (TM) API to
> launch tasks, as opposed to, say, the PMI that MPICH2 uses. Unfortunately,
> TORQUE cannot interoperate well with the PMI interface: it does not know
> which processes belong to a particular MPI job, so it cannot track resource
> consumption (CPU time, memory per process, and totals per job) and thus
> cannot kill jobs that exceed their requested limits. Nor can it suspend or
> resume an MPI job, since it does not know the participating processes.
>
> Can MVAPICH2 integrate well with TORQUE?

I am copying this message to Doug Johnson from the Ohio Supercomputer
Center (OSC). OSC has enabled MVAPICH2 with PBS/Torque by using OSC's
mpiexec, which launches the MPI ranks through the TM API so that PBS
knows about, and can account for, every process in the job. Thus,
MVAPICH2 works well with PBS.

http://www.osc.edu/~djohnson/mpiexec/index.php
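
For illustration, a minimal PBS job script sketch using OSC's mpiexec
(the resource request and the executable name ./a.out are placeholders):

  #!/bin/sh
  #PBS -l nodes=4:ppn=8
  #PBS -l walltime=01:00:00

  cd $PBS_O_WORKDIR
  # OSC's mpiexec spawns the ranks through the PBS TM API, so pbs_mom
  # sees every MPI process and can enforce limits, suspend/resume,
  # or kill the whole job.
  mpiexec ./a.out

Since the ranks are children of pbs_mom rather than of an rsh/ssh
daemon, the accounting and signaling problems you describe go away.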

>
> 2) Does MVAPICH2 help the scheduler place tasks on cores? Or can we
> request or specify an explicit placement, or a style of placement, via the
> MVAPICH2 job launcher to the TORQUE system?
>

Yes, MVAPICH2 provides options for explicit CPU placement and affinity:

http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.4.1.html#x1-370006.8
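
As a quick sketch (the core IDs below are only an example; see the
guide section above for the full parameter syntax), run-time parameters
let you bind each local rank to a specific core:

  # enable affinity and bind local ranks 0-3 to cores 0, 1, 4 and 5
  $ mpirun_rsh -np 4 -hostfile hosts \
        MV2_ENABLE_AFFINITY=1 MV2_CPU_MAPPING=0:1:4:5 ./a.out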

> 3) Does MVAPICH2 automatically use shared memory IPC for intra-node MPI
> tasks and IB for inter-node ones? Can I specify, say, the DAPL or OFED
> verbs libraries to use?

Yes, MVAPICH2 automatically uses shared memory for intra-node
communication and InfiniBand for inter-node communication. You can also
select the DAPL or OFED (verbs) interface at build time through
configure options.
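
For reference, a sketch of the build-time selection (option values are
taken from the 1.4 series user guide; please check the guide for your
release):

  # build against the OpenFabrics/Gen2 (OFED verbs) interface
  $ ./configure --with-rdma=gen2

  # or build against a uDAPL implementation instead
  $ ./configure --with-rdma=udapl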

>
> 4) Which OFED is recommended for your latest MVAPICH2?
>

We would recommend using the latest stable OFED so that you get all
the bugfixes in your base InfiniBand stack.
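
If you need to check what is currently installed, the ofed_info
utility shipped with OFED reports the release (the version shown is
illustrative):

  $ ofed_info | head -1
  OFED-1.4.1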

> 5) Any idea when MVAPICH2 1.5 will be out?
>

We are working on it, and it will be available soon :-)

Thanks for your interest.

> Thank you much ......
>
> Michael
>
> --
> % -------------------------------------------------------------------- \
> % Michael E. Thomadakis, Ph.D.  Senior Lead Supercomputer Engineer/Res \
> % E-mail: miket AT tamu DOT edu                   Texas A&M University \
> % web:    http://alphamike.tamu.edu              Supercomputing Center \
> % Voice:  979-862-3931                    Teague Research Center, 104B \
> % FAX:    979-847-8643                  College Station, TX 77843, USA \
> % -------------------------------------------------------------------- \
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>
>



-- 
Sayantan Sur

Research Scientist
Department of Computer Science
The Ohio State University.

