[mvapich-discuss] wrapping calls to polling functions in MVAPICH2

Alexander Breslow abreslow at cs.ucsd.edu
Mon Aug 12 19:12:08 EDT 2013


Hi Jeff,

Thanks again for your time.  I'm using mpiP for now.  We'll see how far
that gets me.  Todd Gamblin's Wrap-Master (https://github.com/tgamblin/wrap)
will be the next step if mpiP doesn't suffice.

-Alex


On Mon, Aug 12, 2013 at 4:05 PM, Jeff Hammond <jeff.science at gmail.com> wrote:

> On IB, most of the useful work in MPI is NIC-driven RDMA, no?  I'm not
> sure how you'll measure that, especially when it's overlapped with
> something on the CPU side.
>
> If you're interested in implicit vs. explicit time spent in
> collectives, one crude way to measure the implicit time is:
>
> MPI_Collective(..,MPI_Comm comm)
> {
>   PMPI_Barrier(comm);
>   PMPI_Collective(..,comm);
> }
>
> You will find that in some cases this perturbs the explicit time,
> so it's a performance Schrödinger's cat.
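
A minimal concrete sketch of the measurement Jeff describes, using
MPI_Allreduce as a stand-in for the generic collective (the MPI-3 signature
is shown, and the accumulator variables are illustrative only):

/* PMPI wrapper that splits MPI_Allreduce time into an "implicit" part
 * (waiting for the slowest rank at the barrier) and an "explicit" part
 * (the collective itself). */
#include <mpi.h>

static double implicit_time = 0.0;  /* time spent waiting on other ranks   */
static double explicit_time = 0.0;  /* time spent in the collective proper */

int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count,
                  MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
{
    double t0 = PMPI_Wtime();
    PMPI_Barrier(comm);                /* absorbs load imbalance            */
    double t1 = PMPI_Wtime();
    int rc = PMPI_Allreduce(sendbuf, recvbuf, count, datatype, op, comm);
    double t2 = PMPI_Wtime();

    implicit_time += t1 - t0;          /* implicit (waiting) cost           */
    explicit_time += t2 - t1;          /* explicit cost of the collective   */
    return rc;
}

Linked into the application (or preloaded as a shared library), this
intercepts the MPI_ entry point while calling through the PMPI_ profiling
interface underneath.
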
>
> Anyways, good luck and welcome to the performance analysis twilight zone
> :-)
>
> Jeff
>
> On Mon, Aug 12, 2013 at 4:11 PM, Alexander Breslow <abreslow at cs.ucsd.edu>
> wrote:
> > Actually, you may disregard my question.  I have thought of another way
> > to accomplish what I need to do.
> >
> > -Alex
> >
> >
> > On Mon, Aug 12, 2013 at 1:29 PM, Alexander Breslow
> > <abreslow at cs.ucsd.edu> wrote:
> >>
> >> Hi Jeff,
> >>
> >> The problem is that I don't want to explicitly time the high-level
> >> functions defined in mpi.h but the low-level ones; in particular, I'm not
> >> directly concerned with the duration of, say, an MPI_Send or MPI_Recv,
> >> but with what percentage of total execution time each MPI process spent
> >> polling while waiting to do useful work.  For asynchronous calls such as
> >> MPI_Isend, I assume that this can be captured relatively easily by timing
> >> MPI_Wait and all its derivatives.  However, I am also interested in
> >> timing the waiting that occurs in blocking calls such as MPI_Recv.
> >>
> >> I realize that this is likely highly implementation dependent, but I am
> >> okay with that.
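
One crude, implementation-independent way to approximate the waiting Alex
describes for a blocking call is to re-express MPI_Recv at the PMPI level as
a nonblocking receive plus a test loop and to time that loop.  A minimal
sketch under that assumption (it measures MPI-level polling rather than
MVAPICH2's internal spin loop, and the counter name is illustrative):

#include <mpi.h>

static double recv_wait_time = 0.0;   /* accumulated waiting time in MPI_Recv */

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source,
             int tag, MPI_Comm comm, MPI_Status *status)
{
    MPI_Request req;
    int rc = PMPI_Irecv(buf, count, datatype, source, tag, comm, &req);
    if (rc != MPI_SUCCESS) return rc;

    int done = 0;
    double t0 = PMPI_Wtime();
    while (!done)                      /* this loop is the visible "spin" */
        PMPI_Test(&req, &done, status);
    recv_wait_time += PMPI_Wtime() - t0;
    return MPI_SUCCESS;
}

Note that polling with PMPI_Test changes the progress behaviour slightly
compared to a true blocking PMPI_Recv, so the numbers are an approximation.
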
> >>
> >> -Alex
> >>
> >>
> >> On Mon, Aug 12, 2013 at 12:55 PM, Jeff Hammond <jeff.science at gmail.com>
> >> wrote:
> >>>
> >>> polling and profiling are different things.  your example is for
> >>> profiling and PMPI is the right way to do that.  you should avoid
> >>> reinventing the wheel and look at mpiP, TAU, etc. though.
> >>>
> >>> if you want to implement asynchrony via polling, then it would help if
> >>> you gave a clear example of what you want in that respect.
> >>>
> >>> jeff
> >>>
> >>> On Mon, Aug 12, 2013 at 2:36 PM, Alexander Breslow
> >>> <abreslow at cs.ucsd.edu> wrote:
> >>> > Hi,
> >>> >
> >>> > I was wondering if there is a single low-level function within
> >>> > MVAPICH2 that implements the polling functionality.  I have seen that
> >>> > polling is adjustable via the MV2_CM_SPIN_COUNT environment variable.
> >>> > My goal is to be able to time all spinning explicitly.
> >>> >
> >>> > If this spinning is implemented by a call to a single function F, I
> >>> > would like to intercept all calls to that function and redirect those
> >>> > calls to another function G that times the duration of each invocation
> >>> > of F.
> >>> >
> >>> > G would be something like the following:
> >>> >
> >>> > G(args){
> >>> >
> >>> > t1 = get_time_ns();
> >>> > F(args);
> >>> > t2 = get_time_ns();
> >>> > register(t2,t1);  // Enqueue for post processing
> >>> >
> >>> > }
> >>> >
> >>> > This seems feasible via the PMPI interface, or by writing a library
> >>> > that uses DYLD functionality if the spinning/polling function appears
> >>> > in the symbol table of the respective MVAPICH2 object files.  If this
> >>> > is not possible, could you please tell me which source files I would
> >>> > have to modify in order to achieve what I desire?
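
For the DYLD/LD_PRELOAD route, a generic symbol-interposition shim would look
roughly like the sketch below.  Here mv2_internal_poll is a hypothetical name
standing in for whatever internal polling routine (if any) the MVAPICH2
shared library actually exports, and the bookkeeping is illustrative:

/* G: intercepts a dynamically exported function F, times each call, and
 * forwards to the real implementation found via dlsym(RTLD_NEXT, ...). */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <time.h>

static double poll_seconds = 0.0;       /* total time spent inside F */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int mv2_internal_poll(void *arg)        /* HYPOTHETICAL signature of F */
{
    static int (*real_poll)(void *) = 0;
    if (!real_poll)                     /* resolve the next definition of F */
        real_poll = (int (*)(void *)) dlsym(RTLD_NEXT, "mv2_internal_poll");

    double t1 = now_sec();
    int rc = real_poll(arg);            /* call F */
    poll_seconds += now_sec() - t1;     /* register(t2, t1) */
    return rc;
}

Built as a shared library (e.g. cc -shared -fPIC shim.c -o libshim.so -ldl)
and loaded with LD_PRELOAD (or DYLD_INSERT_LIBRARIES on Mac OS X), this only
takes effect if the target function is resolved through the dynamic symbol
table; internal or statically bound calls would require modifying the
MVAPICH2 source instead.
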
> >>> >
> >>> > Thanks in advance for your time,
> >>> > Alex
> >>> >
> >>> >
> >>> > _______________________________________________
> >>> > mvapich-discuss mailing list
> >>> > mvapich-discuss at cse.ohio-state.edu
> >>> > http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> >>> >
> >>>
> >>>
> >>>
> >>> --
> >>> Jeff Hammond
> >>> jeff.science at gmail.com
> >>
> >>
> >
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
>