[mvapich-discuss] Bug: deadlock between ibv_destroy_srq and async_thread

Dhabaleswar Panda panda at cse.ohio-state.edu
Tue Aug 31 16:09:54 EDT 2010


Hi David,

This is a follow-up reply.

I looked at the Changelog for MVAPICH2. You can check this out from the
following URL:

http://mvapich.cse.ohio-state.edu/download/mvapich2/changes.shtml

We have the following entry in MVAPICH2 1.0.3 release Changelog
(06/10/08):

=======================================================================
  Add additional synchronization before pthread_cancel() call in
  finalization to avoid killing the thread that is supposed to
  acknowledge outstanding IB events. Thanks to David Kewley from Dell
  for reporting this issue.
=======================================================================

It looks like you reported this issue to us earlier and the fix went into
the MVAPICH2 1.0.3 release. This fix should be present in all follow-up
releases as well.

In addition, several further fixes related to SRQ have gone into the later
releases.

As I indicated in my earlier e-mail, it would be good if you could download
the latest 1.5 version and try it out. You should see better performance
and scalability with the latest version. Let us know if you see any issues
with 1.5 and we will be able to take an in-depth look at it.

Thanks,

DK

On Tue, 31 Aug 2010, Dhabaleswar Panda wrote:

> Hi David,
>
> Thanks for your note and the patch. This is a very old version of MVAPICH2
> (1.0.1, released in Oct 2007). The latest MVAPICH2 release is 1.5, and we
> are approaching the 1.5.1 release in a few weeks. The codebase has
> changed significantly since 1.0.1 and many new enhancements and features
> have been added. It will be very hard for us to provide any support for
> this older 1.0.1 version. May I request that you download the latest 1.5
> version (use the branch version) and try it with this application. If the
> issue still persists with this latest version, we will be happy to take an
> in-depth look at it.
>
> Thanks,
>
> DK
>
>
> On Thu, 22 May 2008, David Kewley wrote:
>
> > I have a user running a 192-way job using MVAPICH2 1.0.1 and OFED 1.2.5.5,
> > where MPI_Finalize() does not return.  In the two example jobs I've examined,
> > 189 processes exited, but the other three hung.  The ranks that hung were
> > different in the two examples, so I don't think the "3" is significant.
> >
> > All processes I've looked at appear to be stuck in the same way.  In normal
> > running, each process has four threads.  When the process gets stuck, only the
> > original thread remains.  Here is a gdb backtrace from one:
> >
> > #0  0x00000036b2608b3a in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/tls/libpthread.so.0
> > #1  0x0000002a9595405b in ibv_cmd_destroy_srq (srq=0x82b370) at src/cmd.c:582
> > #2  0x0000002a962b5419 in mthca_destroy_srq (srq=0x82b3bc) at src/verbs.c:475
> > #3  0x0000002a9564878e in MPIDI_CH3I_CM_Finalize () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
> > #4  0x0000002a955c053b in MPIDI_CH3_Finalize () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
> > #5  0x0000002a95626202 in MPID_Finalize () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
> > #6  0x0000002a955f7fee in PMPI_Finalize () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
> > #7  0x0000002a955f7eae in pmpi_finalize_ () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
> > #8  0x0000000000459ff8 in stoprog_ ()
> > #9  0x000000000047afa6 in MAIN__ ()
> > #10 0x0000000000405d62 in main ()
> >
> > After hours of opportunity to study the MVAPICH2 code :), I think I tracked it
> > down to lines 1302-1306 in rdma_iba_init.c:
> >
> >             if (MPIDI_CH3I_RDMA_Process.has_srq) {
> >                 pthread_cancel(MPIDI_CH3I_RDMA_Process.async_thread[i]);
> >                 pthread_join(MPIDI_CH3I_RDMA_Process.async_thread[i], NULL);
> >                 ibv_destroy_srq(MPIDI_CH3I_RDMA_Process.srq_hndl[i]);
> >             }
> >
> > Consider what would happen if async_thread() were processing an
> > IBV_EVENT_SRQ_LIMIT_REACHED event when pthread_cancel() was called on
> > async_thread().  async_thread() has already called ibv_get_async_event()
> > for this event, but it has not yet called ibv_ack_async_event().  The
> > result would be the observed deadlock in this part of
> > ibv_cmd_destroy_srq():
> >
> >         pthread_mutex_lock(&srq->mutex);
> >         while (srq->events_completed != resp.events_reported)
> >                 pthread_cond_wait(&srq->cond, &srq->mutex);
> >         pthread_mutex_unlock(&srq->mutex);
> >
> > That is, events_completed == events_reported-1 at this point.  The
> > pthread_cond_signal() would be called, and events_completed could be made
> > equal to events_reported, only by calling ibv_ack_async_event() on this
> > event.  But that will never happen because async_thread() is the only code
> > that would have done that, and it's already been pthread_cancel()'d and
> > pthread_join()'d before ibv_destroy_srq() is called.
> >
> > I think the fix is to add some sort of synchronization between
> > async_thread() and the code that calls the pthread_cancel() on it.  Do you
> > think you can work up a fix soon, and forward the patch for testing?
> >
> > Thanks,
> > David
> >
> > --
> > David Kewley
> > Dell Infrastructure Consulting Services
> > Onsite Engineer at the Maui HPC Center
> > Cell Phone: 602-460-7617
> > David_Kewley at Dell.com
> >
> > Dell Services: http://www.dell.com/services/
> > How am I doing? Email my manager Russell_Kelly at Dell.com with any feedback.
> > _______________________________________________
> > mvapich-discuss mailing list
> > mvapich-discuss at cse.ohio-state.edu
> > http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> >
>
>


