[mvapich-discuss] Bug: deadlock between ibv_destroy_srq and async_thread

David Kewley David_Kewley at dell.com
Tue Aug 31 15:08:18 EDT 2010


I have a user running a 192-way job using MVAPICH2 1.0.1 and OFED 1.2.5.5,
where MPI_Finalize() does not return.  In the two example jobs I've examined,
189 processes exited, but the other three hung.  The ranks that hung were
different in the two examples, so I don't think the "3" is significant.

All of the hung processes I've looked at appear to be stuck in the same way.
During normal operation each process has four threads; once a process gets
stuck, only the original thread remains.  Here is a gdb backtrace from one:

#0  0x00000036b2608b3a in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/tls/libpthread.so.0
#1  0x0000002a9595405b in ibv_cmd_destroy_srq (srq=0x82b370) at src/cmd.c:582
#2  0x0000002a962b5419 in mthca_destroy_srq (srq=0x82b3bc) at src/verbs.c:475
#3  0x0000002a9564878e in MPIDI_CH3I_CM_Finalize () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
#4  0x0000002a955c053b in MPIDI_CH3_Finalize () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
#5  0x0000002a95626202 in MPID_Finalize () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
#6  0x0000002a955f7fee in PMPI_Finalize () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
#7  0x0000002a955f7eae in pmpi_finalize_ () from /opt/mvapich2/1.0.1/intel/10.1.015/lib/libmpich.so
#8  0x0000000000459ff8 in stoprog_ ()
#9  0x000000000047afa6 in MAIN__ ()
#10 0x0000000000405d62 in main ()

After hours of opportunity to study the MVAPICH2 code :), I think I tracked it
down to lines 1302-1306 in rdma_iba_init.c:

            if (MPIDI_CH3I_RDMA_Process.has_srq) {
                pthread_cancel(MPIDI_CH3I_RDMA_Process.async_thread[i]);
                pthread_join(MPIDI_CH3I_RDMA_Process.async_thread[i], NULL);
                ibv_destroy_srq(MPIDI_CH3I_RDMA_Process.srq_hndl[i]);
            }

Consider what would happen if async_thread() were processing an
IBV_EVENT_SRQ_LIMIT_REACHED event when pthread_cancel() was called on it:
async_thread() has already called ibv_get_async_event() for that event, but
it has not yet called ibv_ack_async_event().  The result would be the
observed deadlock in this part of ibv_cmd_destroy_srq():

        pthread_mutex_lock(&srq->mutex);
        while (srq->events_completed != resp.events_reported)
                pthread_cond_wait(&srq->cond, &srq->mutex);
        pthread_mutex_unlock(&srq->mutex);

That is, events_completed == events_reported - 1 at this point.  The only
thing that would call pthread_cond_signal() and bring events_completed up to
events_reported is a call to ibv_ack_async_event() on this event.  But that
call will never happen, because async_thread() is the only code that would
have made it, and it has already been pthread_cancel()'d and
pthread_join()'d before ibv_destroy_srq() is called.
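
To make the window concrete, here is a minimal sketch of the general shape
of such an async event thread.  This is not the actual MVAPICH2
async_thread() code; names and structure are just my guess at the shape,
with the event handling itself elided:

    #include <pthread.h>
    #include <infiniband/verbs.h>

    /* Illustrative only -- not the actual MVAPICH2 async_thread(). */
    static void *async_thread(void *arg)
    {
        struct ibv_context *ctx = (struct ibv_context *) arg;
        struct ibv_async_event event;

        while (1) {
            /* Blocks until the kernel delivers an async event.  For an
             * SRQ event such as IBV_EVENT_SRQ_LIMIT_REACHED, the kernel
             * counts it toward the events_reported total that
             * ibv_cmd_destroy_srq() later waits on. */
            if (ibv_get_async_event(ctx, &event))
                break;

            /* ... handle the event (rearm the SRQ limit, etc.) ... */

            /* This is the only call that increments events_completed and
             * signals srq->cond.  If the thread is cancelled after the
             * get but before this ack, the event is never acknowledged
             * and ibv_destroy_srq() waits forever. */
            ibv_ack_async_event(&event);
        }
        return NULL;
    }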

I think the fix is to add some sort of synchronization between
async_thread() and the code that calls pthread_cancel() on it, so that the
thread cannot be cancelled while it holds an unacknowledged event; one
possible shape is sketched below.  Do you think you can work up a fix soon,
and forward the patch for testing?
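
For what it's worth, here is a sketch of one possible approach.  It is
untested, and it assumes async_thread() keeps the default deferred
cancellation type and that nothing between the get and the ack is itself a
cancellation point:

    #include <pthread.h>
    #include <infiniband/verbs.h>

    /* Sketch of a possible fix: refuse cancellation while an event is
     * held but not yet acknowledged. */
    static void *async_thread(void *arg)
    {
        struct ibv_context *ctx = (struct ibv_context *) arg;
        struct ibv_async_event event;
        int oldstate;

        while (1) {
            /* It is safe for the cancel to take effect while the thread
             * is blocked here, before an event has been retrieved. */
            if (ibv_get_async_event(ctx, &event))
                break;

            /* Block cancellation until the event has been acknowledged,
             * so ibv_destroy_srq() can never end up waiting for an ack
             * that will never arrive. */
            pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);

            /* ... handle the event as async_thread() does today ... */

            ibv_ack_async_event(&event);
            pthread_setcancelstate(oldstate, NULL);
        }
        return NULL;
    }

A cooperative shutdown (setting a flag and waking the thread) instead of
pthread_cancel() would sidestep any question of cancellation points inside
ibv_get_async_event() entirely, but the above is the smallest change I can
think of.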

Thanks,
David

-- 
David Kewley
Dell Infrastructure Consulting Services
Onsite Engineer at the Maui HPC Center
Cell Phone: 602-460-7617
David_Kewley at Dell.com

Dell Services: http://www.dell.com/services/
How am I doing? Email my manager Russell_Kelly at Dell.com with any feedback.

