[mvapich-discuss] Control shouldn't reach here in prototype

Dan DANIEL.M.REDIG at saic.com
Mon Feb 2 19:42:19 EST 2009


Hi Matt,

We're running RHEL 4.6 64-bit, one Mellanox ConnectX DDR card per
system, dual quad-core Opterons (8 cores total) with 16 GB memory.  We've
been using Intel Cluster Toolkit including their MPI successfully with
this code.  I haven't tried OpenMPI lately.

I'll look into getting a section of code to review.  Thanks!

Dan

On Mon, 2009-02-02 at 18:51 -0500, Matthew Koop wrote:
> Dan,
> 
> Can you give us some additional information about the system this
> is occurring on, such as the OS, HCAs (multiple HCAs?), number of
> processes, etc.?
> 
> Also, would it be possible to send along the code (or a part) that
> exhibits this problem?
> 
> Additionally, can you try running with the environment variable
> MV2_USE_COALESCE=0 set?
> 
> Thanks,
> 
> Matt
> 
> On Mon, 2 Feb 2009, Dan wrote:
> 
> > I've looked at the source and attempted to understand this error, but I
> > need some help.  The job runs when compiled with other MPIs but not
> > with MVAPICH2.  To get it to run even this far, I had to raise the
> > amount of memory available for semaphores considerably.
> >
> >
> >
> > [7] Abort: [15] Abort: Control shouldn't reach here in prototype, header
> > 155
> > Control shouldn't reach here in prototype, header 190
> >  at line 276 in file ibv_recv.c
> >  at line 276 in file ibv_recv.c
> > [21] Abort: Control shouldn't reach here in prototype, header 84
> >  at line 276 in file ibv_recv.c
> > [25] Abort: Control shouldn't reach here in prototype, header 212
> >  at line 276 in file ibv_recv.c
> > rank 25 in job 3  cmfpos_36963   caused collective abort of all ranks
> >   exit status of rank 25: killed by signal 9
> > rank 15 in job 3  cmfpos_36963   caused collective abort of all ranks
> >   exit status of rank 15: killed by signal 9
> > rank 7 in job 3  cmfpos_36963   caused collective abort of all ranks
> >   exit status of rank 7: killed by signal 9
> >
> >
> >
> > Any suggestions?
> >
> > Thank you!
> >
> > Dan
> > _______________________________________________
> > mvapich-discuss mailing list
> > mvapich-discuss at cse.ohio-state.edu
> > http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> >
> 
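For readers following along in the archive: with mpirun_rsh, MVAPICH2 run-time parameters like the one Matt suggests are passed as VAR=value pairs on the launch command line. A sketch of the workaround run; the process count, hostfile, and binary name are placeholders, not details from the original report:

```shell
# Disable MVAPICH2 message coalescing for this run.
# mpirun_rsh forwards VAR=value pairs to every rank's environment;
# -np 32, ./hosts, and ./my_app below are illustrative placeholders.
mpirun_rsh -np 32 -hostfile ./hosts MV2_USE_COALESCE=0 ./my_app
```

Exporting MV2_USE_COALESCE=0 in the shell before launching works as well, provided the launcher propagates the environment to the compute nodes.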
