[mvapich-discuss] Seg fault in MPI_Finalize with many windows

James Dinan dinan at mcs.anl.gov
Fri Oct 8 18:11:30 EDT 2010


Hi Sreeram,

Thanks for your help!

Setting MV2_MAX_NUM_WIN will definitely help us work around this for 
now.  Unfortunately, I think it will generally be difficult for us to 
determine an up-front cap on the number of windows.  It would be 
helpful if the runtime were able to increase the maximum number of 
windows dynamically, even if that means doing something suboptimal 
like re-allocating the structures at a larger size and copying the 
data over.  That said, I'm ignorant of the underlying implementation, 
so if this doesn't make sense, please ignore.
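
To make that concrete: by "re-allocating and copying" I have in mind 
the usual grow-on-demand pattern, roughly the sketch below.  The names 
(win_entry_t, win_table) are made up for illustration and surely don't 
match the actual MVAPICH2 internals.

    /* Sketch only: a window table that grows on demand.  All names
     * here are hypothetical, not MVAPICH2's real ones. */
    #include <stdlib.h>

    typedef struct { int placeholder; } win_entry_t;

    static win_entry_t *win_table    = NULL;
    static size_t       win_capacity = 0;
    static size_t       win_count    = 0;

    /* Make room for one more window, doubling the table as needed.
     * Returns 0 on success, -1 if allocation fails, so the caller
     * can raise an MPI error instead of overflowing a fixed array. */
    static int win_table_reserve(void)
    {
        if (win_count == win_capacity) {
            size_t       new_cap = win_capacity ? 2 * win_capacity : 64;
            win_entry_t *tmp     = realloc(win_table,
                                           new_cap * sizeof *tmp);

            if (tmp == NULL)
                return -1;
            win_table    = tmp;
            win_capacity = new_cap;
        }
        return 0;
    }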

Thanks again,
  ~Jim.

On 10/08/2010 04:49 PM, Sreeram Potluri wrote:
> Hi Jim,
>
> Thank you for reporting the problem. MVAPICH2 allows the maximum
> number of concurrent windows to be increased at run time by setting
> the MV2_MAX_NUM_WIN parameter.
>
> Example: ./mpirun_rsh -np 2 node1 node2 MV2_MAX_NUM_WIN=100 window_test
> http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.5.1.html#x1-12100011.30
>
> However, the overflow should not surface as a segmentation fault
> during finalize. We will enhance the error handling for this
> scenario soon.
>
> Please let us know if you see any other issues or have any other questions.
>
> Thank you
> Sreeram Potluri
>
> On Fri, Oct 8, 2010 at 5:13 PM, James Dinan <dinan at mcs.anl.gov> wrote:
>
>     Hi,
>
>     I'm getting a seg fault during MPI_Finalize after creating more
>     than 64 windows.  I have attached a small test code that
>     exercises the bug; it creates N windows and then frees them.
>
>     I first noticed the bug with mvapich2-1.4.1-intel on Glenn at OSC,
>     where it happens with more than 16 windows, and confirmed it with
>     mvapich2 1.5.1p1, where it happens with more than 64 windows.
>
>     This limitation on the number of windows will be a problem for an
>     application we're working on here.  Any help would be greatly
>     appreciated.
>
>     Thanks,
>       ~Jim.
>
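
P.S. For the archives, the attached test boils down to the pattern 
below.  This is a from-memory sketch rather than the exact attached 
file; NWIN is arbitrary, and anything above the window cap reproduces 
the crash.

    /* Sketch of the attached test: create N windows, then free them.
     * The seg fault appears in MPI_Finalize once N exceeds the cap
     * (I saw it past 64 windows with 1.5.1p1, past 16 with 1.4.1). */
    #include <mpi.h>

    #define NWIN 128

    int main(int argc, char **argv)
    {
        MPI_Win win[NWIN];
        int     buf[NWIN];
        int     i;

        MPI_Init(&argc, &argv);

        /* Create NWIN windows, each exposing a single int. */
        for (i = 0; i < NWIN; i++)
            MPI_Win_create(&buf[i], (MPI_Aint) sizeof(int),
                           (int) sizeof(int), MPI_INFO_NULL,
                           MPI_COMM_WORLD, &win[i]);

        /* Free them all again. */
        for (i = 0; i < NWIN; i++)
            MPI_Win_free(&win[i]);

        MPI_Finalize();    /* crash observed here */
        return 0;
    }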


