[mvapich-discuss] Seg fault in MPI_Finalize with many windows

sreeram potluri potluri at cse.ohio-state.edu
Fri Oct 8 17:49:09 EDT 2010


Hi Jim,

Thank you for reporting the problem. MVAPICH2 allows the maximum number of
concurrent windows to be increased at run time by setting the
MV2_MAX_NUM_WIN environment variable.

Example: ./mpirun_rsh -np 2 node1 node2 MV2_MAX_NUM_WIN=100 window_test
http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.5.1.html#x1-12100011.30
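For reference, a minimal sketch of a reproducer along the lines Jim describes
(this is not his original attachment; the window count, buffer layout, and the
name window_test are illustrative assumptions):

```c
/* Hypothetical sketch: create and then free many concurrent MPI windows.
 * Run with something like:
 *   ./mpirun_rsh -np 2 node1 node2 MV2_MAX_NUM_WIN=100 ./window_test
 */
#include <stdio.h>
#include <mpi.h>

#define NUM_WIN 100  /* illustrative; above the reported 64-window limit */

int main(int argc, char **argv)
{
    int rank, i;
    int buf[NUM_WIN];        /* one integer exposed per window */
    MPI_Win win[NUM_WIN];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Create NUM_WIN concurrent windows, each over a single int. */
    for (i = 0; i < NUM_WIN; i++)
        MPI_Win_create(&buf[i], sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win[i]);

    /* Free them all again before MPI_Finalize. */
    for (i = 0; i < NUM_WIN; i++)
        MPI_Win_free(&win[i]);

    if (rank == 0)
        printf("created and freed %d windows\n", NUM_WIN);

    MPI_Finalize();
    return 0;
}
```

With MV2_MAX_NUM_WIN left at its default this should trip the limit; with the
parameter raised as above it should run to completion.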

However, the overflow manifesting as a segmentation fault during
MPI_Finalize, rather than a clear error message, is a bug on our side. We
will enhance the error handling for this scenario soon.

Please let us know if you see any other issues or have any other questions.

Thank you
Sreeram Potluri

On Fri, Oct 8, 2010 at 5:13 PM, James Dinan <dinan at mcs.anl.gov> wrote:

> Hi,
>
> I'm getting a seg fault during MPI_Finalize after creating more than 64
> windows.  I have attached a small test code that exercises the bug; it
> creates N windows and then frees them.
>
> I first noticed the bug with mvapich2-1.4.1-intel on Glenn at OSC, where it
> happens with more than 16 windows, and confirmed it with mvapich2-1.5.1p1,
> where it happens with more than 64 windows.
>
> This limitation on the number of windows will be a problem for an
> application we're working on here.  Any help would be greatly appreciated.
>
> Thanks,
>  ~Jim.
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>
>
