[mvapich-discuss] Segfault in RMA fetch_and_op with sub communicator
Min Si
msi at anl.gov
Thu Jul 14 11:18:37 EDT 2016
Hi Mingzhe,
Thanks for handling this issue. I am using MVAPICH2 2.2rc1. I also see the
same issue with get_accumulate.
Regards,
Min
On 7/13/16 6:40 PM, Mingzhe Li wrote:
> Hi Min,
>
> Thank you for your reproducer. Could you let us know which version of
> MV2 you were using?
>
> Thanks,
> Mingzhe
>
> On Wed, Jul 13, 2016 at 6:42 PM, Min Si <msi at anl.gov
> <mailto:msi at anl.gov>> wrote:
>
> Hi,
>
> I have hit a segfault in the attached RMA program. It performs the
> following two steps:
> 1) Create a sub communicator from MPI_COMM_WORLD that contains only
> rank 0 and the processes connected to rank 0 across nodes. For
> example, with 4 processes in comm world (-np 4 -ppn 2), where ranks
> 0 and 1 are on node 0 and ranks 2 and 3 are on node 1, the sub
> communicator includes ranks 0, 2, and 3.
> 2) Allocate a window on the sub communicator, and perform all-to-one
> fetch_and_op with flush.
>
> The segfault is reported at the first fetch_and_op operation. If I
> use only inter-node processes (e.g., -np 2 -ppn 1) or only
> intra-node processes (e.g., -np 2 -ppn 2), no error occurs.
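>
> Since the attachment was scrubbed from the archive, here is a minimal
> sketch of the reproducer described above. The membership test and all
> variable names are my reconstruction, not the original program: it
> uses MPI_Comm_split_type with MPI_COMM_TYPE_SHARED to find the ranks
> that share a node with rank 0, then builds the sub communicator from
> rank 0 plus all off-node ranks.
>
> ```c
> #include <mpi.h>
> #include <stdio.h>
>
> int main(int argc, char **argv)
> {
>     MPI_Init(&argc, &argv);
>
>     int rank;
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>     /* Find the ranks sharing this node. */
>     MPI_Comm shm_comm;
>     MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, rank,
>                         MPI_INFO_NULL, &shm_comm);
>
>     /* Does world rank 0 belong to my shared-memory communicator? */
>     MPI_Group world_grp, shm_grp;
>     MPI_Comm_group(MPI_COMM_WORLD, &world_grp);
>     MPI_Comm_group(shm_comm, &shm_grp);
>     int zero = 0, zero_in_shm;
>     MPI_Group_translate_ranks(world_grp, 1, &zero, shm_grp,
>                               &zero_in_shm);
>     int on_rank0_node = (zero_in_shm != MPI_UNDEFINED);
>
>     /* Step 1: sub communicator = rank 0 + all ranks on other nodes. */
>     int in_sub = (rank == 0) || !on_rank0_node;
>     MPI_Comm sub_comm;
>     MPI_Comm_split(MPI_COMM_WORLD, in_sub ? 0 : MPI_UNDEFINED, rank,
>                    &sub_comm);
>
>     if (sub_comm != MPI_COMM_NULL) {
>         /* Step 2: window on the sub communicator, all-to-one
>          * fetch_and_op targeting rank 0, followed by a flush. */
>         MPI_Win win;
>         long *base;
>         MPI_Win_allocate(sizeof(long), sizeof(long), MPI_INFO_NULL,
>                          sub_comm, &base, &win);
>         *base = 0;
>         MPI_Barrier(sub_comm);  /* ensure target memory is initialized */
>
>         MPI_Win_lock_all(0, win);
>         long one = 1, result = -1;
>         MPI_Fetch_and_op(&one, &result, MPI_LONG, 0 /* target */, 0,
>                          MPI_SUM, win);
>         MPI_Win_flush(0, win);  /* segfault reported here */
>         MPI_Win_unlock_all(win);
>
>         MPI_Win_free(&win);
>         MPI_Comm_free(&sub_comm);
>     }
>
>     MPI_Group_free(&world_grp);
>     MPI_Group_free(&shm_grp);
>     MPI_Comm_free(&shm_comm);
>     MPI_Finalize();
>     return 0;
> }
> ```
>
> Run across two nodes (e.g., mpirun -np 4 -ppn 2) to get a mixed
> inter/intra-node sub communicator, which is the failing case.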
>
> Could you please help me figure out this issue? Thanks.
>
> Min
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> <mailto:mvapich-discuss at cse.ohio-state.edu>
> http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>
>