[mvapich-discuss] fetch_and_op with MPI_REPLACE error

Nenad Vukicevic nenad at intrepid.com
Thu Mar 3 12:35:15 EST 2016


I am attaching a simple test case to demonstrate a problem with
MPI_REPLACE.  It seems that when mvapich is built with --enable-fast
(O2 or O3), MPI_REPLACE behaves like MPI_SUM.   The test case
does the following:

uint64_t i, val;

for (i = 0; i < 16; ++i)
{
 /* replace slot i on rank 1 with i, twice in a row */
 MPI_Fetch_and_op (&i, &val, MPI_UINT64_T, 1, i, MPI_REPLACE, win);
 MPI_Win_flush (1, win);
 MPI_Fetch_and_op (&i, &val, MPI_UINT64_T, 1, i, MPI_REPLACE, win);
 MPI_Win_flush (1, win);
}

And at the end rank 1 prints its data:

Fast mvapich:
FAST Data on rank 1
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30

Debug mvapich:
DEBUG Data on rank 1
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
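
In case the attachment gets scrubbed by the list, here is a rough
self-contained sketch of what the test does.  The window setup below
(MPI_Win_allocate plus a lock_all epoch, rank 0 as origin, rank 1 as
target) is my reconstruction and may not match the attached test exactly:

/* fop-replace.c -- rough sketch of the test case (run with 2 ranks).
 * The window setup (MPI_Win_allocate + MPI_Win_lock_all) is my
 * reconstruction and may differ from the attached test. */
#include <mpi.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define N 16

int main (int argc, char **argv)
{
  int rank;
  uint64_t *base, i, val;
  MPI_Win win;

  MPI_Init (&argc, &argv);
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);

  /* each rank exposes N uint64_t slots, initialized to zero */
  MPI_Win_allocate (N * sizeof (uint64_t), sizeof (uint64_t),
                    MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
  for (i = 0; i < N; ++i)
    base[i] = 0;
  MPI_Barrier (MPI_COMM_WORLD);

  MPI_Win_lock_all (0, win);
  if (rank == 0)
    {
      /* replace slot i on rank 1 with i, twice in a row */
      for (i = 0; i < N; ++i)
        {
          MPI_Fetch_and_op (&i, &val, MPI_UINT64_T, 1, i, MPI_REPLACE, win);
          MPI_Win_flush (1, win);
          MPI_Fetch_and_op (&i, &val, MPI_UINT64_T, 1, i, MPI_REPLACE, win);
          MPI_Win_flush (1, win);
        }
    }
  MPI_Win_unlock_all (win);
  MPI_Barrier (MPI_COMM_WORLD);  /* all RMA done before rank 1 reads */

  if (rank == 1)
    {
      printf ("Data on rank %d\n", rank);
      for (i = 0; i < N; ++i)
        printf ("%" PRIu64 " ", base[i]);
      printf ("\n");
    }

  MPI_Win_free (&win);
  MPI_Finalize ();
  return 0;
}

Since both MPI_Fetch_and_op calls write the same value, MPI_REPLACE
should leave slot i equal to i no matter how many times it runs; only
something like MPI_SUM would produce the doubled 0 2 4 ... output seen
with the fast build.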

We are running Fedora FC23 with the Mellanox OFED from the Fedora
distribution.  GCC is 5.3.1 on FC23.  The fast build is mvapich 2.1 and
the debug build is 2.2b, although a fast build of 2.2b exhibits the same
problem.

Running the fast version on only one thread works as expected.  This
might be related to the segfault on CSWAP operations with the fast build
that I reported earlier.

Can someone try this test on a machine that is not running FC 23?

-- 
Nenad
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test-fop-replace.tar.gz
Type: application/x-gzip
Size: 905 bytes
Desc: not available
URL: <http://mailman.cse.ohio-state.edu/pipermail/mvapich-discuss/attachments/20160303/b9feefc1/attachment.gz>
