[mvapich-discuss] MPI 3.0 - problem in neighborhood collective communication

Phanisri Pradeep Pratapa ppratapa at gatech.edu
Wed Aug 5 19:05:21 EDT 2015


Hi,

I am a beginner to MPI. I have found an issue with the MPI 3.0 neighborhood
collectives and was not sure of the right platform to raise it.

*Summary:*
I am facing a problem when using 1 or 2 processors per direction (with a
Cartesian topology for communication). The output buffer data comes out in
swapped order compared to the input buffer data. This problem does not arise
when using more than 2 processors in each direction.


*Details:*
I am implementing a Poisson solver (with periodic boundary conditions) using
MPI 3.0. I use finite differences and created a Cartesian topology of
processors over a cubical domain using "MPI_Dist_graph_create_adjacent". I use
MPI_Neighbor_alltoall for the communication, and I face a problem when there
are only 1 or 2 processors in each direction of the domain. This is exactly
the case where, for any given processor, the source and destination processors
in a direction are the same rank (because of the periodic boundary). In this
case, the output data from the collective comes back in swapped order. If I
use 3 or more processors in each direction (i.e. 3^3, 4^3, or more in total),
I do not see any swapping, presumably because the source and destination
processors are then distinct for every processor in every direction.
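
For reference, here is a minimal C sketch of the kind of set-up I mean. The
decomposition, the neighbour ordering (-x, +x, -y, +y, -z, +z), and the buffer
contents are illustrative assumptions, not my exact code:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Periodic 3-D Cartesian decomposition; dims chosen by MPI. */
    int dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1}, coords[3];
    MPI_Dims_create(size, 3, dims);
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_coords(cart, rank, 3, coords);

    /* Six neighbours, ordered -x, +x, -y, +y, -z, +z (my assumption). */
    int nbrs[6];
    for (int d = 0; d < 3; d++)
        MPI_Cart_shift(cart, d, 1, &nbrs[2*d], &nbrs[2*d + 1]);

    /* Adjacent graph with identical source and destination lists. */
    MPI_Comm graph;
    MPI_Dist_graph_create_adjacent(cart, 6, nbrs, MPI_UNWEIGHTED,
                                   6, nbrs, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &graph);

    /* One value per neighbour; with 1 or 2 processes in a direction the
       left and right neighbour in that direction are the same rank. */
    double sendbuf[6], recvbuf[6];
    for (int i = 0; i < 6; i++) sendbuf[i] = rank * 10.0 + i;

    MPI_Neighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                          recvbuf, 1, MPI_DOUBLE, graph);

    for (int i = 0; i < 6; i++)
        printf("rank %d: recvbuf[%d] = %g (expected from rank %d)\n",
               rank, i, recvbuf[i], nbrs[i]);

    MPI_Comm_free(&graph);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}

With 1 or 2 processes in a direction, MPI_Cart_shift returns the same rank for
both the source and the destination in that direction, which is the degenerate
case I am describing.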

For now, the workaround I am using is to read the output data in swapped order
whenever there are only 1 or 2 processors in a direction (a rough illustration
follows below).
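
Roughly, the fix-up looks like this (illustrative only; dims, recvbuf, and the
neighbour ordering are the ones from the sketch above, not necessarily my
exact code):

/* Swap the paired entries whenever a direction has only 1 or 2 processes,
   i.e. whenever the left and right neighbour coincide. */
for (int d = 0; d < 3; d++) {
    if (dims[d] <= 2) {
        double tmp        = recvbuf[2*d];
        recvbuf[2*d]      = recvbuf[2*d + 1];
        recvbuf[2*d + 1]  = tmp;
    }
}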



Please let me know whether this is a bug in MPI or whether I am doing
something incorrectly. Also, kindly let me know if you need any further
information about the problem I am facing.

Thank you,

Best Regards,

Phanisri Pradeep Pratapa