[mvapich-discuss] reduce in place

Krishna Chaitanya Kandalla kandalla at cse.ohio-state.edu
Fri Dec 11 15:10:12 EST 2009


Roland,
            This could be a bug in our code. Could you please try running 
your application with MV2_USE_SHMEM_REDUCE=0? I just tested your sample 
code with this flag set and it passed. We will try to fix this bug soon.
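
For example, assuming you launch with mpirun_rsh, the flag can be set 
on the command line (hostnames and the binary name are just placeholders 
for your setup):

    mpirun_rsh -np 2 host1 host2 MV2_USE_SHMEM_REDUCE=0 ./reduce_test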

Thanks,
Krishna

Roland Schulz wrote:
> Hi,
>
> the program below crashes in MPI_Reduce. It works fine with Open MPI, 
> as does an equivalent version that does not use MPI_IN_PLACE (sketched 
> after the code below). Is MPI_IN_PLACE not supported with MPI_Reduce?
> Or am I calling it wrong? Or is this a bug in MVAPICH?
> It segfaults in both 1.0.2 and 1.4 (with OFED 1.1).
>
> Thanks Roland
>
> ==========
> #include <mpi.h>
> #include <stdlib.h>
>
> int main(int argc, char** argv)
> {
>   int nr = 4;   /* element count: MPI_Reduce takes an int count */
>   double *r = (double*)malloc(nr*sizeof(double));
>   MPI_Comm comm_intra;
>   int rank, i;
>
>   MPI_Init(&argc,&argv);
>   comm_intra = MPI_COMM_WORLD;
>   MPI_Comm_rank(comm_intra, &rank);
>
>   for (i = 0; i < nr; i++)   /* give the buffer defined contents */
>     r[i] = rank;
>
>   if (rank==0) {
>     /* root contributes and receives in the same buffer */
>     MPI_Reduce(MPI_IN_PLACE,r,nr,MPI_DOUBLE,MPI_SUM,0,
>                comm_intra);
>   } else {
>     /* recvbuf is only significant at the root, so NULL is fine */
>     MPI_Reduce(r,NULL,nr,MPI_DOUBLE,MPI_SUM,0,comm_intra);
>   }
>   free(r);
>   MPI_Finalize();
>   return 0;
> }
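>
> (For comparison, the equivalent version without MPI_IN_PLACE mentioned 
> above replaces the rank-0 call with a separate receive buffer, roughly 
> like this; the buffer name "result" is just illustrative:)
>
>   double *result = (double*)malloc(nr*sizeof(double));
>   MPI_Reduce(r, result, nr, MPI_DOUBLE, MPI_SUM, 0, comm_intra);
>   free(result);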
>
> -- 
> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
> 865-241-1537, ORNL PO BOX 2008 MS6309

