[mvapich-discuss] mvapich2-0.9.8 blacs problems

wei huang huanwei at cse.ohio-state.edu
Mon Mar 19 09:22:35 EDT 2007


Hi,

Thanks for letting us know about the problem.

We will take a look at it and get back to you.

Thanks.

Regards,
Wei Huang

774 Dreese Lab, 2015 Neil Ave,
Dept. of Computer Science and Engineering
Ohio State University
OH 43210
Tel: (614)292-8501


On Mon, 19 Mar 2007, Bas van der Vlies wrote:

> wei huang wrote:
> > Hi,
> >
> > Thanks for letting us know the problem. We have generated a patch to
> > address this problem, and have applied it to both the trunk and our svn
> > 0.9.8 branch.
> >
> >
> We have done some more tests and found another problem when using
> mvapich2 and BLACS. These problems show up in user programs: we have
> received reports from our users that their programs produce wrong
> answers.
>
> We have written a small Fortran (g77) program to illustrate one of the
> problems. It calls the same ScaLAPACK routine a number of times.
> Independent of the problem size, the program hangs after 8 or 31
> iterations, except when the number of processes forms a square grid,
> e.g. 1x1, 2x2, ...
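>
> In outline, the test program looks roughly like the sketch below. The
> sketch is illustrative only: PDLASET and PDPOTRF are stand-ins for the
> actual ScaLAPACK calls in scal.f, and the grid setup and matrix
> contents are arbitrary; what matters is the pattern of calling the
> same routine repeatedly on a block-cyclically distributed matrix.
>
> *     Hypothetical sketch of the reproducer (not the actual scal.f).
>       PROGRAM SCAL
>       IMPLICIT NONE
>       INTEGER MAXLEN
>       PARAMETER ( MAXLEN = 1000000 )
>       INTEGER N, NB, NITER, I, INFO, MLOC, NLOC
>       INTEGER IAM, NPROCS, ICTXT, NPROW, NPCOL, MYROW, MYCOL
>       INTEGER DESCA( 9 ), PARAMS( 3 )
>       DOUBLE PRECISION A( MAXLEN )
>       INTEGER NUMROC
>       EXTERNAL NUMROC
> *
> *     Build a (nearly) square NPROW x NPCOL process grid.
>       CALL BLACS_PINFO( IAM, NPROCS )
>       NPROW = INT( DSQRT( DBLE( NPROCS ) ) + 0.5D0 )
>    10 CONTINUE
>       IF( MOD( NPROCS, NPROW ).NE.0 ) THEN
>          NPROW = NPROW - 1
>          GO TO 10
>       END IF
>       NPCOL = NPROCS / NPROW
>       CALL BLACS_GET( -1, 0, ICTXT )
>       CALL BLACS_GRIDINIT( ICTXT, 'Row-major', NPROW, NPCOL )
>       CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
> *
> *     Process (0,0) reads "<size> <block size> <iterations>" from
> *     standard input and broadcasts the values to the whole grid.
>       IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 ) THEN
>          READ( *, * ) PARAMS( 1 ), PARAMS( 2 ), PARAMS( 3 )
>          CALL IGEBS2D( ICTXT, 'All', ' ', 3, 1, PARAMS, 3 )
>       ELSE
>          CALL IGEBR2D( ICTXT, 'All', ' ', 3, 1, PARAMS, 3, 0, 0 )
>       END IF
>       N     = PARAMS( 1 )
>       NB    = PARAMS( 2 )
>       NITER = PARAMS( 3 )
> *
> *     Descriptor for the block-cyclically distributed N x N matrix.
>       MLOC = NUMROC( N, NB, MYROW, 0, NPROW )
>       NLOC = NUMROC( N, NB, MYCOL, 0, NPCOL )
>       IF( MLOC*NLOC.GT.MAXLEN ) CALL BLACS_ABORT( ICTXT, 1 )
>       CALL DESCINIT( DESCA, N, N, NB, NB, 0, 0, ICTXT,
>      $               MAX( 1, MLOC ), INFO )
> *
> *     Call the same ScaLAPACK routine NITER times, refilling the
> *     matrix with a simple SPD pattern before each factorization.
>       DO 20 I = 1, NITER
>          CALL PDLASET( 'Full', N, N, 1.0D0 / DBLE( N ), 2.0D0,
>      $                 A, 1, 1, DESCA )
>          CALL PDPOTRF( 'Lower', N, A, 1, 1, DESCA, INFO )
>          IF( MYROW.EQ.0 .AND. MYCOL.EQ.0 )
>      $      WRITE( *, * ) 'iteration ', I, '  info = ', INFO
>    20 CONTINUE
> *
>       CALL BLACS_GRIDEXIT( ICTXT )
>       CALL BLACS_EXIT( 0 )
>       END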
>
> How to compile the program:
> mpif77 -Wall -g -O0 -o scal scal.f -lscalapack -lfblacs -lcblacs -lblacs
> -llapack -latlas
>
> The program expects the following on standard input:
> <size of matrix> <block size> <number of iterations>
>
> for example:
>   echo '100 16 100' | mpiexec -n <np> ./scal
>
> Regards
>
>
> PS) This program behaves correctly with the Topspin/Cisco software,
> which is based on their InfiniBand stack and on an mvapich1 version.
>
> We are going to test the program with mvapich1 from OSU.
> --
> ********************************************************************
> *                                                                  *
> *  Bas van der Vlies                     e-mail: basv at sara.nl      *
> *  SARA - Academic Computing Services    phone:  +31 20 592 8012   *
> *  Kruislaan 415                         fax:    +31 20 6683167    *
> *  1098 SJ Amsterdam                                               *
> *                                                                  *
> ********************************************************************
>


