[mvapich-discuss] How to lower MPI standard?

Kin Fai Tse kftse20031207 at gmail.com
Wed Dec 3 06:35:09 EST 2014


Dear Ake,

Thanks for your suggestion; the problem is fixed.

Best regards,
Kin Fai

2014-12-03 15:03 GMT+08:00 Åke Sandgren <ake.sandgren at hpc2n.umu.se>:

> On 12/02/2014 11:13 PM, Kin Fai Tse wrote:
>
>> Dear all,
>>
>> I am running VASP 5.3.5 with MVAPICH2 2.1a and encountering this error in
>> some of the runs:
>>
>> aborting job:
>> Fatal error in PMPI_Allgatherv:
>> Invalid buffer pointer, error stack:
>> PMPI_Allgatherv(1212): MPI_Allgatherv(sbuf=0x9da0cd0, scount=1944,
>> MPI_DOUBLE_COMPLEX, rbuf=0x9d8a050, rcounts=0x97e5710, displs=0x97e57b0,
>> MPI_DOUBLE_COMPLEX, comm=0x84000003) failed
>> PMPI_Allgatherv(1160): Buffers must not be aliased
>>
>> Googling suggests that this is because the code is not MPI 2.2 standard
>> compliant.
>>
>> Here is the link to Peter Larsson's blog post with the details:
>> https://www.nsc.liu.se/~pla/blog/2014/04/30/vasp535/
>>
>> Is there any setting similar to I_MPI_COMPATIBILITY=4 in MVAPICH2?
>>
>
> Fix the VASP code instead.
> --- a/dfast.F
> +++ b/dfast.F
> @@ -325,7 +325,7 @@ MODULE dfast
>           CALL GGEMM( TRANSA, TRANSB, N1,  PGEMM_HANDLE%NCOL( PGEMM_HANDLE%COMM%
>                B(1,1+PGEMM_HANDLE%OFFSET( PGEMM_HANDLE%COMM%NODE_ME)), LDB, BETA
>
> -         CALL MPI_allgatherv (C(1,1+PGEMM_HANDLE%OFFSET( PGEMM_HANDLE%COMM%NODE_
> +         CALL MPI_allgatherv (MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, &
>                C, PGEMM_HANDLE%NCTOT, PGEMM_HANDLE%OFFDATA, MPIDATA, PGEMM_HANDL
>        ELSE
>           WRITE(*,*) 'internal error in PARALLEL_GGEMM: the second matrix needs
>
>
> --
> Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
> Internet: ake at hpc2n.umu.se   Phone: +46 90 7866134 Fax: +46 90-580 14
> Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se
>
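
For reference, the pattern the patch applies is the MPI_IN_PLACE form of
MPI_Allgatherv: MPI 2.2 forbids the send and receive buffers of a collective
from overlapping, so when a rank's contribution already sits in its slot of
the receive buffer, the send arguments should be MPI_IN_PLACE with a count of
0 and MPI_DATATYPE_NULL. Here is a minimal, self-contained Fortran sketch of
that call (toy buffer layout and names, not the VASP code):

program allgatherv_in_place
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, i
  integer, allocatable :: counts(:), displs(:)
  double precision, allocatable :: buf(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! Toy layout: each rank contributes exactly one element.
  allocate(counts(nprocs), displs(nprocs), buf(nprocs))
  counts = 1
  displs = [(i - 1, i = 1, nprocs)]

  ! Each rank writes its contribution directly into its own slot of the
  ! receive buffer, so no separate (possibly aliased) send buffer exists.
  buf = 0.0d0
  buf(rank + 1) = dble(rank)

  ! MPI_IN_PLACE as the send buffer (count 0, MPI_DATATYPE_NULL) tells
  ! MPI_Allgatherv to take this rank's data from the receive buffer,
  ! avoiding the "Buffers must not be aliased" error under MPI 2.2 rules.
  call MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, &
                      buf, counts, displs, MPI_DOUBLE_PRECISION, &
                      MPI_COMM_WORLD, ierr)

  if (rank == 0) print *, 'gathered: ', buf
  call MPI_Finalize(ierr)
end program allgatherv_in_place

With MVAPICH2 this can be built and run with the usual wrappers, for example
mpif90 allgatherv_in_place.f90 followed by mpirun -np 4 ./a.out.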