[Mvapich-discuss] Segmentation fault if MPI_Finalize is called without freeing objects with attributes attached
Giordano, Mose
m.giordano at ucl.ac.uk
Mon Jan 2 10:02:11 EST 2023
Hi Hari,
> Could you please let us know if this is a single node run or a multi-node run?
I can reproduce with both single- and multi-node (tested with two) runs.
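The original test case is not quoted in this message, but based on the subject line the failure mode can be sketched as follows. This is a hypothetical minimal reproducer (the keyval on MPI_COMM_SELF and the sentinel value are my own choices, not taken from the report): an attribute is attached to a communicator and MPI_Finalize is called without deleting the attribute or freeing the keyval first.

```c
/* Hypothetical minimal reproducer for the reported crash: attach an
 * attribute to a communicator, then call MPI_Finalize without freeing
 * the attribute or the keyval. Per the report, MVAPICH2 2.3.6/2.3.7
 * segfault inside MPI_Finalize in this situation. */
#include <mpi.h>
#include <stdio.h>

static int my_val = 42; /* arbitrary attribute payload */

int main(int argc, char **argv)
{
    int keyval;

    MPI_Init(&argc, &argv);

    /* Create a keyval with the predefined no-op copy/delete callbacks
     * and attach an attribute to MPI_COMM_SELF. */
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                           &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_SELF, keyval, &my_val);

    /* Deliberately no MPI_Comm_delete_attr / MPI_Comm_free_keyval here:
     * a standard-conforming MPI should still finalize cleanly. */
    MPI_Finalize();

    printf("finalized without crash\n");
    return 0;
}
```

Build and run with something like `mpicc repro.c -o repro && mpirun -np 1 ./repro`; a conforming MPI prints the final message, while the affected MVAPICH2 builds reportedly crash before reaching it.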
> Could you also let us know how MVAPICH was configured (output of mpiname -a) and what compiler versions were used?
Sure. We can reproduce the bug with both MVAPICH2 2.3.6 built with GCC 11.1.0 and MVAPICH2 2.3.7 built with GCC 12.1.0. The `mpiname -a` output for each build follows:
MVAPICH2 2.3.6 Mon March 29 22:00:00 EST 2021 ch3:mrail
Compilation
CC: gcc -DNDEBUG -DNVALGRIND -O2
CXX: g++ -DNDEBUG -DNVALGRIND -O2
F77: gfortran -fallow-argument-mismatch -O2
FC: gfortran -O2
Configuration
--prefix=/lustre/software/mvapich2/gcc11/2.3.6 --with-knem=/opt/knem-1.1.3.90mlnx1 --with-hcoll=/opt/mellanox/hcoll --enable-fortran=all --enable-cxx --with-file-system=lustre --with-slurm=/cm/shared/apps/slurm/current --with-pm=slurm --with-pmi=pmi1 --with-device=ch3:mrail --with-rdma=gen2
----------
MVAPICH2 2.3.7 Wed March 02 22:00:00 EST 2022 ch3:mrail
Compilation
CC: gcc -DNDEBUG -DNVALGRIND -O2
CXX: g++ -DNDEBUG -DNVALGRIND -O2
F77: gfortran -w -fallow-argument-mismatch -O2 -O2
FC: gfortran -O2
Configuration
--prefix=/lustre/software/mvapich2/gcc12/2.3.7 --with-knem=/opt/knem-1.1.4.90mlnx1 --with-hcoll=/opt/mellanox/hcoll --enable-fortran=all --enable-cxx --with-file-system=lustre --with-slurm=/cm/shared/apps/slurm/current --with-pm=slurm --with-device=ch3:mrail --with-pmi=pmi1 --with-rdma=gen2
Best,
Mosè