[Mvapich-discuss] mvapich 3.0b srun start failure

christof.koehler at bccms.uni-bremen.de
Fri May 19 02:31:34 EDT 2023


Hello Nat,

there may have been a mix-up on our side.

In the meantime we upgraded to slurm 23.02.2 for unrelated reasons, without
thinking too much about it since the cluster is not yet in regular use.
However, the release notes contain:
"NOTE: PMIx v1.x is no longer supported."

I do not understand the version naming of PMI(x), so this may or may not
be related to the problem I observed. However, the slurm-libpmi rpm we
built and installed still contains
/usr/lib64/libpmi.so
/usr/lib64/libpmi.so.0
/usr/lib64/libpmi.so.0.0.0
which I believe is the PMI-1 library that mvapich needs.
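
One way to check which PMI generation that library actually exports is
to look at its dynamic symbols (just a sketch; the path is the one from
our slurm-libpmi rpm):

$ nm -D /usr/lib64/libpmi.so.0.0.0 | grep -E ' PMI2?_Init'
$ srun --mpi=list

A PMI-1 library should list PMI_Init rather than PMI2_Init, and
srun --mpi=list shows which MPI plugins this slurm build still offers.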


Best Regards

Christof

On Thu, May 18, 2023 at 01:46:35PM +0000, Shineman, Nat wrote:
> Hi Christof,
> 
> Thanks for reporting this. It looks like srun is unable to get your process mapping from the slurm daemon and is falling back to an alternate method. We have overridden that fallback to support other launchers with PMI1 support, and it looks like we did not provide the correct safeties to ensure it still works with slurm. I should be able to provide you with a patch shortly. In the meantime, yes, you can try building with hydra and/or mpirun_rsh by removing the slurm arguments. Both of those launchers have some degree of integration with slurm.
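
A minimal sketch of that reconfigure (prefix and device taken from the
configure line quoted further down; dropping the two slurm arguments
selects the default hydra process manager and its bundled PMI):

$ ./configure --with-device=ch4:ofi \
      --prefix=/cluster/mpi/mvapich2/3.0a/gcc11.3.1
$ make -j && make install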
> 
> Thanks,
> Nat
> ________________________________
> From: Mvapich-discuss <mvapich-discuss-bounces at lists.osu.edu> on behalf of christof.koehler--- via Mvapich-discuss <mvapich-discuss at lists.osu.edu>
> Sent: Thursday, May 18, 2023 07:47
> To: mvapich-discuss at lists.osu.edu <mvapich-discuss at lists.osu.edu>
> Subject: [Mvapich-discuss] mvapich 3.0b srun start failure
> 
> Hello everybody,
> 
> I have now started to test the mvapich 3.0b build. It was compiled on Rocky
> Linux 9.1 with slurm 23.02.2 and gcc 11.3.1. See the end of this email for
> the mpichversion output.
> 
> When I try to start a simple MPI hello world with srun --mpi=pmi2,
> I see error messages concerning PMI and a segfault; see also the end
> of this email. The same MPI hello world source code using the same
> srun --mpi=pmi2 invocation (but obviously different binaries) works fine
> with mvapich2 2.3.7, mpich 4.1.1 and openmpi 4.1.5.
> 
> Should I try another launcher, e.g. hydra, by not setting --with-pm and
> --with-pmi? Would the hydra launcher be able to communicate with slurm,
> though?
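
For what it is worth, a rough sketch of how a launch without srun might
look inside a slurm allocation (./hello stands in for the hello world
binary and the hostfile name is made up; hydra's mpiexec should detect
the allocation on its own, while mpirun_rsh needs an explicit hostfile):

$ salloc -N 1 -n 10
$ mpiexec -n 10 ./hello
$ scontrol show hostnames "$SLURM_JOB_NODELIST" > hosts
$ mpirun_rsh -np 10 -hostfile hosts ./hello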
> 
> Best Regards
> 
> Christof
> 
> $ mpichversion
> MVAPICH Version:        3.0b
> MVAPICH Release date:   04/10/2023
> MVAPICH Device:         ch4:ofi
> MVAPICH configure:      --with-pm=slurm --with-pmi=pmi1
> --with-device=ch4:ofi --prefix=/cluster/mpi/mvapich2/3.0a/gcc11.3.1
> MVAPICH CC:     gcc    -DNDEBUG -DNVALGRIND -O2
> MVAPICH CXX:    g++   -DNDEBUG -DNVALGRIND -O2
> MVAPICH F77:    gfortran -fallow-argument-mismatch  -O2
> MVAPICH FC:     gfortran   -O2
> MVAPICH Custom Information:     @MVAPICH_CUSTOM_STRING@
> 
> Error Message:
> 
> INTERNAL ERROR: invalid error code 6163 (Ring ids do not match) in
> MPIR_NODEMAP_build_nodemap_fallback:355
> Abort(2141455) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init:
> Other MPI error, error stack:
> MPIR_Init_thread(175)...................:
> MPID_Init(509)..........................:
> MPIR_pmi_init(119)......................:
> build_nodemap(882)......................:
> MPIR_NODEMAP_build_nodemap_fallback(355):
> In: PMI_Abort(2141455, Fatal error in PMPI_Init: Other MPI error, error
> stack:
> MPIR_Init_thread(175)...................:
> MPID_Init(509)..........................:
> MPIR_pmi_init(119)......................:
> build_nodemap(882)......................:
> MPIR_NODEMAP_build_nodemap_fallback(355): )
> INTERNAL ERROR: invalid error code 6106 (Ring ids do not match) in
> MPIR_NODEMAP_build_nodemap_fallback:355
> Abort(2141455) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init:
> Other MPI error, error stack:
> MPIR_Init_thread(175)...................:
> MPID_Init(509)..........................:
> MPIR_pmi_init(119)......................:
> build_nodemap(882)......................:
> MPIR_NODEMAP_build_nodemap_fallback(355):
> In: PMI_Abort(2141455, Fatal error in PMPI_Init: Other MPI error, error
> stack:
> MPIR_Init_thread(175)...................:
> MPID_Init(509)..........................:
> MPIR_pmi_init(119)......................:
> build_nodemap(882)......................:
> MPIR_NODEMAP_build_nodemap_fallback(355): )
> srun: error: gpu001: tasks 0-9: Segmentation fault (core dumped)
> 
> 
> 
> 
> --
> Dr. rer. nat. Christof Köhler       email: c.koehler at uni-bremen.de
> Universitaet Bremen/FB1/BCCMS       phone:  +49-(0)421-218-62334
> Am Fallturm 1/ TAB/ Raum 3.06       fax: +49-(0)421-218-62770
> 28359 Bremen
> _______________________________________________
> Mvapich-discuss mailing list
> Mvapich-discuss at lists.osu.edu
> https://lists.osu.edu/mailman/listinfo/mvapich-discuss

-- 
Dr. rer. nat. Christof Köhler       email: c.koehler at uni-bremen.de
Universitaet Bremen/FB1/BCCMS       phone:  +49-(0)421-218-62334
Am Fallturm 1/ TAB/ Raum 3.06       fax: +49-(0)421-218-62770
28359 Bremen  


