[mvapich-discuss] Need help: my install gives a segmentation fault

Yi Huizhan yihz at paratera.com
Thu Jul 25 22:50:14 EDT 2019


I have installed MVAPICH2 2.3.1 and 2.2 with the Intel 2019 compilers, but IMB compiled with this MPI reports a segmentation fault.
My configure line:
../mvapich2-2.3.1/configure --with-device=ch3:mrail --with-rdma=gen2 -prefix=/public1/soft/mvapich2/2.3.1-pmi1 --with-pmi=pmi1 --with-pm=slurm --disable-umad --with-ib-include=/usr/include/infiniband --with-ib-libpath=/usr/lib64 --enable-threads=single --enable-cxx --enable-f77 --enable-fc --enable-romio --with-ch3-rank-bits=32 --without-mpe --without-hwloc --disable-rdma-cm --disable-mcast --with-atomic-primitives=no --disable-fuse --enable-registration-cache --enable-smpcoll CC=icc CXX=icpc FC=ifort F77=ifort
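To check whether a binary built against this install fails already at startup (PMI-1 under Slurm) rather than in IMB itself, a minimal test like the following can be compiled with the mpicc wrapper from the prefix above and launched as a single rank. This is a sketch for isolating the problem, not part of the original report; the file name and launch command are assumptions.

#include <mpi.h>
#include <stdio.h>

/* init_only.c -- does MPI_Init itself succeed with this build?
 * Compile: /public1/soft/mvapich2/2.3.1-pmi1/bin/mpicc init_only.c -o init_only
 * Run (assumption): srun -n 1 ./init_only
 */
int main(int argc, char **argv)
{
    int rank = -1, size = 0;

    MPI_Init(&argc, &argv);               /* a crash here points at PMI/startup */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d: MPI_Init OK\n", rank, size);
    MPI_Finalize();
    return 0;
}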


The OFED version is 4.6-1.0.1.1.


[deploy at ln2%cstc9 build-mvapich2-2.3.1-pmi]$ ibstat
CA 'mlx4_0'
        CA type: MT4099
        Number of ports: 1
        Firmware version: 2.42.5000
        Hardware version: 1
        Node GUID: 0xf4521403000ba410
        System image GUID: 0xf4521403000ba413
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 56
                Base lid: 18
                LMC: 0
                SM lid: 1
                Capability mask: 0x02514868
                Port GUID: 0xf4521403000ba411
                Link layer: InfiniBand
[deploy at ln2%cstc9 build-mvapich2-2.3.1-pmi]$ ibstatus
Infiniband device 'mlx4_0' port 1 status:
        default gid:     fe80:0000:0000:0000:f452:1403:000b:a411
        base lid:        0x12
        sm lid:          0x1
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            56 Gb/sec (4X FDR)
        link_layer:      InfiniBand


OS:
CentOS Linux release 7.6.1810 (Core)


When I submit a job compiled with this MPI, it just gives a segmentation fault.
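Since IMB crashes when the ranks actually communicate, a two-rank send/recv run across two nodes would exercise the ch3:mrail (gen2) InfiniBand path and may reproduce the failure without IMB. This is a hypothetical sketch, not something from the original report:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* pingpong.c -- minimal two-rank exchange over the InfiniBand channel.
 * Run with two ranks on two different nodes (assumption: srun -N 2 -n 2 ./pingpong).
 */
int main(int argc, char **argv)
{
    char buf[64];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        strcpy(buf, "ping");
        MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 got \"%s\" back\n", buf);
    } else if (rank == 1) {
        MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        strcpy(buf, "pong");
        MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}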