[mvapich-discuss] Error: read_ib_one: mixed version executables (5 and 8), no hope

Steve Jones stevejones at stanford.edu
Mon Oct 20 15:39:06 EDT 2008


Hi Joseph.

I'm assuming this is your Rocks cluster. The version of MPIEXEC you're  
using (0.82 or earlier) doesn't support MVAPICH 1.1. I'm not sure of the  
release date for the newer MPIEXEC 0.84 from OSC; the web site still  
says "coming soon". I do know I've built MPIEXEC from their SVN repo  
and it works with MVAPICH 1.0.1, so you might try that. Here are the  
basics to check out the source and build it on your cluster:

If you don't have svn:

  # yum install subversion

Once you have the above installed:

  # cd /share/apps
  # svn co http://svn.osc.edu/repos/mpiexec/trunk mpiexec
  # cd mpiexec
  # ./configure --prefix=/share/apps/mpiexec --with-default-comm=mpich-ib
  # make && make install

You can now run it using the absolute path  
/share/apps/mpiexec/bin/mpiexec, or add it to the login scripts on the  
frontend and compute nodes. You can post to the Rocks list for help  
with customizations on the cluster.
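For reference, a minimal Torque/PBS job script using the freshly built  
mpiexec might look like the sketch below (the job name, node/ppn counts,  
and binary name are placeholders; adjust them to your site -- the -comm ib  
and -np options are the same ones used in your failing run):

```shell
#!/bin/bash
#PBS -N mvapich-test          # placeholder job name
#PBS -l nodes=3:ppn=4         # 12 processes total, matching -np 12 below

cd "$PBS_O_WORKDIR"

# Sanity check: confirm we picked up the SVN-built mpiexec, not /opt/mpiexec.
/share/apps/mpiexec/bin/mpiexec -version

# -comm ib selects the InfiniBand startup protocol for MVAPICH executables.
/share/apps/mpiexec/bin/mpiexec -comm ib -np 12 ./new-f
```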

Steve

Quoting Joseph Hargitai <joseph.hargitai at nyu.edu>:

>  Hi all:
>
> this error message was posted a while ago but did not have a resolution.
>
>
>
>  /share/apps/nyu/mpi/intel/mvapich-1.1rc1/bin/mpirun_rsh -np 12 -hostfile $PBS_NODEFILE ./new-f
>
> works
>
> the same run with mpiexec errors out:
>
>  /opt/mpiexec/bin/mpiexec -comm ib -np 12 ./new-f
>
> mpiexec: Warning: read_ib_one: protocol version 8 not known, but might still work.
> mpiexec: Error: read_ib_one: mixed version executables (5 and 8), no hope.
>
>
> libraries loaded as such:
>
>  ldd new-f
>         libmpich.so.1.0 => /share/apps/nyu/mpi/intel/mvapich-1.1rc1/lib/shared/libmpich.so.1.0 (0x0000002a95557000)
>         libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x0000002a95769000)
>         libibumad.so.1 => /usr/lib64/libibumad.so.1 (0x0000002a95874000)
>         libpthread.so.0 => /lib64/tls/libpthread.so.0 (0x0000003acf100000)
>         librt.so.1 => /lib64/tls/librt.so.1 (0x0000003acf900000)
>         libm.so.6 => /lib64/tls/libm.so.6 (0x0000003acef00000)
>         libc.so.6 => /lib64/tls/libc.so.6 (0x0000003acea00000)
>         libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x0000003acf700000)
>         libdl.so.2 => /lib64/libdl.so.2 (0x0000003aced00000)
>         libimf.so => /usr/local/intel/fce/10.0.023/lib/libimf.so (0x0000002a9599e000)
>         libsvml.so => /usr/local/intel/fce/10.0.023/lib/libsvml.so (0x0000002a95d00000)
>         libintlc.so.5 => /usr/local/intel/fce/10.0.023/lib/libintlc.so.5 (0x0000002a95e81000)
>         libibcommon.so.1 => /usr/lib64/libibcommon.so.1 (0x0000002a95fba000)
>         /lib64/ld-linux-x86-64.so.2 (0x0000003ace600000)
>
>
> --
> mpiexec can run executables built against the 0.9x libraries
> mpirun_rsh 1.1rc1 can run with either version of the libraries loaded
> mpirun_rsh 0.9 can run only with the 0.9x version of the libraries
>
>
> ---
>
> here is how mpirun_rsh 1.1rc was compiled:
>
> [jh2 at compute-0-1 m-test]$ /share/apps/nyu/mpi/intel/mvapich-1.1rc1/bin/mpichversion
> MPICH Version:          1.2.7
> MPICH Release date:     $Date: 2005/06/22 16:33:49$
> MPICH Patches applied:  none
> MPICH configure:        --with-device=ch_gen2 --with-arch=LINUX -prefix=/share/apps/nyu/mpi/intel/mvapich-1.1rc1 --with-romio --enable-cxx --enable-sharedlib --enable-debug --enable-f77 --enable-f90 --without-mpe -lib=-L/usr/lib64 -Wl,-rpath=/usr/lib64 -libverbs -libumad -lpthread
> MPICH Device:           ch_gen2
>
> here is the config file:
>
> #!/bin/bash
>
> # Most variables here can be overridden by exporting them in the environment
> # before running this script.  Default values have been provided if the
> # environment variable is not already set.
>
> source ./make.mvapich.def
>
> # The target architecture.  If not exported outside of this script,
> # it will be found automatically or prompted for if necessary.
> # Supported: "_IA32_", "_IA64_", "_EM64T_", "_X86_64_"
> #
> if [ -z "$ARCH" ]; then
>     arch
> fi
>
> # Mandatory variables.  All are checked except CXX and F90.
> IBHOME=${IBHOME:-/usr/include/infiniband}
> IBHOME_LIB=${IBHOME_LIB:-/usr/lib64}
> PREFIX=${PREFIX:-/share/apps/nyu/mpi/intel/mvapich-1.1rc1}
> export CC=${CC:-/usr/local/intel/cce/10.0.023/bin/icc}
> export CXX=${CXX:-/usr/local/intel/cce/10.0.023/bin/icpc}
> export F77=${F77:-/usr/local/intel/fce/10.0.023/bin/ifort}
> export F90=${F90:-/usr/local/intel/fce/10.0.023/bin/ifort}
>
> if [ $ARCH = "SOLARIS" ]; then
>     die_setup "MVAPICH GEN2 is not supported on Solaris."
> elif [ $ARCH = "MAC_OSX" ]; then
>     die_setup "MVAPICH GEN2 is not supported on MacOS."
> fi
>
> #
> # Compiler specific flags. If you are using
> # ICC on IA64 platform, please set COMPILER_FLAG
> # to "icc"
> #
>
> COMPILER_FLAG=${COMPILER_FLAG:-}
>
> if [ "$COMPILER_FLAG" == "icc" ]; then
>        COMPILER_FLAG="-D_ICC_"
> else
>        COMPILER_FLAG=""
> fi
>
> # Check mandatory variable settings.
> if [ -z "$IBHOME" ] || [ -z "$PREFIX" ] || [ -z "$CC" ] || [ -z "$F77" ]; then
>     die_setup "Please set mandatory variables in this script."
> elif [ ! -d $IBHOME ]; then
>     die_setup "IBHOME directory $IBHOME does not exist."
> fi
>
> # Optional variables.
> #
>
> # Whether to enable ROMIO support.  This is necessary if building the
> # F90 modules.
> if [ -n "$F90" ]; then
>     ROMIO="--with-romio"
> else
>     ROMIO=${ROMIO:---without-romio}
> fi
>
> # PTMALLOC support for MVAPICH2 memory hooks.  Enabling this will allow
> # MVAPICH2 to release memory to the Operating System (when registration
> # cache is enabled).  Enabled by default.  Disable with "no".
> PTMALLOC=${PTMALLOC:-}
>
> if [ "$PTMALLOC" = "no" ]; then
>         PTMALLOC="-DDISABLE_PTMALLOC"
> else
>         PTMALLOC=""
> fi
>
> # Set this to override the automatic optimization setting (-O3).
> OPT_FLAG=${OPT_FLAG:--O3}
>
> if [ -n "$PROCESSOR" ]; then
>        PROCESSOR=-D${PROCESSOR}
> else
>        PROCESSOR=
> fi
>
> export LIBS=${LIBS:--L${IBHOME_LIB} -Wl,-rpath=${IBHOME_LIB} -libverbs -libumad -lpthread}
> export FFLAGS=${FFLAGS:--L${IBHOME_LIB}}
> export CFLAGS=${CFLAGS:--D${ARCH} ${PROCESSOR} ${PTMALLOC} -DEARLY_SEND_COMPLETION -DMEMORY_SCALE -DVIADEV_RPUT_SUPPORT -D_SMP_ -D_SMP_RNDV_ -DCH_GEN2 -D_GNU_SOURCE ${COMPILER_FLAG} -I${IBHOME}/include $OPT_FLAG}
>
> export MPIRUN_CFLAGS="${MPIRUN_CFLAGS} -DLD_LIBRARY_PATH_MPI=\\\"${PREFIX}/lib/shared\\\" -DMPI_PREFIX=\\\"${PREFIX}/\\\" -DPARAM_GLOBAL=\\\"${PREFIX}/etc/mvapich.conf\\\""
>
> # Prelogue
> make distclean &>/dev/null
> set -o pipefail
>
> # Configure MVAPICH
>
> echo "Configuring MVAPICH..."
>
> ./configure --with-device=ch_gen2 --with-arch=LINUX -prefix=${PREFIX} \
>         $ROMIO --enable-cxx --enable-sharedlib --enable-debug --enable-f77 \
>         --enable-f90 --without-mpe -lib="$LIBS" 2>&1 | tee config-mine.log
> ret=$?
> test $ret = 0 ||  die "configuration."
>
> # Build MVAPICH
> echo "Building MVAPICH..."
> make 2>&1 |tee make-mine.log
> ret=$?
> test $ret = 0 ||  die "building MVAPICH."
>
> # Install MVAPICH
> echo "MVAPICH installation..."
> rm -f install-mine.log
> make install 2>&1 |tee install-mine.log
> ret=$?
> test $ret = 0 ||  die "installing MVAPICH."
>
> ---------------
> mpiexec version and config  - maybe a low version?
>
>  /opt/mpiexec/bin/mpiexec -version
> Version 0.82, configure options: '--build=x86_64-redhat-linux-gnu'   
> '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu'   
> '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'   
> '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'   
> '--datadir=/usr/share' '--includedir=/usr/include'   
> '--libdir=/usr/lib64' '--libexecdir=/usr/libexec'   
> '--localstatedir=/var' '--sharedstatedir=/usr/com'   
> '--mandir=/usr/share/man' '--infodir=/usr/share/info'   
> '--prefix=/opt/mpiexec' '--with-pbs=/opt/torque'   
> '--with-default-comm=mpich-p4' 'CFLAGS=-O2 -g -pipe -m64'   
> 'build_alias=x86_64-redhat-linux-gnu'   
> 'host_alias=x86_64-redhat-linux-gnu'   
> 'target_alias=x86_64-redhat-linux-gnu'
>
>
>
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>