[mvapich-discuss] MVAPICH2 1.7 + LiMIC2 0.5.5

Tibor Pausz pausz at th.physik.uni-frankfurt.de
Fri Feb 24 10:52:50 EST 2012


Hello Jonathan,

I have run the commands as you suggested. Here are the results:

On 24.02.2012 15:02, Jonathan Perkins wrote:
> Hello Tibor,
>
> Does this also fail for a simple 2-process benchmark (osu_latency, for
> example)?
srun osu_latency
In: PMI_Abort(1, Fatal error in MPI_Init:
Other MPI error
)
In: PMI_Abort(1, Fatal error in MPI_Init:
Other MPI error
)
LiMIC: (limic_open) file open fail
LiMIC: (limic_open) file open fail
slurmd[node8-094]: *** STEP 10071.6 KILLED AT 2012-02-24T16:50:09 WITH SIGNAL 9 ***
srun: Job step aborted: Waiting up to 2 seconds for job step to finish.
slurmd[node8-094]: *** STEP 10071.6 KILLED AT 2012-02-24T16:50:09 WITH SIGNAL 9 ***
slurmd[node8-094]: *** STEP 10071.6 KILLED AT 2012-02-24T16:50:09 WITH SIGNAL 9 ***
srun: error: node8-094: tasks 0-1: Exited with exit code 1
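
The "LiMIC: (limic_open) file open fail" message means the userspace
library could not open the LiMIC character device. A quick sanity check
on the allocated node, assuming the default device node /dev/limic used
by LiMIC2 0.5.5:

# Check that the LiMIC character device exists (assumes /dev/limic)
ls -l /dev/limic
# Verify the job's user can actually read and write it
[ -r /dev/limic ] && [ -w /dev/limic ] && echo "limic device OK" \
    || echo "limic device missing or inaccessible"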



> Can you provide the output of the following commands on one
> of the allocated nodes?
>
> mpiname -a

MVAPICH2 1.7 Thu Oct 13 17:31:44 EDT 2011 ch3:mrail

Compilation
CC: icc -I/cm/shared/apps/slurm/2.3.3/include   -DNDEBUG -DNVALGRIND -O2
CXX: icpc   -DNDEBUG -DNVALGRIND -O2
F77: ifort   -O2
FC: ifort   -O2

Configuration
--prefix=/home/dftfunc/pausz/soft/mvapich2-1.7-limic
--with-device=ch3:mrail --with-rdma=gen2 --enable-shared --enable-xrc
--with-pm=no --with-pmi=slurm --with-limic2=/home/dftfunc/pausz/soft/limic2
...
> /sbin/lsmod | grep limic
limic                   5061  0
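
The module is loaded, so a missing or inaccessible device node seems the
more likely culprit. If /dev/limic does not exist, a sketch for creating
it by hand, assuming udev did not create it automatically and using the
major number the module registered in /proc/devices:

# Look up the major number registered by the limic module
grep limic /proc/devices
# Create the device node as root (substitute the major number found above)
mknod /dev/limic c <major> 0
# Make it accessible to user jobs
chmod 666 /dev/limic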


Tibor

