[mvapich-discuss] MVAPICH2 1.7 + LiMIC2 0.5.5

Karl Schulz karl at tacc.utexas.edu
Fri Feb 24 10:24:51 EST 2012


One other thing to check is whether the limic device node is created correctly when the kernel module is loaded. If you don't have a /dev/limic after loading the module, that alone can cause this failure (and you can resolve it with mknod).
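For example, something along these lines should confirm (and, if needed, repair) the node; the device name shown in /proc/devices and the minor number of 0 are assumptions from my setup, so double-check them against your LiMIC2 build:

/sbin/lsmod | grep limic        # module loaded?
ls -l /dev/limic                # node the library tries to open, plus its permissions
grep -i limic /proc/devices     # major number the module registered
mknod /dev/limic c <major> 0    # as root, using the major number from the line above

If the node exists but is only readable/writable by root, opening it from a user job will fail the same way, so the permissions shown by ls -l are worth a look too.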

Karl


On Feb 24, 2012, at 8:02 AM, Jonathan Perkins wrote:

> On Fri, Feb 24, 2012 at 10:35:53AM +0100, Tibor Pausz wrote:
>> Hi all,
>> 
>> I am trying to use this combination on a cluster with slurm 2.3.3, but I
>> get errors and the MPI applications do not start.
>> 
>> In the error log file:
>> PMI_Abort(1, Fatal error in MPI_Init:
>> Other MPI error
>> )
>> 
>> and in the output log:
>> LiMIC: (limic_open) file open fail
>> 
>> I configured, compiled, and installed the LiMIC2 module from the MVAPICH2
>> 1.7 distribution, and the kernel module is loaded on all nodes.
>> 
>> Greetings,
>> Tibor
> 
> Hello Tibor,
> 
> Does this also fail for a simple 2-process benchmark (osu_latency for
> example)?  Can you provide the output of the following commands on one
> of the allocated nodes?
> 
> mpiname -a
> /sbin/lsmod | grep limic
> 
> The error message from LiMIC is coming from the library and not the
> module, so I just want to double-check things here.
> 
> -- 
> Jonathan Perkins
> http://www.cse.ohio-state.edu/~perkinjo
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
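For the quick 2-process test Jonathan mentions above, both ranks need to land on the same node so the LiMIC2 (intra-node) path is actually exercised. Under slurm, something like the following is a minimal sketch; the benchmark path and the MV2_SMP_USE_LIMIC2 knob are assumptions from my own setup, so adjust to your install:

salloc -N 1 -n 2
srun -n 2 ./osu_latency
MV2_SMP_USE_LIMIC2=0 srun -n 2 ./osu_latency    # assumed runtime knob to disable LiMIC2

If the run only succeeds with LiMIC2 disabled, that points at /dev/limic (missing node or permissions) rather than at the MVAPICH2 build itself.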



