[mvapich-discuss] errors when using MVAPICH on P4
Cluster@OSC
YANG YUAN
yuan.65 at osu.edu
Wed Jun 20 10:56:06 EDT 2007
Hi Abhinav,
Thanks for your reply. I tried a dummy program, and it works fine with both mpicc and icpc. The only reason I am using icpc is that a shared open-source library I am using cannot be compiled and linked together with my MPI program when using mpicc or mpicxx. I suspect this is the major problem causing the failure of my program, but I don't know how to fix it.
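(For context, the kind of dummy program discussed in this thread — MPI_Init followed by MPI_Finalize, with no other communication or computation — is only a few lines. This is a generic sketch, not the poster's actual code, and it assumes a working MPI installation:)

```c
/* Minimal MPI "dummy" program: initialize, optionally report rank 0,
   then finalize -- no other communication or computation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("MPI initialized and finalized cleanly\n");

    MPI_Finalize();
    return 0;
}
```

(If even this fails at startup with the EVAPI_list_hcas error, the problem is in the MPI/InfiniBand installation rather than the application code.)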
Another question; bear with me if it is too simple. I am not sure whether I have to make local copies of the shared library I am using. Since it is a distributed system, I don't think the library is accessible from all processors. Does MPI automatically do the copying?
Again, much appreciated!
yang
----- Original Message -----
From: Abhinav Vishnu <vishnu at cse.ohio-state.edu>
Date: Wednesday, June 20, 2007 10:43 am
Subject: Re: [mvapich-discuss] errors when using MVAPICH on P4 Cluster at OSC
> Hi,
>
> Thanks for using MVAPICH and reporting the problem to us.
> >
> > Hi there,
> >
> > I am a new user of MVAPICH. Currently I am working on a parallel
> > application using MPI on the P4 cluster at OSC. The following error
> > is what I've experienced after successfully compiling and linking.
> >
> > /***
> > [0] Abort: Error: EVAPI_list_hcas: Error in an underlying O/S call
> > at line 258 in file viaparam.c
> > mpiexec: Error: read_full: EOF, only 0 of 4 bytes.
> > ***/
> >
> > Could you please tell me how to get rid of this error? Many thanks.
> >
> > Yang
> >
> > p.s. I couldn't use mpicc, mpiCC or mpicxx to compile my code, so
> > currently the code is compiled and linked using icpc together with
> > $MPI_LIBS and $MPI_CFLAGS.
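(As a sketch, the workaround described in the p.s. — compiling with icpc directly and passing the MPI flags by hand — would look something like the following; the source file name is hypothetical, and $MPI_CFLAGS/$MPI_LIBS are assumed to be set by the OSC environment:)

```shell
# Compile and link directly with the Intel compiler instead of the
# mpicc/mpicxx wrappers, supplying the MPI include and library flags
# from the environment (hypothetical file name "myapp.cpp"):
icpc $MPI_CFLAGS -o myapp myapp.cpp $MPI_LIBS
```

(The mpicc/mpicxx wrappers normally add these same flags automatically, which is why skipping them is unusual and error-prone.)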
> >
>
> This is strange. In general, I would not expect an MPI application
> to run without being compiled with mpicc.
>
> Can you possibly try a dummy program which calls MPI_Init, follows
> it with MPI_Finalize, and does no other communication or
> computation? If you are not able to compile this program, then I
> think it would be best to contact your system administrator
> regarding the MPI installation and its usage.
>
> Thanks much,
>
> :- Abhinav
>
> >
> > *************************************
> > Yang Yuan
> > Ph.D. student in Operations Research
> > IWSE, The Ohio State University
> > 210 Baker Systems
> > 1971 Neil Ave
> > Columbus, OH 43210
> >
> >
> >
> >
> > _______________________________________________
> > mvapich-discuss mailing list
> > mvapich-discuss at cse.ohio-state.edu
> > http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>