[mvapich-discuss] where can I find similar env settings on mvapich to these three: MPI_COMM_MAX, MPI_TYPE_MAX and MPI_GROUP_MAX

Weikuan Yu weikuan.yu at gmail.com
Thu Jan 31 18:05:50 EST 2008


Hi, Terrence,

Thanks for the answers.

Given the large data volumes and the compelling application, I am 
curious to learn more.

Here are a few comments.
1) Although the data volume is large, the program dies at 
MPI_File_open. At that point the larger communicator, datatype, and 
buffer limits should not yet be needed, so even without the three 
parameters that raise those limits, the open itself should be safe.

2) Did you configure Panasas support when building MVAPICH? If so, did 
you see any error output from the program? Could you please post it 
here? A core dump or stack trace would be even better (a small sketch 
showing how to capture the MPI error string follows these comments).

3) It is interesting that the problem also occurred with SGI MPI and 
that increasing the three parameters solved it there. Is the failure 
really the same for both SGI MPI and MVAPICH?
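
For the error output in comment 2: one quick way to see what actually 
goes wrong is to ask MPI to return I/O errors and print the error 
string from MPI_File_open yourself. A rough, untested sketch (the file 
path is only a placeholder, substitute one on your Panasas mount):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rc, len;
        char msg[MPI_MAX_ERROR_STRING];

        MPI_Init(&argc, &argv);

        /* Have file errors returned to the caller so the real error
           code can be printed (sets the default handler for files
           opened from here on). */
        MPI_File_set_errhandler(MPI_FILE_NULL, MPI_ERRORS_RETURN);

        /* Placeholder path: replace with a file on the Panasas mount. */
        rc = MPI_File_open(MPI_COMM_WORLD, "/panfs/test/opencheck.dat",
                           MPI_MODE_CREATE | MPI_MODE_RDWR,
                           MPI_INFO_NULL, &fh);
        if (rc != MPI_SUCCESS) {
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "MPI_File_open failed: %s\n", msg);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

If MPI_File_open fails cleanly, the error string should tell us much 
more than "MPI process abnormal exit.."; if it crashes instead, a 
stack trace is really what we need.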

If you could share the I/O kernel of your program, that would be very 
helpful. Even a small stand-alone reproducer along the lines of the 
sketch below would do.
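
The sketch below follows a pattern I am only guessing at (collective 
open on MPI_COMM_WORLD, each rank writing one contiguous block, then 
close); the path, access mode, and block size are made up rather than 
taken from your code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define BLOCK_COUNT (1 << 20)   /* 1 Mi doubles per rank, ~8 MB */

    int main(int argc, char **argv)
    {
        int rank, rc, i;
        MPI_File fh;
        MPI_Offset offset;
        double *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf = (double *) malloc(BLOCK_COUNT * sizeof(double));
        for (i = 0; i < BLOCK_COUNT; i++)
            buf[i] = (double) rank;

        /* Collective open; mode and path are guesses. */
        rc = MPI_File_open(MPI_COMM_WORLD, "/panfs/test/io_kernel.dat",
                           MPI_MODE_CREATE | MPI_MODE_WRONLY,
                           MPI_INFO_NULL, &fh);
        if (rc != MPI_SUCCESS)
            MPI_Abort(MPI_COMM_WORLD, rc);

        /* Each rank writes its block at a disjoint file offset. */
        offset = (MPI_Offset) rank * BLOCK_COUNT * sizeof(double);
        rc = MPI_File_write_at(fh, offset, buf, BLOCK_COUNT, MPI_DOUBLE,
                               MPI_STATUS_IGNORE);
        if (rc != MPI_SUCCESS)
            MPI_Abort(MPI_COMM_WORLD, rc);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }

If something this small already dies at the open on Panasas, it would 
make a very convenient test case for us.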

--Weikuan

Terrence LIAO wrote:
> Hi, WeiKuan,
> 
> 1) What does your MPI code do? How does it die?
>    This is a finite-difference 3D wave equation solver used in 
> seismic depth imaging. It does very large file I/O: input files are 
> in the 1~5 TB range and intermediate output files are in the 
> 1~10 GB range. MPI-IO is used. The code dies with something like 
> "MPI process abnormal exit.." right after it calls MPI_File_open().
> 
> 2) What system are you running on? What file system are you using?
>     The cluster is AMD Opteron dual-core with IB and Panasas, using 
> PGI 7.1 and MVAPICH 1.0 beta.
> 
> 3) What are the three parameters for? How did they solve your problem?
>     You can find more information by googling those three parameters. 
> They affect how cached (or buffered) memory is used. We think the 
> code dies in MPI_File_open when it tries to allocate buffer memory; 
> those three parameters increase the buffer sizes.
> 
> Thank you very much.
> 
> -- Terrence
> --------------------------------------------------------
> Terrence Liao
> TOTAL E&P RESEARCH & TECHNOLOGY USA, LLC
> Email: terrence.liao at total.com
> 

