[mvapich-discuss] Fail to run MPI program using MVAPICH2-1.5.1

Jonathan Perkins perkinjo at cse.ohio-state.edu
Thu Sep 30 13:59:07 EDT 2010


On Wed, Sep 29, 2010 at 9:04 PM, Ting-jen Yen <yentj at infowrap.com.tw> wrote:
>  Thanks.  I did get the backtrace of the MPI processes.
> When I ran a simple hello-world MPI program with 2 processes, both
> backtraces were almost identical, as follows (only argv in main()
> differs, so I am copying only one of them):
>
> ---------------------------------------------
> Thread 1 (Thread 0x2ab6f1524660 (LWP 19810)):
> #0  0x0000003326a0d590 in __read_nocancel () from /lib64/libpthread.so.0
> #1  0x00000000004a83ec in PMIU_readline ()
> #2  0x0000000000439fdc in PMI_KVS_Get ()
> #3  0x000000000041c1f6 in MPIDI_Populate_vc_node_ids ()
> #4  0x000000000041adbd in MPID_Init ()
> #5  0x000000000040c152 in MPIR_Init_thread ()
> #6  0x000000000040b2b0 in PMPI_Init ()
> #7  0x00000000004048e9 in main (argc=1, argv=0x7fff7a65ad48) at hello.c:15
> ------------------------------------------
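(The actual hello.c is not included in the thread; a minimal MPI hello-world along the following lines, with the MPI_Init call placed roughly where line 15 falls in the backtrace, would produce the same call path. The exact layout is an assumption.)

---------------------------------------------
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    /* The backtrace above shows the processes stuck inside MPI_Init,
       in the PMI key-value exchange, before any user communication. */
    MPI_Init(&argc, &argv);

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
---------------------------------------------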

I'm not really sure what is happening here, and the backtrace is
missing some information.  Can you rebuild the mvapich2 library with
the --enable-g=dbg configure option included?  Also, make sure that you
rebuild the MPI benchmark against the new library as well, since you're
using static libraries.
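
For example (the install prefix below is only illustrative, and any
other configure options you used for your original build should be
kept):

    ./configure --prefix=/opt/mvapich2-1.5.1-dbg --enable-g=dbg
    make && make install
    /opt/mvapich2-1.5.1-dbg/bin/mpicc -g -o hello hello.c

With debug symbols in both the library and the test program, the next
backtrace should include file and line information for the PMI calls.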

-- 
Jonathan Perkins
http://www.cse.ohio-state.edu/~perkinjo


