[mvapich-discuss] core dump on mpi_init with ofed 2

David Minor david-m at orbotech.com
Tue Jul 31 02:41:08 EDT 2007


Hi Wei,
I'm using 1.0 beta, but the same problem occurs with 0.9.8, both p3 and the version that ships with the OFED 1.2 release. I compiled using the make.mvapich2.ofa script. I haven't specified any environment variables. I didn't change any of the scripts, except to set PREFIX before compiling. I reproduced the problem with a trivial program running on 2 nodes. I didn't see the problem when running over Ethernet with 0.9.8.
Thanks,
David
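[For context: the backtrace below shows the crash inside MPI::Init, so a trivial reproducer need only initialize and finalize MPI. The following is an illustrative sketch, not the actual test program (BandwidthSingleThread.cpp) from the report; it uses the MPI-2 C++ bindings visible in the backtrace.]

```cpp
// Minimal sketch of a reproducer: the report indicates the crash
// happens inside MPI::Init itself, before user code runs.
#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[]) {
    MPI::Init(argc, argv);                      // backtrace frame #9: crash occurs in here
    int rank = MPI::COMM_WORLD.Get_rank();      // never reached if Init dumps core
    std::cout << "rank " << rank << " initialized" << std::endl;
    MPI::Finalize();
    return 0;
}
```

Launched on two nodes with something like `mpirun -np 2 ./reproducer` (exact launcher and hostfile setup depend on the local installation).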

-----Original Message-----
From: wei huang [mailto:huanwei at cse.ohio-state.edu] 
Sent: Monday, July 30, 2007 4:36 PM
To: David Minor
Cc: mvapich-discuss at cse.ohio-state.edu
Subject: Re: [mvapich-discuss] core dump on mpi_init with ofed 2

Hi David,

Thanks for letting us know the problem. Could you please tell us more
information so we can look into this problem?

1) Which version of mvapich2 are you using? Is it 0.9.8? The latest
version of 0.9.8 is mvapich2-0.9.8p3. Also, we have just released
mvapich2-1.0-beta. You are welcome to try these two versions and let us
know if your problem is reproducible there.

2) Are you using native IB verbs or uDAPL?

3) Have you specified any environment variables?

4) Did you use our default build scripts, or did you make any changes
to them?

5) On how many processes do you see the problem? How many processes per
physical node?

Thanks.

Regards,
Wei Huang

774 Dreese Lab, 2015 Neil Ave,
Dept. of Computer Science and Engineering
Ohio State University
OH 43210
Tel: (614)292-8501


> >
> > #0  0xb7d5aaf8 in MPIDI_CH3I_MRAILI_Cq_poll ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #1  0xb7d44123 in MPIDI_CH3I_read_progress ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #2  0xb7d43d4a in MPIDI_CH3I_Progress ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #3  0xb7d050ca in MPIC_Wait ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #4  0xb7d05460 in MPIC_Sendrecv ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #5  0xb7cfe634 in MPIR_Allgather ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #6  0xb7cff305 in PMPI_Allgather ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #7  0xb7d0ffd8 in create_2level_comm ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #8  0xb7d21bd4 in PMPI_Init ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #9  0xb7cdff08 in MPI::Init ()
> >    from /home/david-m/devmpi/bin/linux/CentOS/g++4.1.1/optimized/libMPIServices.so
> > #10 0x0804978a in main (argc=-1081728512, argv=0x400)
> >     at /home/david-m/devmpi/grape/mpi/test/BandwidthSingleThread.cpp:63
> >
> > -David Minor
> >
> > Orbotech
> >
