[mvapich-discuss] mvapich2 and osu benchmarks using Intel NetEffects NIC

Matthew Koop koop at cse.ohio-state.edu
Thu Apr 30 14:12:56 EDT 2009


Hi Hoot,

I think the issue here is that the card is not being detected as an iWARP
device, so it is taking the InfiniBand wireup protocol path.
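
A quick way to confirm is to check what the verbs layer reports for the
adapter (a minimal sketch; ibv_devinfo ships with libibverbs in OFED):

  # An iWARP adapter should report "transport: iWARP" here; if it
  # shows "InfiniBand", the InfiniBand wireup path will be taken.
  $ ibv_devinfo | grep -i transport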

You'll want to consult section 5.2.5 of the user guide for more
information on using an iWARP device in MVAPICH2:

http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-1.2.html
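
As a rough sketch of what that section covers (I'm recalling the
parameter names from the 1.2 guide, so please verify them there):

  # RDMA CM needs the IP address of the iWARP interface on each node
  # in /etc/mv2.conf (the address below is just a placeholder).
  $ echo 192.168.1.10 > /etc/mv2.conf

  # Then enable RDMA CM based connection setup and iWARP mode:
  $ mpiexec -n 2 -env MV2_USE_RDMA_CM 1 -env MV2_USE_IWARP_MODE 1 ./osu_latency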

That said, I've been told there are still stability problems with the
RDMA CM connection setup in the NetEffect driver on anything less than
the latest OFED release. At least for now, the best approach may be to
use the uDAPL driver for NetEffect.
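
If you try the uDAPL route, it would look roughly like this (flag and
provider names from memory, so check the user guide and your
/etc/dat.conf before relying on them):

  # Build MVAPICH2 against uDAPL instead of the Gen2/verbs path:
  $ ./configure --with-rdma=udapl && make && make install

  # At run time, pick the DAPL provider that matches the NetEffect
  # entry in /etc/dat.conf, e.g.:
  $ mpiexec -n 2 -env MV2_DAPL_PROVIDER OpenIB-cma ./osu_latency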

Matt

On Thu, 30 Apr 2009, gossips J wrote:

> This error seems to be specific to the provider's verbs API
> implementation. It looks to me like the QP hit an error while moving to
> the RTS state; the cause could be a bad card/firmware combination or a
> bad set of drivers.
>
> I would suggest going through the provider's code and checking whether
> a newer version has been released...
>
> As DK mentioned, the Intel NetEffect cards have gone through many
> changes, so you should get the latest driver release from Intel and
> test with it.
>
> Hopefully that solves the problem.
>
> -polk.
>
> On 4/30/09, Hoot Thompson <hoot at ptpnow.com> wrote:
> >
> > Thanks for the quick response.  Can you tell me what the error message means?
> >
> > Hoot
> >
> > -----Original Message-----
> > From: Dhabaleswar Panda [mailto:panda at cse.ohio-state.edu]
> > Sent: Thursday, April 30, 2009 9:16 AM
> > To: Hoot Thompson
> > Cc: mvapich-discuss at cse.ohio-state.edu
> > Subject: Re: [mvapich-discuss] mvapich2 and osu benchmarks using Intel
> > NetEffects NIC
> >
> > We tested MVAPICH2 with the original NetEffect cards a while back, and
> > they were working fine. As you know, NetEffect cards and drivers have
> > gone through multiple changes in recent years, especially after Intel's
> > acquisition of NetEffect. You may check with Intel about the latest
> > status of the drivers for these cards.
> >
> > Thanks,
> >
> > DK
> >
> > On Thu, 30 Apr 2009, Hoot Thompson wrote:
> >
> > > Has there been any work done and/or experience with using mvapich2 and
> > > Intel NetEffect 10 Gigabit NIC cards as the communication fabric?  If
> > > so, any setup/configuration suggestions for a Linux environment would
> > > be appreciated.  No matter what I try, when I execute one of the
> > > OSU benchmarks I get the following error...
> > >
> > >
> > > client1nccs:~/mvapich2-1.2p1/osu_benchmarks # mpiexec -n 2
> > > ./osu_latency
> > > 0: Starting MPI
> > > 0: [ring_startup.c:301] error(22): Could not modify boot qp to RTS
> > > 1: [ring_startup.c:301] error(22): Could not modify boot qp to RTS
> > > rank 1 in job 4  client1nccs_37672   caused collective abort of all ranks
> > >   exit status of rank 1: killed by signal 9
> > > rank 0 in job 4  client1nccs_37672   caused collective abort of all ranks
> > >   exit status of rank 0: killed by signal 9
> > >
> > >
> > >
> > > Thanks in advance.....
> > >
> > >



