[mvapich-discuss] mvapich2 and osu benchmarks using Intel NetEffects NIC

Hoot Thompson hoot at ptpnow.com
Thu Apr 30 14:17:19 EDT 2009


thanks!

  _____  

From: gossips J [mailto:polk678 at gmail.com] 
Sent: Thursday, April 30, 2009 1:18 PM
To: Hoot Thompson
Cc: Dhabaleswar Panda; mvapich-discuss at cse.ohio-state.edu
Subject: Re: [mvapich-discuss] mvapich2 and osu benchmarks using Intel
NetEffects NIC


This error seems to be specific to the provider's verbs API implementation.
It looks to me as though the QP hit an error while moving to the RTS state;
the reason could be a bad card/firmware combination or a bad set of drivers.
 
I would suggest going through the provider's code and checking whether a
newer version has been released...
 
As DK mentioned, the Intel NetEffect cards have gone through many changes, so
you should get the latest driver release from Intel and test with that.
 
Hopefully that will solve the problem.

-polk.
 
On 4/30/09, Hoot Thompson <hoot at ptpnow.com> wrote: 

Thanks for the quick response.  Can you tell me what the error message means?

Hoot

-----Original Message-----
From: Dhabaleswar Panda [mailto:panda at cse.ohio-state.edu]
Sent: Thursday, April 30, 2009 9:16 AM
To: Hoot Thompson
Cc: mvapich-discuss at cse.ohio-state.edu
Subject: Re: [mvapich-discuss] mvapich2 and osu benchmarks using Intel
NetEffects NIC

We tested MVAPICH2 with the original NetEffect cards a while back, and they
were working fine. As you know, NetEffect cards and drivers have gone through
multiple changes in recent years, especially after Intel's acquisition of
NetEffect. You may check with Intel about the latest status and drivers for
these cards.

Thanks,

DK

On Thu, 30 Apr 2009, Hoot Thompson wrote:

> Has there been any work done and/or experience with using mvapich2 and
> Intel NetEffects 10-Gigabit NIC cards as the communication fabric?  If
> so, any setup/configuration suggestions for a Linux environment would
> be appreciated.  No matter what I try, when I execute one of the
> OSU benchmarks I get the following error...
>
>
> client1nccs:~/mvapich2-1.2p1/osu_benchmarks # mpiexec -n 2
> ./osu_latency
> 0: Starting MPI
> 0: [ring_startup.c:301] error(22): Could not modify boot qp to RTS
> 1: [ring_startup.c:301] error(22): Could not modify boot qp to RTS
> rank 1 in job 4  client1nccs_37672   caused collective abort of all ranks
>   exit status of rank 1: killed by signal 9
> rank 0 in job 4  client1nccs_37672   caused collective abort of all ranks
>   exit status of rank 0: killed by signal 9
>
>
>
> Thanks in advance.....
>
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>








