[mvapich-discuss] Is negative host_id number from gethostbyname() a problem?

Dhabaleswar Panda panda at cse.ohio-state.edu
Thu Oct 30 21:45:31 EDT 2008


> Yes, we are aware of the QLogic PSM interface and your make.mvapich.psm script.

Thanks.

>  We chose GEN2 for two reasons:
> 1. The OFED stack seems to work fine with our Lustre system.  We had a
> tough time getting Lustre to talk to the cluster via IB; with this OFED
> GEN2 stack, we can do it.
> 2. Our InfiniPath card is old (or ancient!), about 3 years old, from
> before QLogic bought InfiniPath.  I am not sure PSM will work on it.

Thanks for letting us know your objectives.

We have a similar older setup here (in addition to the newer cards and
setups). PSM and MVAPICH-PSM work on these cards.

> Our application (seismic codes solving wave equations) is I/O intensive; we
> believe we can sacrifice a little MPI performance and potentially gain
> more from I/O performance with Lustre on IB. Of course, if PSM works with
> the Lustre driver too, this would be a perfect solution.

I am not sure whether PSM works with Lustre or not. Perhaps somebody from
QLogic can answer this.

Thanks,

DK

> Thank you very much for all your help.
>
> -- Terrence
>
>
> Dhabaleswar Panda <panda at cse.ohio-state.edu>
> 10/30/2008 02:09 PM
>
> To: Terrence.LIAO at total.com
> cc: mvapich-discuss at cse.ohio-state.edu, Brian Stevens <brian at stevens.com>,
>     <John.WANG at total.com>
> Subject: Re: [mvapich-discuss] Is negative host_id number from
> gethostbyname() a problem?
>
> Hi Terrence,
>
> Thanks for your report. We are taking a look at it. It looks like you are
> using make.mvapich.gen2 to run your applications on systems with
> InfiniPath HTX cards. We are not sure whether you are aware of a different
> interface in MVAPICH 1.1 that is specifically designed for InfiniPath
> cards. This design is built on top of the PSM layer provided by QLogic and
> delivers better performance. You need to use the make.mvapich.psm script
> for this. More details are available in the MVAPICH 1.1 user guide at the
> following URL:
>
> http://mvapich.cse.ohio-state.edu/support/mvapich_user_guide-1.1.html#x1-120004.4.3
>
>
> Thanks,
>
> DK
>
>
> On Thu, 30 Oct 2008 Terrence.LIAO at total.com wrote:
>
> > Dear Mvapich,
> >
> > I am building mvapich-1.1rc1 for an InfiniPath HTX card with CentOS 5.2
> > and OFED 1.4 using make.mvapich.gen2.  The build succeeds, but the run
> > fails: it never gets past MPI_Init() and seems to fall into an infinite
> > loop.  I checked gethostbyname() using this code:
> >
> >     struct hostent *he;
> >     he = gethostbyname(myname);
> >     int host_id = ((struct in_addr *) he->h_addr_list[0])->s_addr;
> >     printf("host_id: %d\n", host_id);
> >
> > and get
> >                 host_id: -486364992
> >
> > Could this be the root cause, and is there any advice on what I should
> > do to fix this problem?
> >
> >
> > Thank you very much.
> >
> > -- Terrence
> > --------------------------------------------------------
> > Terrence Liao, Ph.D.
> > Research Computer Scientist
> > TOTAL E&P RESEARCH & TECHNOLOGY USA, LLC
> > 1201 Louisiana, Suite 1800, Houston, TX 77002
> > Tel: 713.647.3498  Fax: 713.647.3638
> > Email: terrence.liao at total.com
> >
> >
> >
>
>
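
On the original host_id question: the negative number printed by that %d is
not, by itself, a sign of a bad address. s_addr is an unsigned 32-bit value
in network byte order, and printing it with %d reinterprets the bits as
signed, so any address whose first octet is 128 or higher will appear
negative. Below is a minimal standalone sketch (not from the original
thread; the hostname comes from argv purely for illustration) that prints
the same value in signed, unsigned, and dotted-quad form:

    /* Sketch: why s_addr can print as a negative number with %d. */
    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(int argc, char **argv)
    {
        const char *name = (argc > 1) ? argv[1] : "localhost";
        struct hostent *he = gethostbyname(name);
        if (he == NULL || he->h_addr_list[0] == NULL) {
            fprintf(stderr, "gethostbyname(%s) failed\n", name);
            return 1;
        }

        struct in_addr addr;
        memcpy(&addr, he->h_addr_list[0], sizeof(addr));

        /* s_addr is unsigned and in network byte order; %d treats the
         * same bits as signed, so high addresses look negative. */
        printf("as signed (%%d):   %d\n", (int)addr.s_addr);
        printf("as unsigned (%%u): %u\n", (unsigned int)addr.s_addr);
        printf("dotted quad:      %s\n", inet_ntoa(addr));
        return 0;
    }

Compiling this and running it against one of the compute-node hostnames
would show whether the resolved address itself is sane; the signed printout
alone does not indicate a problem.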


