[mvapich-discuss] build failure with --enable-xrc=yes

Hari Subramoni subramon at cse.ohio-state.edu
Wed Mar 3 12:16:43 EST 2010


Hi Tommi,

Glad to know that you got things to work on your system.

One thing I would like to mention is that XRC is intended to reduce the
memory footprint of the MPI library when scaling an application up to
thousands of processes. It is not meant to improve performance in terms
of latency or bandwidth; that will still be comparable to what the
regular RC queue pairs deliver.
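
For reference, here is a rough sketch of how XRC is typically enabled,
assuming an XRC-capable OFED stack is installed (the prefix, process
count, hostfile and binary below are just placeholders):

    # build MVAPICH2 with XRC support
    ./configure --enable-xrc=yes --prefix=/opt/mvapich2
    make && make install

    # turn XRC on at run time for a large job
    mpirun_rsh -np 2048 -hostfile ./hosts MV2_USE_XRC=1 ./app

XRC saves memory by letting processes share node-level connections
(fewer queue pairs per process), which is why the benefit only shows
up at large process counts.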

Tuning environment variables (as you are already doing by setting
MV2_USE_RDMA_FAST_PATH=1) and other parameters is the best way to get
good performance for small-scale jobs or benchmarks.
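
As a concrete example, run-time parameters can be passed on the
mpirun_rsh command line (the process count, hostfile and binary here
are placeholders):

    mpirun_rsh -np 16 -hostfile ./hosts \
        MV2_USE_RDMA_FAST_PATH=1 MV2_USE_XRC=0 ./hpcc

Which combination wins depends on the job size and the fabric, so it
is worth timing a few settings on your own system.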

Thx,
Hari.

On Thu, 25 Feb 2010, Tommi T wrote:

> --- On Tue, 2/23/10, Hari Subramoni <subramon at cse.ohio-state.edu> wrote:
> > The error code indicates that the version of OFED you have
> > installed on your system does not have support for XRC.
> >
> > Which version of OFED are you using? You can use
> > 'ofed_info' command to find this out.
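> >
> > For example (the output shown here is only illustrative; the exact
> > string depends on the install):
> >
> >   $ ofed_info | head -n 1
> >   OFED-1.4.1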
>
> Thanks for your suggestions; I managed to get further.
>
> We're using the OFED shipped with RHEL 5.4; AFAIK the version is 1.4.1-rc3.
>
> I fetched OFED 1.5.1-RC1 and installed it, which solved my --enable-xrc=yes build problem. But we still got awful latency results from the Linpack test. It seems that MV2_USE_RDMA_FAST_PATH=0 is the crucial variable for getting reasonable latency numbers out of the hpcc benchmark. MV2_USE_XRC=1 was not the variable I would have put my bets on :-/
>
> BR,
>
> Tommi