[mvapich-discuss] Bandwidth on single hca dual port multirail configuration

Dhabaleswar Panda panda at cse.ohio-state.edu
Wed Mar 4 19:11:47 EST 2009


I think you posted similar questions on other mailing lists and got some
answers. You need to examine several things to see what is happening on
your system.

- What is the speed of your ConnectX card - SDR or DDR?

- What is your platform (Intel or AMD)? What is the memory bandwidth
  available on this platform? Can it support two parallel streams of
  SDR or DDR IB communication?

- Which version of MVAPICH2 are you using? Which interface of MVAPICH2 are
  you using - OpenFabrics-IB or uDAPL? The OpenFabrics-IB interface supports
  the multi-rail option, so you should be able to use multiple ports or
  adapters. The uDAPL interface supports only a single port/adapter.

- How much performance do you get if you use only one port? Do the numbers
  differ when you use one port vs. the other port? (Example commands follow
  this list.)

- You seem to be using the OSU put bandwidth test. This reports the bandwidth
  achieved through MPI one-sided put operations. Did you try the regular OSU
  bandwidth test (which shows the performance of two-sided operations)? Do
  you see any performance difference?
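
As a sketch of how such comparison runs could look (benchmark binary names
vary across OSU Micro-Benchmarks releases, and the node names below are taken
from your posted command line):

  # Baseline: one HCA, one port, two-sided bandwidth test
  mpirun_rsh -ssh -np 2 node02 node01 MV2_NUM_HCAS=1 MV2_NUM_PORTS=1 ./osu_bw

  # Same two-sided test with both ports enabled (multi-rail)
  mpirun_rsh -ssh -np 2 node02 node01 MV2_NUM_HCAS=1 MV2_NUM_PORTS=2 MV2_NUM_QP_PER_PORT=1 ./osu_bw

  # One-sided put bandwidth test with both ports, for comparison
  mpirun_rsh -ssh -np 2 node02 node01 MV2_NUM_HCAS=1 MV2_NUM_PORTS=2 MV2_NUM_QP_PER_PORT=1 ./osu_put_bw

Comparing the single-port and dual-port numbers for both tests should show
whether the second rail is being used at all, and whether any limitation is
specific to the one-sided path or comes from elsewhere (e.g., memory
bandwidth).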

If you systematically analyze the problem, you should be able to find out
what is going on.

DK

On Thu, 5 Mar 2009, Jie Cai wrote:

> We have a cluster with single dual-port ConnectX HCAs installed, and we are
> trying to build a dual-port multirail IB cluster.
>
> I have tried running the OSU put bandwidth test on the cluster with MVAPICH2.
>
> mpirun_rsh -ssh -np 2 node02 node01 MV2_NUM_HCAS=1 MV2_NUM_PORTS=2
> MV2_NUM_QP_PER_PORT=1 ./osu_bandwidth
>
> However, I didn't see any bandwidth improvement. The peak bandwidth I
> got for the test was 1458.93 MB/s, which is far below my expectation
> (2.5 GB/s).
>
> Does anyone know what's going on?
>
> --
> Jie Cai
>