[mvapich-discuss] Bandwidth on single hca dual port multirail configuration

Jie Cai Jie.Cai at cs.anu.edu.au
Wed Mar 4 19:13:42 EST 2009


Matthew Koop wrote:
> Hi Jie,
>
> You are running into the limitation of the PCIe 1.1 bus here.  Even a
> single port on a faster bus (e.g. ConnectX on PCIe 2.0) can achieve higher
> bandwidth than a single port on PCIe 1.1.
>
> I hope this helps,
>
> Matt
>   

Thanks for the reply. The workstation I am using is a Sun Ultra 24,
which has two x16 PCIe 2.0 slots; the HCA is installed in one of them.

The theoretical system bus bandwidth would be ~10 GB/s (according to the
data sheet; I haven't measured it myself yet).
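
For what it's worth, here are my back-of-the-envelope numbers. I am assuming
the ~10 GB/s figure is the raw x16 Gen2 signaling rate, and that the ConnectX
card itself presents an x8 PCIe interface (please correct me if that is wrong):

  PCIe 2.0 x16: 16 lanes x 5 GT/s = 80 Gbit/s raw; after 8b/10b -> 64 Gbit/s = 8 GB/s per direction
  PCIe 2.0 x8:  half of that, i.e. ~4 GB/s per direction
  IB DDR 4x:    4 lanes x 5 Gbit/s = 20 Gbit/s raw -> 16 Gbit/s data = 2 GB/s per port
  two ports:    ~4 GB/s theoretical, so ~2.5-3 GB/s seemed a reasonable target

Even at x8, the slot should comfortably feed both DDR ports.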

So the system bus may not be the bottleneck.
Are there other factors that could affect this?
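
One thing I still want to rule out is the negotiated PCIe link itself.
Something like the following should show it (assuming standard Linux/OFED
tools are installed; the PCI address below is just a placeholder for my box):

  # find the HCA's PCI address (Mellanox vendor ID is 15b3)
  lspci -d 15b3:

  # compare negotiated link state (LnkSta) against capability (LnkCap);
  # a link trained at 2.5GT/s instead of 5GT/s halves the bus bandwidth
  sudo lspci -vv -s 0f:00.0 | grep -i 'LnkCap\|LnkSta'

  # confirm both IB ports are ACTIVE at full width/speed
  ibv_devinfo | grep -E 'port:|state|active_width|active_speed'

If the link has trained at 2.5 GT/s x8, the bus tops out around 2 GB/s raw
(~1.5 GB/s effective), which is suspiciously close to the 1458.93 MB/s I am
seeing. Also, if I read the MVAPICH2 docs correctly, messages are only
striped across rails above a size threshold (MV2_STRIPING_THRESHOLD), so
that may be worth checking too.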

Jie
> On Thu, 5 Mar 2009, Jie Cai wrote:
>
>   
>> We have a cluster where each node has a single dual-port ConnectX HCA,
>> and we are trying to build a dual-port multirail IB cluster.
>>
>> I have run the OSU bandwidth test on the cluster with MVAPICH2:
>>
>> mpirun_rsh -ssh -np 2 node02 node01 MV2_NUM_HCAS=1 MV2_NUM_PORTS=2
>> MV2_NUM_QP_PER_PORT=1 ./osu_bandwidth
>>
>> However, I did not see any bandwidth improvement. The peak bandwidth I
>> got in the test is 1458.93 MB/s, which is far below my expectation
>> (~2.5 GB/s).
>>
>> Does anyone know what's going on?
>>
>> --
>> Jie Cai
>
>   

