[mvapich-discuss] Infiniband - how do I know that it is working?

Ju JiaJia jujj603 at gmail.com
Thu May 31 20:18:48 EDT 2012


To Devendar:
   I have run with a bigger dataset and more nodes, and more of a difference shows.

To Mike:
   I'll try collectl. When I run with OpenMPI, I use the -mca btl switch to
select the network explicitly. I think OpenMPI's tcp BTL is using
IPoIB (when IB is available, as in my case), since I can see the RX/TX
counters of eth0 and ib0 change in ifconfig's output.
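For reference, this is roughly how I compare the two interfaces (a sketch;
the interface names eth0/ib0 are from my nodes and may differ on yours):

# read per-interface byte counters before and after an MPI run;
# the interface that carried the traffic shows the big delta
cat /sys/class/net/eth0/statistics/rx_bytes /sys/class/net/eth0/statistics/tx_bytes
cat /sys/class/net/ib0/statistics/rx_bytes  /sys/class/net/ib0/statistics/tx_bytes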

To teng:
   I have run on 4 nodes with a bigger dataset, and more of a difference shows.

To Devendar, Mike, teng:
   Thanks for your replies, very helpful.


On Fri, Jun 1, 2012 at 6:18 AM, teng ma <xiaok1981 at gmail.com> wrote:

> 2 processes on 2 nodes (2*1) may be too few to reach the communication
> bound. If your nodes are multicore, you can spawn 16 nodes * some number
> of MPI processes per node. That traffic may produce some performance
> difference between IB and Ethernet.
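> For example, something like this (assuming mfile lists 16 nodes with 8
> cores each; -ppn sets the number of processes per node):
>
> mpiexec -f mfile -ppn 8 -n 128 ./xhpl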
>
> Teng
> On Thu, May 31, 2012 at 12:16 PM, Ju JiaJia <jujj603 at gmail.com> wrote:
>
>> I have run osu_latency and osu_bw with both mvapich2 and mpich2 on two
>> nodes. Here are my test results (osu_latency, usec):
>>
>> Size     MVAPICH2 (InfiniBand)   MPICH2 (Ethernet)
>> 0        1.45                    15.16
>> 1        1.49                    15.25
>> 2        1.49                    15.15
>> 4        1.49                    15.15
>> 8        1.50                    15.17
>> 16       1.53                    15.17
>> 32       1.57                    16.46
>> 64       1.66                    16.88
>> 128      1.83                    17.80
>> 256      2.90                    20.39
>> 512      3.14                    26.06
>> 1K       3.80                    47.50
>> 2K       4.89                    193.97
>> 4K       5.99                    228.97
>> 8K       7.85                    233.12
>> 16K      12.51                   241.33
>> 32K      18.08                   432.31
>> 64K      28.70                   675.25
>> 128K     50.02                   1429.21
>> 256K     92.57                   2582.83
>> 512K     177.65                  4693.17
>> 1M       347.75                  9110.47
>> 2M       688.12                  17942.07
>> 4M       1373.90                 35609.21
>>
>> Here are the HPL results on two nodes (columns: T/V, N, NB, P, Q, Time, Gflops):
>> HPL.out.ethernet.211: WR00L2C2  20000  128  2  1  1686.40  3.163e+00
>> HPL.out.ib.211:       WR00L2C2  20000  128  2  1  1639.99  3.252e+00
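>> (Sanity check on the numbers: HPL performs roughly 2/3*N^3 flops, i.e.
>> about 5.33e12 for N=20000; 5.33e12 / 1686.40 s ~ 3.16 Gflops and
>> 5.33e12 / 1639.99 s ~ 3.25 Gflops, consistent with the reported values.)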
>>
>> run script:
>> export MV2_ENABLE_AFFINITY=0
>> PATH=/gos4/user39/jujj/program-files/mvapich2/bin:$PATH
>>
>> # run both builds, pinning each rank with taskset
>> mpiexec -f mfile -n 2 taskset -c 0 ./xhpl > HPL.out.ib.211 2>&1
>> mpiexec -f mfile -n 2 taskset -c 0 ./xhpl_ethernet > HPL.out.ethernet.211 2>&1
>>
>> As you can see, there is no big difference. I tested NAMD as well; again
>> no big difference. Are there any tools like netstat for InfiniBand, so I
>> can see whether connections are established and which network is being
>> used? Or does MVAPICH2 support some way of showing this, e.g. a log?
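>> One thing I plan to try is reading the HCA port counters before and after
>> a run (a sketch, assuming the infiniband-diags tools are installed;
>> perfquery -x prints the extended 64-bit counters):
>>
>> perfquery -x    # note PortXmitData / PortRcvData before the run
>> mpiexec -f mfile -n 2 taskset -c 0 ./xhpl > HPL.out.ib.211 2>&1
>> perfquery -x    # these counters should have grown if the traffic used IB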
>>
>> On Thu, May 31, 2012 at 11:43 AM, Dhabaleswar Panda <
>> panda at cse.ohio-state.edu> wrote:
>>
>>> After installing MVAPICH2, you can run the OSU MPI Micro-Benchmarks to
>>> verify that your installation is correct. You can compare your numbers
>>> with the performance numbers/graphs available at the MVAPICH site. After
>>> that you can carry out your application-level study.
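>>> For example, with a host file listing your two nodes (the file name and
>>> benchmark paths below are placeholders; use your own):
>>>
>>> mpiexec -f hosts -n 2 ./osu_latency
>>> mpiexec -f hosts -n 2 ./osu_bw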
>>>
>>> DK
>>>
>>> On Thu, 31 May 2012, Ju JiaJia wrote:
>>>
>>> > Hi all:
>>> >
>>> > I am currently running HPL built with mvapich2, which uses InfiniBand,
>>> > but it shows no apparent difference in performance compared to a
>>> > build using mpich2, which uses Ethernet.
>>> >
>>> > There should be some improvement in performance, so I doubt whether
>>> > InfiniBand is working. Does anyone know how to check whether
>>> > InfiniBand is working?
>>> >
>>> >
>>> > Ju JiaJia
>>> >
>>>
>>>
>>
>>
>>
>