[mvapich-discuss] (no subject)

Hari Subramoni subramoni.1 at osu.edu
Tue Feb 23 14:39:11 EST 2016


Hello Mehmet,

As you say, the InfiniBand HCA should not have any impact on intra-node
communication performance as long as shared-memory support is enabled. I
have a few follow-up questions.

1. Did you use the same process-to-core mapping for both runs? Could you
please re-run after setting MV2_SHOW_CPU_BINDING=1 and MV2_SHOW_ENV_INFO=2?
(An example command is given below.)
2. Can you please send the output of mpiname -a?
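
For example, assuming mpirun_rsh and a hostfile listing this node
(substitute your own launcher and application binary):

    # ./hosts and ./your_app are placeholders for your hostfile and binary
    mpirun_rsh -np 16 -hostfile ./hosts MV2_SHOW_CPU_BINDING=1 MV2_SHOW_ENV_INFO=2 ./your_app
    mpiname -a

The MV2_SHOW_ENV_INFO=2 output should also show whether shared-memory
support (MV2_USE_SHARED_MEM, enabled by default) is in effect for the run.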

Thx,
Hari.

On Tue, Feb 23, 2016 at 2:33 PM, Mehmet Belgin
<mehmet.belgin at oit.gatech.edu> wrote:

> Greetings!
>
> I am troubleshooting a slowness issue on a single 16-core node. Compared
> to profiling data I collected earlier, I can very clearly see that the
> slowness is caused by MPI routines (the MPI send rate dropped from 21M/s
> to 13M/s) for the very same code. The memory and CPU profiles of the code
> look identical.
>
> I was wondering if IB problems would have any impact at all, despite the
> fact that I am not using the network (running on a single node). I would
> not expect it to be a factor, but I am asking just in case. I will now run
> a few OSU benchmarks, but I would appreciate any other suggestions you
> might have.
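>
> (I will probably start with the point-to-point tests that ship with the
> MVAPICH2 install, something along the lines of
>
>     # launcher, hosts and paths here are just placeholders for our setup
>     mpirun_rsh -np 2 localhost localhost ./osu_latency
>     mpirun_rsh -np 2 localhost localhost ./osu_bw
>
> to get intra-node latency and bandwidth numbers.)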
>
> (using mvapich2/2.1 with intel/15.0 on a 16-core Intel node)
>
> Thanks!
> -Mehmet

