[mvapich-discuss] Allreduce time when using MPI+OpenMP is too large compared to when using MPI alone

Sarunya Pumma sarunya at vt.edu
Wed Mar 8 00:41:00 EST 2017


Hello Mamzi,

I have additional information for you. I ran the same program using MPICH
and did not observe the behavior that I saw with MVAPICH.

I have attached the graph here:

[image: Inline image 1]

From the graph, the *OMP + MPI* time is very similar to the *MPI with 1
proc* time.
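
In case a runnable version is useful, here is the test loop assembled into a
self-contained program (a sketch: the message size, iteration count, and
timing code are illustrative, not the exact values from my runs):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  /* Spawn the OpenMP thread team before the communication loop. */
  #pragma omp parallel
  {
    printf("Number of threads %d\n", omp_get_num_threads());
  }

  const int count = 1 << 20;  /* illustrative message size (floats) */
  const int iter = 100;       /* illustrative iteration count */
  float *msg_s = malloc(count * sizeof(float));
  float *msg_r = malloc(count * sizeof(float));
  for (int i = 0; i < count; i++) msg_s[i] = 1.0f;

  /* Time the Allreduce loop and report the per-iteration average. */
  double t0 = MPI_Wtime();
  for (int i = 0; i < iter; i++)
    MPI_Allreduce(msg_s, msg_r, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
  double t1 = MPI_Wtime();

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0)
    printf("Average MPI_Allreduce time: %f s\n", (t1 - t0) / iter);

  free(msg_s);
  free(msg_r);
  MPI_Finalize();
  return 0;
}

I compile it with the OpenMP flag (-openmp in my case) and run it with 1
process per node, as in the graph above.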

Please let me know if you need more information.

Thank you very much for your time.

Best,
Sarunya

On Mon, Mar 6, 2017 at 3:40 PM, Sarunya Pumma <sarunya at vt.edu> wrote:

> Hi Mamzi,
>
> Thank you very much for your response.
>
> I used MPI_Init(&argc, &argv) in my code. In the OMP+MPI implementation,
> the OpenMP threads are running in the background. Here is my code:
>
> /* Spawn the OpenMP thread team; the OpenMP runtime typically keeps
>    the worker threads alive (often spin-waiting) after the region. */
> #pragma omp parallel
> {
>   int num = omp_get_num_threads();
>   printf("Number of threads %d\n", num);
> }
>
> for (int i = 0; i < iter; i++) {
>   MPI_Allreduce(msg_s, msg_r, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
> }
>
> Note that if I comment out the #pragma omp parallel and still compile the
> code with the -openmp flag, I observe similar performance for MPI with 1
> proc and OMP+MPI with 1 proc.
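>
> For what it's worth, the thread-aware initialization you asked about would
> look roughly like the sketch below (an illustration, not my current code; I
> call plain MPI_Init, which is equivalent to requesting MPI_THREAD_SINGLE):
>
>   int provided;
>   /* MPI_THREAD_FUNNELED: threads exist, but only the main thread makes
>      MPI calls -- this matches the Allreduce loop above. */
>   MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
>   if (provided < MPI_THREAD_FUNNELED) {
>     fprintf(stderr, "Requested thread support level not available\n");
>     MPI_Abort(MPI_COMM_WORLD, 1);
>   }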
>
> Please let me know if you need more information.
>
> Thank you very much.
>
> Best,
> Sarunya
>
>
> On Mon, Mar 6, 2017 at 3:29 PM, Bayatpour, Mamzi <
> bayatpour.1 at buckeyemail.osu.edu> wrote:
>
>>
>> Hello Sarunya,
>>
>> We've been looking into the issue you reported. However, we have not been
>> able to reproduce the performance trends you described: we observe
>> similar performance for MPI with 1 process per node and for MPI+OMP with
>> 1 process per node.
>>
>> Could you please provide more details about the application that you are
>> testing? Are you using MPI_THREAD_SINGLE or MPI_THREAD_MULTIPLE when you
>> initialize MPI (i.e., MPI_Init vs. MPI_Init_thread)? Are any OpenMP
>> threads running in the background during the MPI_Allreduce call? A small
>> reproducer could help us a lot.
>>
>> Thanks,
>> Mamzi
>>
>> ------------------------------
>> *From:* Bayatpour, Mamzi
>> *Sent:* Friday, March 3, 2017 10:12:24 PM
>> *To:* mvapich-discuss at cse.ohio-state.edu
>> *Cc:* sarunya at vt.edu
>> *Subject:* Re: [mvapich-discuss] Allreduce time when using MPI+OpenMP is
>> too large compared to when using MPI alone
>>
>>
>> Hello Sarunya,
>>
>> Thanks for reporting the issue to us. We are taking a look at it and will
>> get back to you soon.
>>
>> Thanks,
>> Mamzi
>>
>>
>>
>>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Pasted image at 2017_03_08 12_32 AM.png
Type: image/png
Size: 17285 bytes
Desc: not available
URL: <http://mailman.cse.ohio-state.edu/pipermail/mvapich-discuss/attachments/20170308/32c806c4/attachment-0001.png>

