[mvapich-discuss] Performance difference in MPI_Allreduce calls between MVAPICH2-GDR and OpenMPI

Awan, Ammar Ahmad awan.10 at buckeyemail.osu.edu
Wed Jan 23 14:13:48 EST 2019


Hi Yussuf,

Sorry to hear that you are seeing performance degradation. I have a few questions and suggestions.

Can you kindly let us know if this is a DGX-2 system? If not, please share some more details like the GPU topology and the availability of NVLink(s) on your system.

We have some new designs for the DGX-2 system that will be available in the next MVAPICH2-GDR release. The new designs provide much better performance.

In the meantime, is it possible for us to get access to your system? This will enable us to help you in a better and faster manner.

Thanks,
Ammar


On Tue, Jan 22, 2019 at 8:13 PM Yussuf Ali <yussuf.ali at jaea.go.jp> wrote:
Dear MVAPICH developers and users,

In our software we noticed a performance degradation in the MPI_Allreduce calls when using MVAPICH2-GDR compared to OpenMPI.
The software (a Krylov solver) runs several iterations, and in each iteration data is reduced twice using MPI_Allreduce.
The send and receive buffers are both allocated in device memory on the GPU. We measured the total time spent in the MPI_Allreduce calls.
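
For reference, here is a minimal sketch of the kind of timing loop we use (the element counts, datatype, and iteration count below are illustrative only; they approximate the 720-byte and 1,160-byte messages assuming MPI_DOUBLE elements, while the real solver uses its own buffers):

/* Illustrative sketch: time two small MPI_Allreduce calls on GPU device
 * buffers, as done once per solver iteration. Requires a CUDA-aware MPI. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n1 = 90;        /* 90 doubles  ~= 720 bytes (illustrative)   */
    const int n2 = 145;       /* 145 doubles ~= 1,160 bytes (illustrative) */
    const int iters = 1000;   /* illustrative iteration count              */

    /* Send/receive buffers live in GPU device memory, as in the solver. */
    double *sbuf, *rbuf;
    cudaMalloc((void **)&sbuf, n2 * sizeof(double));
    cudaMalloc((void **)&rbuf, n2 * sizeof(double));
    cudaMemset(sbuf, 0, n2 * sizeof(double));

    double t1 = 0.0, t2 = 0.0;
    for (int it = 0; it < iters; it++) {
        double t = MPI_Wtime();
        MPI_Allreduce(sbuf, rbuf, n1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        t1 += MPI_Wtime() - t;

        t = MPI_Wtime();
        MPI_Allreduce(sbuf, rbuf, n2, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        t2 += MPI_Wtime() - t;
    }

    if (rank == 0)
        printf("Allreduce #1: %.3f s, Allreduce #2: %.3f s\n", t1, t2);

    cudaFree(sbuf);
    cudaFree(rbuf);
    MPI_Finalize();
    return 0;
}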

16 GPU case (V100)

MVAPICH2-GDR(2.3)
1. MPI_Allreduce: 0.27 seconds
2. MPI_Allreduce: 1.9 seconds

OpenMPI
1. MPI_Allreduce: 0.10 seconds
2. MPI_Allreduce: 0.19 seconds

The message sizes are:
1. MPI_Allreduce: 720 bytes
2. MPI_Allreduce: 1,160 bytes

Are there any parameters to tune the MPI_Allreduce performance in MVAPICH2-GDR?

Thank you for your help,
Yussuf
_______________________________________________
mvapich-discuss mailing list
mvapich-discuss at cse.ohio-state.edu
http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss

