[mvapich-discuss] CUDA-aware MPI performance

Jiri Kraus jkraus at nvidia.com
Thu Feb 5 09:44:29 EST 2015


Hi Feiyu,

Which CUDA version and GPUs are you using? Could you try running this with the NVIDIA profiler and compare the timelines of a run with CUDA IPC disabled (MV2_CUDA_IPC=0) and one with CUDA IPC enabled (MV2_CUDA_IPC=1)? I suspect that you are not getting the expected overlapping behavior; the timelines could confirm this.

You can find more information on how to use nvprof and nvvp for MPI+CUDA applications here:

http://devblogs.nvidia.com/parallelforall/cuda-pro-tip-profiling-mpi-applications/

(sorry for the advertising :-))
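For example, to get one profile per MPI rank you can prefix the application with nvprof and put the rank into the output file name (the launcher syntax and hostnames below are only placeholders; MVAPICH2 exports MV2_COMM_WORLD_RANK, which nvprof can substitute via %q{...}):

  mpirun_rsh -np 2 node1 node2 MV2_CUDA_IPC=0 nvprof -o ipc_off.%q{MV2_COMM_WORLD_RANK}.nvprof ./your_app
  mpirun_rsh -np 2 node1 node2 MV2_CUDA_IPC=1 nvprof -o ipc_on.%q{MV2_COMM_WORLD_RANK}.nvprof ./your_app

The resulting files can then be imported into nvvp to compare the two timelines side by side.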

Hope this helps

Jiri

Sent from my smartphone. Please excuse autocorrect typos.


---- feiyulv at mail.ustc.edu.cn wrote ----

Hi all,
 I ran into a strange problem when transferring data between different GPUs in one server. It seems the transfer speed is influenced by the GPU compute load: the MPI transfer bandwidth drops to 4.3 GB/s when the GPU is busy, but when the GPU is idle the bandwidth reaches 5.3 GB/s. Any suggestions would be appreciated.

PS: The above bandwidth values were measured with MV2_CUDA_IPC=0. When I set MV2_CUDA_IPC=1, the performance dropped even more, especially between peer-to-peer (P2P) GPUs.
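For reference, here is a minimal sketch of this kind of measurement (not the actual benchmark; the message size, iteration count, and dummy busy kernel are placeholders, and it assumes a CUDA-aware MPI build with one GPU per rank):

/* Minimal sketch: GPU-to-GPU MPI bandwidth between two ranks, with an
   optional dummy kernel keeping the sender's GPU busy. Assumes a
   CUDA-aware MPI (e.g. MVAPICH2 built with CUDA support). */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void busy_kernel(float *x, int n, int iters)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int k = 0; k < iters; ++k)
            x[i] = x[i] * 1.000001f + 1.0f;   /* just keeps the SMs occupied */
}

int main(int argc, char **argv)
{
    const int msg_bytes = 64 * 1024 * 1024;   /* 64 MiB message (placeholder) */
    const int reps      = 50;
    const int n         = 1 << 20;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaSetDevice(rank);                      /* one GPU per rank on the node */

    char  *buf;  cudaMalloc((void **)&buf,  msg_bytes);
    float *work; cudaMalloc((void **)&work, n * sizeof(float));

    int busy = (argc > 1 && atoi(argv[1]) != 0);
    if (busy)                                 /* launch the compute load asynchronously */
        busy_kernel<<<(n + 255) / 256, 256>>>(work, n, 1 << 16);

    double t0 = MPI_Wtime();
    for (int r = 0; r < reps; ++r) {          /* device pointers passed directly to MPI */
        if (rank == 0)
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("bandwidth: %.2f GB/s (%s)\n",
               (double)reps * msg_bytes / (t1 - t0) / 1e9,
               busy ? "GPU busy" : "GPU idle");

    cudaDeviceSynchronize();
    cudaFree(buf);
    cudaFree(work);
    MPI_Finalize();
    return 0;
}

Build it with nvcc against your MPI installation (for example nvcc -ccbin mpicc bench.cu -o bench, adjusted to your toolchain) and run two ranks on the same node, passing 1 as the first argument to add the GPU compute load.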

Thank you
feiyu

