[mvapich-discuss] CUDA-aware MPI performance

feiyulv at mail.ustc.edu.cn feiyulv at mail.ustc.edu.cn
Fri Feb 6 09:41:01 EST 2015


Hi Jiri,
Thank you for your reply, and sorry for answering so late. The CUDA version is 6.5 and the GPU is an NVIDIA K20m. I can't access the machine right now because of the winter vacation. I will profile the program as soon as I get back to school. Thanks again for your suggestion.


feiyu





Hi Feiyu,

Which CUDA version and GPUs are you using? Could you try running this with the NVIDIA profiler and compare the timelines of the run with CUDA IPC disabled (MV2_CUDA_IPC=0) and CUDA IPC enabled (MV2_CUDA_IPC=1)? I suspect that you are not getting the expected overlapping behavior; the timelines could confirm this.

You can find more information on how to use nvprof and nvvp for MPI+CUDA applications here:

http://devblogs.nvidia.com/parallelforall/cuda-pro-tip-profiling-mpi-applications/

(sorry for the advertising :-))
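
To make the comparison easier to read in the nvvp timeline, you can wrap the compute and communication phases of your code in NVTX ranges. The snippet below is only a sketch with made-up names (the kernel launch is omitted, and d_send, d_recv, count and peer are placeholders, not taken from your code); link against -lnvToolsExt:

#include <mpi.h>
#include <cuda_runtime.h>
#include <nvToolsExt.h>    /* NVTX, link with -lnvToolsExt */

/* One iteration of a hypothetical compute + halo-exchange step. */
void step(float *d_send, float *d_recv, int count, int peer)
{
    nvtxRangePushA("compute");
    /* ... launch your kernels here ... */
    cudaDeviceSynchronize();
    nvtxRangePop();

    nvtxRangePushA("exchange");
    /* CUDA-aware MPI: device pointers are passed directly */
    MPI_Sendrecv(d_send, count, MPI_FLOAT, peer, 0,
                 d_recv, count, MPI_FLOAT, peer, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    nvtxRangePop();
}

With one nvprof output file per rank (nvprof -o with the rank number in the file name), you can import all ranks into a single nvvp session and see whether the IPC copies actually overlap with your kernels; the post above describes the exact command lines.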

Hope this helps

Jiri

Sent from my smartphone. Please excuse autocorrect typos.



---- feiyulv at mail.ustc.edu.cn wrote ----


Hi all,
I ran into a strange problem when transferring data between different GPUs in one server: the transfer speed seems to be influenced by the GPU compute load. The MPI transfer bandwidth drops to 4.3 GB/s when the GPU is busy, whereas it reaches 5.3 GB/s when the GPU is idle. Any suggestions would be appreciated.


PS: The bandwidth values above were measured with MV2_CUDA_IPC=0. When I set MV2_CUDA_IPC=1, the performance dropped even more severely, especially between peer-to-peer-capable GPUs.
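
For reference, here is a minimal sketch of the kind of microbenchmark that can reproduce the busy/idle difference. This is not the code used for the numbers above; the message size, repetition count and dummy kernel are made up for illustration, and it assumes two ranks on the same node with one GPU per rank:

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

/* Dummy kernel that keeps the GPU busy while the transfer runs. */
__global__ void busy_kernel(float *p, int n, int iters)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float v = p[i];
    for (int k = 0; k < iters; ++k)
        v = v * 1.0000001f + 1.0f;
    p[i] = v;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaSetDevice(rank);                        /* one GPU per rank */

    const size_t nbytes = 64u << 20;            /* 64 MB messages */
    const int    reps   = 50;
    const int    nwork  = 1 << 20;
    char  *d_buf;  cudaMalloc((void **)&d_buf,  nbytes);
    float *d_work; cudaMalloc((void **)&d_work, nwork * sizeof(float));

    /* Any command-line argument -> keep the GPU busy during the test. */
    cudaStream_t s;
    cudaStreamCreate(&s);
    if (argc > 1)
        busy_kernel<<<nwork / 256, 256, 0, s>>>(d_work, nwork, 1 << 20);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; ++i) {
        if (rank == 0)
            MPI_Send(d_buf, (int)nbytes, MPI_CHAR, 1, i, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, (int)nbytes, MPI_CHAR, 0, i, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("%.2f GB/s\n", reps * (double)nbytes / (t1 - t0) / 1e9);

    cudaStreamSynchronize(s);
    cudaStreamDestroy(s);
    cudaFree(d_buf);
    cudaFree(d_work);
    MPI_Finalize();
    return 0;
}

Running it twice, once with MV2_CUDA_IPC=0 and once with MV2_CUDA_IPC=1 exported to the ranks, and each time with and without the extra argument, should show whether the slowdown comes from the IPC copies competing with the compute kernel.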


Thank you
feiyu


