[mvapich-discuss] CUDA-aware MPI performance
feiyulv at mail.ustc.edu.cn
Wed Feb 4 10:16:09 EST 2015
Hi all,
I ran into a strange problem when transferring data between different GPUs within one server: the transfer speed appears to be influenced by GPU compute load. The MPI transfer bandwidth drops to 4.3 GB/s when the GPU is busy, but reaches 5.3 GB/s when the GPU is idle. Any suggestions would be appreciated.
PS: The above bandwidth values were measured with MV2_CUDA_IPC=1. When I set MV2_CUDA_IPC=0, performance dropped even more severely, especially between peer-to-peer (P2P) capable GPUs.
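For reference, intra-node device-to-device bandwidth like the numbers above is commonly measured with the OSU micro-benchmarks shipped alongside MVAPICH2. A minimal sketch of such a run (hostnames, process placement, and the benchmark install path are assumptions; `D D` selects device-resident send and receive buffers):

```shell
# Sketch: measure GPU-to-GPU MPI bandwidth with the OSU osu_bw benchmark.
# Assumes MVAPICH2 built with CUDA support and two ranks pinned to two GPUs
# on the same node; adjust the launcher and paths to your setup.

# With CUDA IPC enabled (default for intra-node P2P-capable GPUs):
MV2_USE_CUDA=1 MV2_CUDA_IPC=1 \
  mpirun -np 2 ./osu_bw D D

# With CUDA IPC disabled, to compare against the slower path:
MV2_USE_CUDA=1 MV2_CUDA_IPC=0 \
  mpirun -np 2 ./osu_bw D D
```

Comparing the two runs while a compute kernel is active on one of the GPUs should reproduce the busy-vs-idle bandwidth gap described above.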
Thank you
feiyu