[mvapich-discuss] Announcing the release of MVAPICH2-GDR 2.2 GA
Panda, Dhabaleswar
panda at cse.ohio-state.edu
Wed Oct 26 02:15:34 EDT 2016
The MVAPICH team is pleased to announce the release of MVAPICH2-GDR
2.2.
MVAPICH2-GDR 2.2 is based on the standard MVAPICH2 2.2 release and
incorporates designs that take advantage of the GPUDirect RDMA (GDR)
technology for inter-node data movement on NVIDIA GPU clusters with
Mellanox InfiniBand interconnects. It also provides efficient
intra-node CUDA-Aware unified memory communication and support for
RDMA_CM, RoCE-V1, and RoCE-V2. Further, MVAPICH2-GDR 2.2 provides
optimized large message collectives (broadcast, reduce and allreduce)
for emerging Deep Learning frameworks.
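The defining feature of a CUDA-aware MPI library such as MVAPICH2-GDR is that applications can pass GPU device pointers directly to MPI calls, letting the library move data between GPUs (via GPUDirect RDMA across nodes) without explicit host staging. A minimal sketch of this usage pattern, assuming a CUDA-aware MPI build; the buffer size and two-rank layout are illustrative:

```c
/* Minimal CUDA-aware MPI sketch: device pointers go straight into
 * MPI calls. Assumes a CUDA-aware build such as MVAPICH2-GDR;
 * compile with the library's mpicc and link the CUDA runtime,
 * e.g.: mpicc gpu_send.c -lcudart
 */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                 /* 1M floats, illustrative */
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0) {
        cudaMemset(d_buf, 0, n * sizeof(float));
        /* Device pointer passed directly: no cudaMemcpy to a host
         * staging buffer before the send. */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Received data lands directly in GPU memory. */
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

With MVAPICH2, CUDA support is enabled at runtime via the documented MV2_USE_CUDA parameter, e.g. `mpirun_rsh -np 2 host1 host2 MV2_USE_CUDA=1 ./a.out` (hostnames here are placeholders).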
Features, Enhancements, and Bug Fixes for MVAPICH2-GDR 2.2 are listed
below.
* Features and Enhancements (since MVAPICH2-GDR 2.2rc1)
- Based on MVAPICH2 2.2
- Add support for CUDA 8.0
- Add support for Pascal (P100) GPU
- Efficient support for CUDA-aware large message collectives
targeting Deep Learning frameworks
- Efficient support for Bcast
- Efficient support for Reduce
- Efficient support for Allreduce
- Introduce GPU-based tuning framework for Reduce collective on
Broadwell+EDR+K80 based systems
* Bug Fixes (since MVAPICH2-GDR 2.2rc1):
- Correctly guard core-direct support for GPU-based non-blocking
collectives
- Minor fixes to GPU-based tuning framework for Wilkes and CSCS
- Fix compilation warnings
The MVAPICH2-GDR 2.2 release requires the following software to be
installed on your system:
- Mellanox OFED 2.1 or later
- NVIDIA Driver 331.20 or later
- NVIDIA CUDA Toolkit 6.0 or later
- Plugin module to enable GPUDirect RDMA
- (Strongly recommended) NVIDIA GDRCOPY module
Further, MVAPICH2-GDR 2.2 provides support on GPU clusters using
regular OFED (without GPUDirect RDMA).
To download MVAPICH2-GDR 2.2, the associated user guide, and sample
performance numbers, please visit the following URL:
http://mvapich.cse.ohio-state.edu
All questions, feedback, bug reports, hints for performance tuning,
and enhancements are welcome. Please post them to the mvapich-discuss
mailing list (mvapich-discuss at cse.ohio-state.edu).
Thanks,
The MVAPICH Team