[mvapich-discuss] Announcing the release of MVAPICH2-GDR 2.2rc1

Panda, Dhabaleswar panda at cse.ohio-state.edu
Sat May 28 18:36:15 EDT 2016


The MVAPICH team is pleased to announce the release of MVAPICH2-GDR
2.2rc1.

MVAPICH2-GDR 2.2rc1 is based on the standard MVAPICH2 2.2rc1 release
and incorporates designs that take advantage of GPUDirect RDMA
technology for inter-node data movement on NVIDIA GPU clusters with
Mellanox InfiniBand interconnects. Further, MVAPICH2-GDR 2.2rc1
provides efficient intra-node CUDA-Aware managed memory communication
and support for RDMA_CM, RoCE-V1, and RoCE-V2.
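
As a quick illustration of the CUDA-Aware managed memory support
mentioned above, the following minimal sketch (our example, not part
of the release) passes a cudaMallocManaged() buffer directly to MPI
calls; the file name, buffer size, and variable names are illustrative,
and the code assumes a CUDA-aware MPI build such as MVAPICH2-GDR:

    /* Illustrative sketch only: exchange a CUDA managed-memory buffer
     * between two ranks with a CUDA-aware MPI library. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t n = 1 << 20;
        float *buf = NULL;

        /* Managed memory is accessible from host and device; a
         * CUDA-aware MPI can take this pointer directly. */
        cudaMallocManaged((void **)&buf, n * sizeof(float),
                          cudaMemAttachGlobal);

        if (rank == 0) {
            for (size_t i = 0; i < n; i++) buf[i] = 1.0f;
            MPI_Send(buf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received buf[0] = %f\n", buf[0]);
        }

        cudaFree(buf);
        MPI_Finalize();
        return 0;
    }

Compile with the library's mpicc (adding the CUDA include/library
paths and -lcudart if your wrapper does not do so) and run two ranks
on the same node to exercise the intra-node path.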

Features, Enhancements, and Bug Fixes for MVAPICH2-GDR 2.2rc1 are
listed here.

* Features and Enhancements (since MVAPICH2-GDR 2.2b)
    - Based on MVAPICH2 2.2rc1
    - Support for high-performance non-blocking send operations from GPU
      buffers (a usage sketch follows this list)
    - Enhanced intra-node CUDA-Aware managed memory communication using a
      new CUDA-IPC-based design
    - Added support for RDMA_CM communication
    - Added support for RoCE-V1 and RoCE-V2
    - Added a GPU-based tuning framework for Bcast and Gather operations
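
As referenced in the first item above, here is a minimal sketch (again
our example, not part of the release) of a non-blocking transfer issued
directly from GPU device buffers; names and sizes are illustrative:

    /* Illustrative sketch only: non-blocking send/receive from GPU
     * device memory with a CUDA-aware MPI library. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t n = 1 << 20;
        float *d_buf = NULL;
        cudaMalloc((void **)&d_buf, n * sizeof(float));

        MPI_Request req = MPI_REQUEST_NULL;
        if (rank == 0) {
            cudaMemset(d_buf, 0, n * sizeof(float));
            /* Device pointer is handed straight to MPI_Isend; the
             * library stages or RDMAs the data as appropriate. */
            MPI_Isend(d_buf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &req);
        } else if (rank == 1) {
            MPI_Irecv(d_buf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &req);
        }

        /* Independent work could overlap with the transfer here. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

With MVAPICH2-based libraries, GPU buffer support is typically enabled
at run time (for example, MV2_USE_CUDA=1 with the mpirun_rsh launcher);
please consult the user guide for the exact settings for your
installation.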

* Bug Fixes (since MVAPICH2-GDR 2.2b):
    - Properly handle socket/NUMA node binding
    - Remove usage of the default CUDA stream during communication
    - Fix compile warnings
    - Properly handle out-of-WQE scenarios
    - Fix memory leaks in the multicast code path

The MVAPICH2-GDR 2.2rc1 release requires the following software to be
installed on your system:

  - Mellanox OFED 2.1 or later
  - NVIDIA Driver 331.20 or later
  - NVIDIA CUDA Toolkit 6.0 or later
  - Plugin module to enable GPUDirect RDMA
  - (Strongly recommended) NVIDIA GDRCOPY module

Further, MVAPICH2-GDR 2.2rc1 also supports GPU clusters running
regular OFED (without GPUDirect RDMA).

To download MVAPICH2-GDR 2.2rc1, the associated user guide, and sample
performance numbers, please visit the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning,
and enhancements are welcome. Please post them to the mvapich-discuss
mailing list (mvapich-discuss at cse.ohio-state.edu).

Thanks,

The MVAPICH Team

PS: We are also happy to report that the number of organizations using
MVAPICH2 libraries (and registered at the MVAPICH site) has crossed
2,600 worldwide (in 81 countries). The number of downloads from the
MVAPICH site has crossed 377,000 (0.37 million). The MVAPICH team
would like to thank all of its users and organizations!
