[Hidl-announce] Announcing the release of MPI4cuML 0.5

Panda, Dhabaleswar panda at cse.ohio-state.edu
Sat Nov 12 09:15:32 EST 2022

The High-Performance Deep Learning (HiDL) team is pleased to announce
the release of MPI4cuML 0.5, a custom version of the cuML and
associated RAFT libraries with support for the MVAPICH2 high-performance
CUDA-aware communication backend. The communication handle in cuML uses
mpi4py over the MVAPICH2-GDR library and targets modern HPC clusters
built with GPUs and high-performance interconnects.

This release of the MPI4cuML package has the following features:

* MPI4cuML 0.5:

    - Based on cuML 22.02.00
        - Includes ready-to-use examples for KMeans, Linear Regression,
          Nearest Neighbors, and tSVD
    - MVAPICH2 support for RAFT 22.02.00
        - Enabled cuML’s communication engine, RAFT, to use the
          MVAPICH2-GDR backend for Python and C++ cuML applications
           - KMeans, PCA, tSVD, RF, LinearModels
        - Added a switch between the available communication backends
          (MVAPICH2 and NCCL)
    - Built on top of mpi4py over the MVAPICH2-GDR library
    - Tested with
        - Mellanox InfiniBand adapters (FDR and HDR)
        - Various x86-based multi-core platforms (AMD and Intel)
        - NVIDIA A100, V100, and P100 GPUs
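
The bundled examples are run through an MPI launcher on top of
MVAPICH2-GDR. As a rough sketch only (the example script name is a
placeholder, not taken from the package), a two-process CUDA-aware run
might look like:

```shell
# Hypothetical launch of a bundled example on two processes/GPUs.
# MV2_USE_CUDA=1 enables MVAPICH2's CUDA-aware communication path
# so GPU buffers can be passed directly to MPI calls via mpi4py.
export MV2_USE_CUDA=1
mpirun -np 2 python kmeans_example.py
```

Please consult the user guide at the URL below for the exact launch
commands and tuning parameters for your cluster.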

To download the MPI4cuML package and the associated user guide,
please visit the following URL:


Sample performance numbers for MPI4cuML using machine learning
application benchmarks can be viewed by visiting the 'Performance' tab
of the above website.

All questions, feedback, and bug reports are welcome. Please post to
hidl-discuss at lists.osu.edu.


The High-Performance Deep Learning (HiDL) Team

PS: The number of organizations using the HiDL stack has crossed 75
(from 39 countries). The HiDL team would like to thank all of its
users and organizations!
