[Hidl-announce] Announcing the release of MPI4cuML 0.1
Panda, Dhabaleswar
panda at cse.ohio-state.edu
Thu Feb 18 15:38:33 EST 2021
The High-Performance Deep Learning (HiDL) team is pleased to announce
the first release of MPI4cuML 0.1, which is a custom version of the
cuML library with support for the MVAPICH2 high-performance CUDA-aware
communication backend. The communication handle in cuML is built on
mpi4py over the MVAPICH2-GDR library, and the package targets modern
HPC clusters built with GPUs and high-performance interconnects.
The first release of the MPI4cuML package is equipped with the
following features:
* MPI4cuML 0.1:
- Based on cuML 0.15
- MVAPICH2 support for C++ and Python APIs
    - Included a cuML C++ CUDA-aware MPI example for KMeans clustering
    - Enabled cuML handles to use the MVAPICH2-GDR backend for Python
      cuML applications
      - Supported algorithms: KMeans, PCA, tSVD, RF, LinearModels
    - Added a switch between the available communication backends
      (MVAPICH2 and NCCL)
- Built on top of mpi4py over the MVAPICH2-GDR library
- Tested with
- Mellanox InfiniBand adapters
- Various x86-based multi-core platforms (AMD and Intel)
- NVIDIA V100 and P100 GPUs
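
The KMeans example above follows the usual data-parallel pattern: each
rank computes partial centroid sums over its local data shard, and a
single allreduce combines them into the global centroid update; that
reduction is the step a CUDA-aware backend such as MVAPICH2-GDR
accelerates. The following is a minimal, illustrative sketch of that
pattern in plain Python, with ranks simulated in-process (the actual
release performs the reduction with mpi4py over MVAPICH2-GDR on GPU
buffers):

```python
# Illustrative sketch of data-parallel KMeans; not the MPI4cuML API.
# Each "shard" plays the role of one MPI rank's local data, and the
# elementwise sum across shards stands in for MPI Allreduce.

def assign(points, centroids):
    """Assign each point to its nearest centroid (squared distance)."""
    labels = []
    for p in points:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(d.index(min(d)))
    return labels

def partial_sums(points, labels, k, dim):
    """Per-rank partial centroid sums and member counts."""
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p, l in zip(points, labels):
        counts[l] += 1
        for j in range(dim):
            sums[l][j] += p[j]
    return sums, counts

def allreduce_kmeans_step(shards, centroids):
    """One KMeans iteration: local partial sums on each simulated rank,
    then an elementwise sum (the allreduce), then the global update."""
    k, dim = len(centroids), len(centroids[0])
    g_sums = [[0.0] * dim for _ in range(k)]
    g_counts = [0] * k
    for points in shards:              # each shard = one rank's data
        labels = assign(points, centroids)
        sums, counts = partial_sums(points, labels, k, dim)
        for c in range(k):             # stands in for MPI Allreduce
            g_counts[c] += counts[c]
            for j in range(dim):
                g_sums[c][j] += sums[c][j]
    # Global centroid update (guard against empty clusters).
    return [[s / max(n, 1) for s in g_sums[c]]
            for c, n in enumerate(g_counts)]

# Two simulated ranks, two clusters:
shards = [[(0.0, 0.0), (2.0, 0.0)], [(4.0, 4.0), (6.0, 4.0)]]
print(allreduce_kmeans_step(shards, [(0.0, 0.0), (5.0, 4.0)]))
# -> [[1.0, 0.0], [5.0, 4.0]]
```

Because the reduced buffers live on the GPU in the real application, a
CUDA-aware allreduce avoids staging them through host memory on every
iteration.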
To download the MPI4cuML package and the associated user guide, please
visit the following URL:
http://hidl.cse.ohio-state.edu
Sample performance numbers for MPI4cuML on machine learning
application benchmarks can be viewed under the 'Performance' tab of
the above website.
All questions, feedback, and bug reports are welcome. Please post to
hidl-discuss at lists.osu.edu.
Thanks,
The High-Performance Deep Learning (HiDL) Team
http://hidl.cse.ohio-state.edu