[Mvapich] Announcing the release of MVAPICH-Plus 3.0b

Subramoni, Hari subramoni.1 at osu.edu
Wed Nov 1 14:17:36 EDT 2023


The MVAPICH team is pleased to announce the release of MVAPICH-Plus 3.0b. Please let me know if you have any comments or feedback.



The new MVAPICH-Plus series is an advanced version of the MVAPICH MPI library. It aims to unify the MVAPICH2-GDR and MVAPICH2-X feature sets and to provide optimized support for modern platforms (CPUs, GPUs, and interconnects) for HPC, Deep Learning, Machine Learning, Big Data, and Data Science applications.



The major features and enhancements available in MVAPICH-Plus 3.0b are as follows:



    - Based on MVAPICH 3.0

    - Support for various high-performance communication fabrics

        - InfiniBand, Slingshot-10/11, Omni-Path, OPX, RoCE, and Ethernet

    - Supports naive CPU staging for small message collective operations

        - Tuned naive limits for the following systems

            - Pitzer at OSC, Owens at OSC, Ascend at OSC, Frontera at TACC, Lonestar6 at TACC,

              ThetaGPU at ALCF, Polaris at ALCF, Tioga at LLNL

    - Initial support for blocking collectives on NVIDIA and AMD GPUs (see the device-buffer usage sketch after this list)

        - Allgather, Allgatherv, Allreduce, Alltoall, Alltoallv, Bcast, Gather,

          Gatherv, Reduce, Reduce_local, Reduce_scatter, Reduce_scatter_block,

          Scatter, Scatterv

    - Initial support for non-blocking GPU collectives on NVIDIA and AMD GPUs

        - Iallgather, Iallgatherv, Iallreduce, Ialltoall, Ialltoallv, Ibcast,

          Igather, Igatherv, Ireduce, Ireduce_scatter, Iscatter, Iscatterv

    - Enhanced support for blocking GPU to GPU point-to-point operations on

      NVIDIA and AMD GPUs (see the point-to-point sketch after this list)

        - Send, Recv

        - NVIDIA GDRCopy, AMD LargeBar support

        - CUDA and ROCM IPC support

    - Alpha support for non-blocking GPU to GPU point-to-point operations on

      NVIDIA and AMD GPUs

        - Isend, Irecv

    - Tested with

        - Various HPC applications, mini-applications, and benchmarks

        - MPI4cuML (a custom cuML package with MPI support)

    - Tested with CUDA <= 11.8 and CUDA 12.0

    - Tested with ROCM <= 5.6.0

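As a usage illustration, the sketch below shows how the blocking and non-blocking GPU collectives listed above are typically invoked from a CUDA-aware MPI library such as MVAPICH-Plus: the MPI calls are passed CUDA device pointers directly. This is a minimal sketch rather than an excerpt from the MVAPICH documentation; the buffer size, fill values, and lack of error checking are illustrative assumptions.

/* Illustrative sketch: blocking and non-blocking collectives on CUDA device
 * buffers, assuming a CUDA-aware MPI build. Sizes and values are arbitrary. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1024;
    double host[1024];
    for (int i = 0; i < n; i++)
        host[i] = (double)rank;

    /* Allocate device buffers and stage the input on the GPU. */
    double *d_send, *d_recv;
    cudaMalloc((void **)&d_send, n * sizeof(double));
    cudaMalloc((void **)&d_recv, n * sizeof(double));
    cudaMemcpy(d_send, host, n * sizeof(double), cudaMemcpyHostToDevice);

    /* Blocking collective invoked directly on device pointers. */
    MPI_Allreduce(d_send, d_recv, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Non-blocking variant; independent work can overlap before the wait. */
    MPI_Request req;
    MPI_Iallreduce(d_send, d_recv, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("Allreduce/Iallreduce on device buffers completed\n");

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}

The same pattern applies to the other collectives in the list (Bcast/Ibcast, Alltoall/Ialltoall, and so on) by substituting the corresponding MPI call.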

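In the same spirit, here is a minimal point-to-point sketch, again assuming a CUDA-aware build; the two-rank layout, message size, and tag are illustrative. It passes a device buffer to a blocking Send on one rank and to a non-blocking Irecv on the other.

/* Illustrative sketch: GPU-to-GPU point-to-point transfer between two ranks,
 * assuming a CUDA-aware MPI build. Run with at least two processes. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 4096;
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    if (rank == 0) {
        /* Blocking send straight from the device buffer. */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Non-blocking receive into the device buffer, then wait. */
        MPI_Request req;
        MPI_Irecv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

Both sketches would be compiled with the MPI compiler wrapper (e.g. mpicc) and launched on two or more processes; any environment settings the library may require for GPU support are not shown here.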

To download the MVAPICH-Plus 3.0b library and the associated user guide, please visit the following URL:



http://mvapich.cse.ohio-state.edu



All questions, feedback, bug reports, hints for performance tuning, patches, and enhancements are welcome. Please post them to the mvapich-discuss mailing list (mvapich-discuss at lists.osu.edu).



Thanks,



The MVAPICH Team



PS: We are also happy to report that the number of organizations using the MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325 worldwide (in 90 countries). The number of downloads from the MVAPICH site has crossed 1,732,000 (1.73 million). The MVAPICH team would like to thank all its users and organizations!