[Mvapich-discuss] Announcing the Release of MVAPICH2 2.3.6 GA and OSU Micro-Benchmarks (OMB) 5.7.1

Subramoni, Hari subramoni.1 at osu.edu
Tue May 11 20:08:58 EDT 2021


The MVAPICH team is pleased to announce the release of MVAPICH2 2.3.6 GA.

The features, enhancements, and bug fixes for MVAPICH2 2.3.6 GA are as follows:

* Features and Enhancements (since 2.3.5):
    - Support collective offload using Mellanox's SHARP for Reduce and Bcast
      (see the SHARP launch example after this list)
        - Enhanced tuning framework for Reduce and Bcast using SHARP
    - Enhanced performance for UD-Hybrid code
    - Add multi-rail support for UD-Hybrid code
    - Enhanced performance for shared-memory collectives
    - Enhanced job-startup performance for flux job launcher
    - Add support in mpirun_rsh to use srun daemons to launch jobs
    - Add support in mpirun_rsh to specify processes per node using
      the '-ppn' option (see the launch example after this list)
    - Use PMI2 by default when SLURM is selected as the process manager
      (see the srun example after this list)
    - Add support to use aligned memory allocations for multi-threaded
      applications
        - Thanks to Evan J. Danish @OSC for the report
    - Architecture detection and enhanced point-to-point tuning for
      Oracle BM.HPC2 cloud shape
    - Enhanced collective tuning for Frontera at TACC and Expanse at SDSC
    - Add support for GCC compiler v11
    - Add support for Intel IFX compiler
    - Update hwloc v1 code to v1.11.14
    - Update hwloc v2 code to v2.4.2
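
To put the SHARP items above in context: SHARP-based collective offload in
MVAPICH2 is enabled at run time through an environment variable. A minimal
launch sketch, assuming the MV2_ENABLE_SHARP control described in the
MVAPICH2 user guide also governs the new Reduce and Bcast paths
(application and hostfile names are illustrative):

    MV2_ENABLE_SHARP=1 mpirun_rsh -np 64 -hostfile hosts ./app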
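
For the new '-ppn' option, a minimal launch sketch (process counts and
hostfile name are illustrative); this starts 8 processes, 4 per node, on
the hosts listed in the file 'hosts':

    mpirun_rsh -np 8 -ppn 4 -hostfile hosts ./app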
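
With PMI2 now the default under SLURM, an equivalent explicit srun
invocation would look as follows, assuming SLURM's PMI2 plugin is
available (process count illustrative):

    srun --mpi=pmi2 -n 64 ./app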

* Bug Fixes (since 2.3.5):
    - Updates to IME support in MVAPICH2
        - Thanks to Bernd Schubert and Jean-Yves Vet @DDN
          for the patch
    - Improve error reporting in dlopen code path
        - Thanks to Matthew W. Anderson @INL for the report
    - Fix memory leak in collectives code path
        - Thanks to Matthew W. Anderson @INL and the PETSc
          team for the report and patch
    - Fix issues in DPM code
        - Thanks to Lana Deere @D2S Inc for the report
    - Fix issues when using sys_siglist array
        - Thanks to Jorge D'Elia @Universidad Nacional Del Litoral
          in Santa Fe, Argentina for the report
    - Fix issues with GCC v11
        - Thanks to Honggang Li @RedHat for the report
    - Fix issues in Win_shared_alloc (see the shared-memory window sketch
      after this list)
        - Thanks to Adam Moody @LLNL for the report
    - Fix issues with HDF5 in ROMIO code
        - Thanks to Mark Dixon @Durham University for the report
    - Fix issues with srun based launch when SLURM hostfile is specified
      manually
        - Thanks to Greg Lee @LLNL for the report
    - Fix issues in UD-Hybrid code path
    - Fix issues in MPI_Win_test leading to hangs in multi-rail scenarios
    - Fix issues in job startup code leading to degraded startup performance
    - Update code to gracefully handle any number of HCAs
    - Fix hang in shared memory code with stencil applications
    - Fix segmentation fault in finalize
    - Fix compilation warnings, memory leaks, and spelling mistakes
    - Fix an issue with external32 datatypes being converted incorrectly
      (see the external32 packing sketch after this list)
        - Thanks to Adam Moody @LLNL for the report
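
For context on the Win_shared_alloc fix, here is a minimal C sketch of the
MPI-3 shared-memory window interface that this code path implements; it
illustrates the standard API, not MVAPICH2-internal code (error handling
omitted):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Group the ranks that can share physical memory */
        MPI_Comm shm_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &shm_comm);

        /* Allocate one int per rank inside a shared window */
        int *baseptr;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                shm_comm, &baseptr, &win);

        int rank;
        MPI_Comm_rank(shm_comm, &rank);
        *baseptr = rank;          /* write to this rank's own segment */
        MPI_Win_fence(0, win);    /* make the stores visible to peers */

        MPI_Win_free(&win);
        MPI_Comm_free(&shm_comm);
        MPI_Finalize();
        return 0;
    }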
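
Likewise, for the external32 fix, a short C sketch of the portable packing
calls where that conversion takes place (buffer size and values are
illustrative):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double data[4] = {1.0, 2.0, 3.0, 4.0};

        /* Ask how many bytes the external32 representation needs */
        MPI_Aint size = 0;
        MPI_Pack_external_size("external32", 4, MPI_DOUBLE, &size);

        /* Convert to the portable external32 byte layout */
        char buf[64];
        MPI_Aint pos = 0;
        MPI_Pack_external("external32", data, 4, MPI_DOUBLE,
                          buf, size, &pos);

        MPI_Finalize();
        return 0;
    }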

The new features, enhancements, and bug fixes for OSU Micro-Benchmarks (OMB)
5.7.1 are listed below:

* New Features & Enhancements (since v5.7)
    - Enhance support for CUDA managed memory benchmarks
        - Thanks to Ian Karlin and Nathan Hanford @LLNL for the feedback
    - Add support to send and receive data from different buffers for
      osu_latency, osu_bw, osu_bibw, and osu_mbw_mr (see the run example
      after this list)
    - Add support to print minimum and maximum communication times for
      non-blocking benchmarks
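
The point-to-point benchmarks named above run as ordinary MPI jobs; a
minimal two-process sketch with illustrative host names (the option that
selects distinct send and receive buffers is documented in the OMB 5.7.1
README):

    mpirun_rsh -np 2 host1 host2 ./osu_latency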

* Bug Fixes (since v5.7)
    - Update README with a revised description of osu_latency_mp
        - Thanks to Honggang Li @RedHat for the suggestion
    - Fix error in setting the benchmark name in osu_allgatherv.c
        - Thanks to Brandon Cook @LBL for the report

For downloading MVAPICH2 2.3.6 GA, OMB 5.7.1, and the associated user guides,
please visit the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning, patches,
and enhancements are welcome. Please post them to the mvapich-discuss mailing
list (mvapich-discuss at cse.ohio-state.edu).

Thanks,

The MVAPICH Team

PS: We are also happy to share that the number of organizations using MVAPICH2
libraries (and registered at the MVAPICH site) has crossed 3,150 worldwide (in
89 countries). The number of downloads from the MVAPICH site has crossed
1,363,000 (1.36 million). The MVAPICH team would like to thank all its users
and organizations!
