[mvapich-discuss] Announcing the release of MVAPICH2 2.3a

Panda, Dhabaleswar panda at cse.ohio-state.edu
Wed Mar 29 23:33:03 EDT 2017


The MVAPICH team is pleased to announce the release of MVAPICH2 2.3a.

Features and enhancements for MVAPICH2 2.3a are as follows:

* Features and Enhancements (since MVAPICH2 2.2-GA):
    - Based on and ABI compatible with MPICH 3.2
    - Support collective offload using Mellanox's SHArP for Allreduce
        - Enhance tuning framework for Allreduce using SHArP
    - Introduce capability to run MPI jobs across multiple InfiniBand subnets
    - Introduce basic support for executing MPI jobs in Singularity
    - Enhance collective tuning for Intel Knights Landing and Intel Omni-Path
    - Enhance process mapping support for multi-threaded MPI applications
      (see the first example after this feature list)
        - Introduce MV2_CPU_BINDING_POLICY=hybrid
        - Introduce MV2_THREADS_PER_PROCESS
    - On-demand connection management for PSM-CH3 and PSM2-CH3 channels
    - Enhance PSM-CH3 and PSM2-CH3 job startup to use non-blocking PMI calls
    - Enhance debugging support for PSM-CH3 and PSM2-CH3 channels
    - Improve performance of architecture detection
    - Introduce runtime parameter MV2_SHOW_HCA_BINDING to show process to HCA
      bindings (see the second example after this feature list)
    - Enhance MV2_SHOW_CPU_BINDING to enable display of CPU bindings on all
      nodes
    - Deprecate OFA-IB-Nemesis channel
    - Update to hwloc version 1.11.6
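
As a minimal sketch of the new hybrid mapping parameters, a multi-threaded
MPI job could be launched as shown below (the host file, process count,
thread count, and application name are placeholders, not part of this
release):

    # hosts, 8, 4, and ./app are placeholders for illustration only
    mpirun_rsh -np 8 -hostfile hosts MV2_CPU_BINDING_POLICY=hybrid \
        MV2_THREADS_PER_PROCESS=4 ./app

Here MV2_THREADS_PER_PROCESS is meant to tell the library how many threads
each MPI process spawns, so that the hybrid binding policy can reserve
cores for them.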

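Similarly, a sketch of the new binding-display parameters (again, the host
file, process count, application name, and the value 1 used to enable the
display are assumptions, not taken from this announcement):

    # hosts, 8, and ./app are placeholders; 1 is assumed to enable display
    mpirun_rsh -np 8 -hostfile hosts MV2_SHOW_CPU_BINDING=1 \
        MV2_SHOW_HCA_BINDING=1 ./app

With these set, the library should print the process-to-core and
process-to-HCA mappings at startup, which helps verify that the selected
binding policy took effect.
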
* Bug Fixes (since MVAPICH2 2.2-GA):
    - Fix issue with ring startup in multi-rail systems
    - Fix startup issue with SLURM and PMI-1
        - Thanks to Manuel Rodriguez for the report
    - Fix startup issue caused by fix for bash `shellshock' bug
    - Fix issue with very large messages in PSM
    - Fix issue with singleton jobs and PMI-2
        - Thanks to Adam T. Moody at LLNL for the report
    - Fix incorrect reporting of non-existing files with Lustre ADIO
        - Thanks to Wei Kang at NWU for the report
    - Fix hang in MPI_Probe
        - Thanks to John Westlund at Intel for the report
    - Fix issue while setting affinity with Torque Cgroups
        - Thanks to Doug Johnson at OSC for the report
    - Fix runtime errors observed when running MVAPICH2 on aarch64 platforms
        - Thanks to Sreenidhi Bharathkar Ramesh at Broadcom for posting
          the original patch
        - Thanks to Michal Schmidt at Red Hat for re-posting it
    - Fix failure in mv2_show_cpu_affinity with affinity disabled
        - Thanks to Carlos Rosales-Fernandez at TACC for the report
    - Fix mpirun_rsh error when running short-lived non-MPI jobs
        - Thanks to Kevin Manalo at OSC for the report
    - Fix a comment and a spelling mistake
        - Thanks to Maksym Planeta for the report
    - Ignore cpusets and cgroups that may have been set by the resource manager
        - Thanks to Adam T. Moody at LLNL for the report and the patch
    - Fix Reduce tuning table entry for the 2 PPN, 2 node case
    - Fix compilation issues due to inline keyword with GCC 5 and newer
    - Fix compilation warnings and memory leaks

To download MVAPICH2 2.3a and the associated user guides and quick start
guide, and to access the SVN repository, please visit the following URL:

http://mvapich.cse.ohio-state.edu

The MVAPICH2 2.3a release provides excellent job start-up performance
(only 5.8 seconds for MPI_Init and 21 seconds for Hello World with 64K
processes on a KNL + Omni-Path cluster).  More details on job start-up
performance can be obtained from the following URL:

http://mvapich.cse.ohio-state.edu/performance/job-startup/

All questions, feedback, bug reports, hints for performance tuning,
patches, and enhancements are welcome. Please post them to the
mvapich-discuss mailing list (mvapich-discuss at cse.ohio-state.edu).

Thanks,

The MVAPICH Team

PS: We are also happy to report that the number of organizations using
MVAPICH2 libraries (and registered at the MVAPICH site) has crossed
2,750 worldwide (in 83 countries). The number of downloads from the
MVAPICH site has crossed 412,000 (0.41 million).  The MVAPICH team
would like to thank all of its users and organizations!


