[mvapich-discuss] Announcing the release of MVAPICH2 2.2rc1, MVAPICH2-X 2.2rc1, OMB 5.3 and OSU INAM 0.9

Panda, Dhabaleswar panda at cse.ohio-state.edu
Wed Mar 30 17:04:27 EDT 2016


The MVAPICH team is pleased to announce the release of MVAPICH2
2.2rc1, MVAPICH2-X 2.2rc1 (Advanced MPI Features, Support for OSU INAM
and Hybrid MPI+PGAS (OpenSHMEM, UPC, CAF and UPC++) with Unified
Communication Runtime), OSU Micro-Benchmarks (OMB) 5.3, and OSU
InfiniBand Network Analysis and Monitoring (INAM) Tool 0.9.

Features and enhancements for MVAPICH2 2.2rc1 are as follows:

* Features and Enhancements (since 2.2b):
    - Support for OpenPower architecture
        - Optimized inter-node and intra-node communication
    - Support for Intel Omni-Path architecture
        - Thanks to Intel for contributing the patch
        - Introduction of a new PSM2 channel for Omni-Path
    - Support for RoCEv2
    - Architecture detection for PSC Bridges system with Omni-Path
    - Enhanced startup performance and reduced memory footprint for storing
      InfiniBand end-point information with SLURM
        - Support for shared-memory-based PMI operations
        - An updated SLURM patch with this support is available from the
          MVAPICH project website
    - Optimized point-to-point and collective tuning for the Chameleon
      InfiniBand systems at TACC/UoC
    - Enable affinity by default for the TrueScale (PSM) and Omni-Path (PSM2)
      channels
    - Enhanced tuning for shared-memory based MPI_Bcast
    - Enhanced debugging support and error messages
    - Update to hwloc version 1.11.2 (see the topology sketch after this
      list)
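
For those tracking the affinity and topology changes above: hwloc (now
bundled at version 1.11.2) performs the node topology detection that
underlies CPU affinity. A minimal, illustrative hwloc 1.x query, given
here as a sketch rather than MVAPICH2 source code, looks like this:

    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topo;

        /* Build a view of the local node's hardware layout */
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        /* Count cores and hardware threads; affinity policies are
           derived from object counts and locality data like these */
        int cores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
        int pus   = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
        printf("cores=%d, hardware threads=%d\n", cores, pus);

        hwloc_topology_destroy(topo);
        return 0;
    }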

* Bug Fixes (since 2.2b):
    - Fix issue in some of the internal algorithms used for MPI_Bcast,
      MPI_Alltoall and MPI_Reduce
    - Fix hang in one of the internal algorithms used for MPI_Scatter
        - Thanks to Ivan Raikov at Stanford for reporting this issue
    - Fix issue with rdma_connect operation
    - Fix issue with Dynamic Process Management feature
    - Fix issue with de-allocating InfiniBand resources in blocking mode
    - Fix build errors caused by improper compile-time guards
        - Thanks to Adam Moody at LLNL for the report
    - Fix finalize hang when running in hybrid or UD-only mode
        - Thanks to Jerome Vienne at TACC for reporting this issue
    - Fix issue in MPI_Win_flush operation
        - Thanks to Nenad Vukicevic for reporting this issue
    - Fix out-of-memory issues with non-blocking collectives code (see the
      sketch after this list)
        - Thanks to Phanisri Pradeep Pratapa and Fang Liu at GaTech for
          reporting this issue
    - Fix fall-through bug in external32 pack
        - Thanks to Adam Moody at LLNL for the report and patch
    - Fix issue with on-demand connection establishment and blocking mode
        - Thanks to Maksym Planeta at TU Dresden for the report
    - Fix memory leaks in hardware multicast based broadcast code
    - Fix memory leaks in the TrueScale (PSM) channel
    - Fix compilation warnings
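
For reference, the non-blocking collectives paths touched by the fixes
above are exercised through standard MPI-3 calls such as MPI_Ibcast.
A minimal, illustrative sketch (not code from MVAPICH2 itself):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int buf[4] = {0, 1, 2, 3};
        MPI_Request req;

        MPI_Init(&argc, &argv);

        /* Start a non-blocking broadcast from rank 0 */
        MPI_Ibcast(buf, 4, MPI_INT, 0, MPI_COMM_WORLD, &req);

        /* ... independent computation can overlap the broadcast ... */

        /* Complete the collective before using buf */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }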

MVAPICH2-X 2.2rc1 provides support for advanced MPI features (User
Mode Memory Registration and Non-blocking Collectives with
Core-Direct), OSU INAM, and hybrid MPI+PGAS (UPC, OpenSHMEM, CAF, and
UPC++) programming models with a unified communication runtime for
emerging exascale systems. The library gives users the flexibility to
write applications against a single communication runtime as MPI,
MPI+OpenMP, pure UPC, pure OpenSHMEM, pure UPC++, or pure CAF
programs, as well as hybrid MPI(+OpenMP) + PGAS (UPC, OpenSHMEM, CAF,
and UPC++) programs.
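
As an illustration of the hybrid model, here is a minimal MPI+OpenSHMEM
sketch. It assumes MPI is initialized before OpenSHMEM; consult the
MVAPICH2-X user guide for the supported initialization order and build
flags.

    #include <stdio.h>
    #include <mpi.h>
    #include <shmem.h>

    /* 'sym' has static storage, so it is symmetric across PEs */
    static int sym = 0;

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);   /* assumption: MPI initialized first */
        shmem_init();

        int me = shmem_my_pe();
        int np = shmem_n_pes();

        /* PGAS-style one-sided put of my rank to the right neighbor */
        shmem_int_p(&sym, me, (me + 1) % np);
        shmem_barrier_all();

        /* ... combined with an MPI collective in the same program */
        int sum = 0;
        MPI_Allreduce(&sym, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        if (me == 0)
            printf("sum of received ranks = %d\n", sum);

        shmem_finalize();
        MPI_Finalize();
        return 0;
    }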

Features and enhancements for MVAPICH2-X 2.2rc1 are as follows:

* Features and Enhancements (since 2.2b):
    - Introducing UPC++ Support
        - Based on Berkeley UPC++ v0.1
        - Introduce UPC++-level support for a new scatter collective
          operation (upcxx_scatter)
        - Optimized UPC++ collectives (improved performance for
          upcxx_reduce, upcxx_bcast, upcxx_gather, upcxx_allgather,
          upcxx_alltoall)

    - MPI Features
        - Based on MVAPICH2 2.2rc1 (OFA-IB-CH3 interface)
        - Support for OpenPower architecture
        - Support for Intel Omni-Path architecture
        - Support for RoCE v2

    - UPC Features
        - Based on GASNet v1.26
        - Support for OpenPower architecture
        - Support for RoCE v2

    - OpenSHMEM Features
        - Support for OpenPower architecture
        - Support for RoCE v2

    - CAF Features
        - Support for RoCE v2

    - Hybrid Program Features
        - Introduce support for hybrid MPI+UPC++ applications
        - Support OpenPower architecture for hybrid MPI+UPC and
          MPI+OpenSHMEM applications

    - Unified Runtime Features
        - Based on MVAPICH2 2.2rc1 (OFA-IB-CH3 interface). All the runtime
          features enabled by default in the OFA-IB-CH3 and OFA-IB-RoCE
          interfaces of MVAPICH2 2.2rc1 are available in MVAPICH2-X 2.2rc1
        - Introduce support for UPC++ and MPI+UPC++ programming models

    - Support for OSU InfiniBand Network Analysis and Monitoring (OSU INAM)
      Tool v0.9
        - Capability to profile and report the process-to-node communication
          matrix for MPI processes at user-specified granularity in
          conjunction with OSU INAM
        - Capability to classify data flowing over a network link at job-level
          and process-level granularity in conjunction with OSU INAM

* Bug Fixes (since 2.2b):
    - Fix compilation warnings and memory leaks

OSU INAM monitors InfiniBand clusters in real time by querying various
subnet management entities in the network. It can also interact with
the MVAPICH2-X software stack to gain insight into an application's
communication pattern and to classify the transferred data as
Point-to-Point, Collective, or Remote Memory Access (RMA) traffic. In
addition, OSU INAM can remotely monitor several parameters of MPI
processes in conjunction with MVAPICH2-X.

* Major Features (since 0.8.5):
    - Significant enhancements to the user interface to enable scaling to
      clusters with thousands of nodes
    - Improved database insert times by using 'bulk inserts'
    - Capability to look up the list of nodes communicating through a network
      link
    - Capability to classify data flowing over a network link at job-level and
      process-level granularity in conjunction with MVAPICH2-X 2.2rc1
    - Capability to profile and report the process-to-node communication
      matrix for MPI processes at user-specified granularity in conjunction
      with MVAPICH2-X 2.2rc1

* Bug Fixes (since 0.8.5):
    - Fix memory leaks in the OSU INAM daemon

New features, enhancements, and bug fixes for OSU Micro-Benchmarks
(OMB) 5.3 are listed below.

* New Features & Enhancements
    - Introduce new UPC++ Benchmarks
        * osu_upcxx_allgather
        * osu_upcxx_alltoall
        * osu_upcxx_async_copy_get
        * osu_upcxx_async_copy_put
        * osu_upcxx_bcast
        * osu_upcxx_gather
        * osu_upcxx_reduce
        * osu_upcxx_scatter

* Bug Fixes
    - Determine the page size at runtime in OpenSHMEM benchmarks (fixes an
      issue seen on OpenPower machines; see the sketch below)
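
The page-size fix above matters because OpenPower Linux systems
commonly use 64 KB pages rather than the 4 KB typical of x86, so a
hard-coded constant misaligns buffers. The portable runtime query,
shown as an illustrative sketch rather than the OMB source itself, is:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ask the OS instead of assuming 4096 bytes */
        long page_size = sysconf(_SC_PAGESIZE);
        printf("page size: %ld bytes\n", page_size);
        return 0;
    }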

To download MVAPICH2 2.2rc1, MVAPICH2-X 2.2rc1, OSU INAM v0.9, and OMB
5.3, along with the associated user guides and quick start guide, or
to access the SVN repository, please visit the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning,
patches, and enhancements are welcome. Please post them to the
mvapich-discuss mailing list (mvapich-discuss at cse.ohio-state.edu).

Thanks,

The MVAPICH Team
