[mvapich-discuss] Announcing the Release of MVAPICH2 1.9b, MVAPICH2-X 1.9b and OSU Micro-Benchmarks (OMB) 3.9

Dhabaleswar Panda panda at cse.ohio-state.edu
Thu Feb 28 23:28:17 EST 2013


The MVAPICH team is pleased to announce the release of MVAPICH2 1.9b, 
MVAPICH2-X 1.9b (Hybrid MPI+PGAS with UPC and OpenSHMEM support through Unified 
Communication Runtime) and OSU Micro-Benchmarks (OMB) 3.9.

Features, Enhancements, and Bug Fixes for MVAPICH2 1.9b (since MVAPICH2 1.9a2 
release) are listed here.

* New Features and Enhancements (since 1.9a2):
     - Based on MPICH-3.0.2
         - Support for all MPI-3 features
           (Available for all interfaces: OFA-IB-CH3, OFA-iWARP-CH3,
           OFA-RoCE-CH3, uDAPL-CH3, OFA-IB-Nemesis and PSM-CH3)
     - Support for single copy intra-node communication using Linux supported
       CMA (Cross Memory Attach)
         - Provides flexibility for intra-node communication: shared memory,
           LiMIC2, and CMA
     - Checkpoint/Restart using LLNL's Scalable Checkpoint/Restart Library (SCR)
         - Support for application-level checkpointing
         - Support for hierarchical system-level checkpointing
     - Improved job startup time
         - A new runtime variable, MV2_HOMOGENEOUS_CLUSTER, for optimized
           startup on homogeneous clusters
     - New version of LiMIC2 (v0.5.6)
         - Provides support for unlocked ioctl calls
     - Tuned Reduce, Allgather, Reduce_Scatter, Allgatherv collectives
     - Introduced option to export environment variables automatically with
       mpirun_rsh
     - Updated to HWLOC v1.6.1
     - Provided option to use CUDA library call instead of CUDA driver to check
       buffer pointer type
         - Thanks to Christian Robert from Sandia for the suggestion
     - Improved debug messages and error reporting
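As one illustration of the MPI-3 support listed above, here is a minimal nonblocking-collective sketch. This is illustrative code only, not taken from the MVAPICH2 sources; MPI_Iallreduce is one of the nonblocking collectives introduced in the MPI-3 standard.

```c
/* Illustrative sketch: MPI-3 nonblocking collectives, which MVAPICH2 1.9b
 * now supports across all of its interfaces. Not from the MVAPICH2 sources. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank, sum = 0;
    MPI_Request req;

    /* MPI-3 nonblocking collective: start the reduction... */
    MPI_Iallreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ...overlap it with independent computation... */
    double busy = 0.0;
    for (int i = 0; i < 1000; i++) busy += i * 0.5;

    /* ...then complete it before using the result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    printf("rank %d: sum of ranks = %d (busy = %f)\n", rank, sum, busy);

    MPI_Finalize();
    return 0;
}
```

Compile and launch with the usual MVAPICH2 wrappers (e.g. mpicc and mpirun_rsh).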

* Bug-Fixes (since 1.9a2):
     - Fix page fault with memory access violation with LiMIC2 exposed by newer
       Linux kernels
         - Thanks to Karl Schulz from TACC for the report
     - Fix a failure when lazy memory registration is disabled and CUDA is
       enabled
         - Thanks to Jens Glaser from University of Minnesota for the report
     - Fix an issue with variable initialization related to DPM support
     - Rename a few internal variables to avoid name conflicts with external
       applications
         - Thanks to Adam Moody from LLNL for the report
     - Check for libattr during configuration when Checkpoint/Restart and
       Process Migration are requested
         - Thanks to John Gilmore from Vastech for the report
     - Fix build issue with --disable-cxx
     - Set intra-node eager threshold correctly when configured with LiMIC2
     - Fix an issue with MV2_DEFAULT_PKEY in partitioned InfiniBand network
         - Thanks to Jesper Larsen from FCOO for the report
     - Improve makefile rules to use automake macros
         - Thanks to Carmelo Ponti from CSCS for the report
     - Fix configure error with automake conditionals
         - Thanks to Evren Yurtesen from Abo Akademi for the report
     - Fix a few memory leaks and warnings
     - Properly cleanup shared memory files (used by XRC) when applications fail

For a complete set of features of MVAPICH2 1.9b (compared to 1.8), please refer 
to the following URL:

http://mvapich.cse.ohio-state.edu/overview/mvapich2/features.shtml

For a complete set of feature enhancements and bug fixes of MVAPICH2 1.9b 
(compared to 1.8), please refer to the following URL:

http://mvapich.cse.ohio-state.edu/download/mvapich2/changes-1.9.shtml

MVAPICH2-X 1.9b software package (released as a technology preview) provides 
support for hybrid MPI+PGAS (UPC and OpenSHMEM) programming models with unified 
communication runtime for emerging exascale systems. This software package 
provides flexibility for users to write applications using the following 
programming models with a unified communication runtime: MPI, MPI+OpenMP, pure 
UPC, and pure OpenSHMEM programs as well as hybrid MPI(+OpenMP) + PGAS (UPC and 
OpenSHMEM) programs.
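A minimal hybrid MPI + OpenSHMEM sketch of the kind described above. This is illustrative only: it assumes the unified runtime allows both models to be initialized and mixed in one program, and uses the OpenSHMEM 1.0-era start_pes() initializer; it is not from the MVAPICH2-X sources.

```c
/* Illustrative hybrid MPI + OpenSHMEM program (assumes an MVAPICH2-X-style
 * unified runtime where both models share the same set of processes). */
#include <stdio.h>
#include <mpi.h>
#include <shmem.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    start_pes(0);   /* OpenSHMEM 1.0-era initialization */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One-sided OpenSHMEM put into a symmetric variable on the next PE. */
    static int neighbor_rank = -1;
    int next = (rank + 1) % size;
    shmem_int_p(&neighbor_rank, rank, next);
    shmem_barrier_all();

    /* MPI collective over the same processes. */
    int sum = 0;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("PE %d: received put from %d, allreduce sum = %d\n",
           rank, neighbor_rank, sum);

    MPI_Finalize();
    return 0;
}
```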

Features and Enhancements for MVAPICH2-X 1.9b (since MVAPICH2-X 1.9a2
release) are listed here.

* New Features and Enhancements (since 1.9a2):
     - MPI Features
         - Based on MVAPICH2 1.9b (OFA-IB-CH3 interface) including MPI-3
           features. MPI programs can take advantage of all the features
           enabled by default in OFA-IB-CH3 interface of MVAPICH2 1.9b
     - OpenSHMEM Features
         - Updated to OpenSHMEM 1.0d
     - Unified Parallel C (UPC) Features
         - Updated to Berkeley UPC 2.16.0
     - Unified Runtime Features
         - Based on MVAPICH2 1.9b (OFA-IB-CH3 interface). All the runtime
           features enabled by default in OFA-IB-CH3 interface of
           MVAPICH2 1.9b are available in MVAPICH2-X 1.9b

For a complete set of features of MVAPICH2-X 1.9b, please refer to the 
following URL:

http://mvapich.cse.ohio-state.edu/overview/mvapich2x/features.shtml

For a complete set of feature enhancements and bug fixes of MVAPICH2-X
1.9b, please refer to the following URL:

http://mvapich.cse.ohio-state.edu/download/mvapich2x/changes.shtml

New Features and Enhancements of OSU Micro-Benchmarks (OMB) 3.9 (since
OMB 3.8 release) are listed here.

* New Features and Enhancements
     - Support buffer allocation using OpenACC in GPU benchmarks
     - Use average time instead of max time for calculating the bandwidth
       and message rate in osu_mbw_mr
         - Thanks to Alex Mikheev from Mellanox for the patch
* Bug Fixes
     - Properly initialize host buffers for device-to-host (DH) and
       host-to-device (HD) transfers in GPU benchmarks

For a complete set of features of OMB 3.9, please refer to the following URL:

http://mvapich.cse.ohio-state.edu/benchmarks/

For a complete set of feature enhancements and bug fixes of OMB 3.9 (compared 
to 3.8), please refer to the following URL:

http://mvapich.cse.ohio-state.edu/svn/mpi-benchmarks/branches/3.9/CHANGES

For downloading MVAPICH2 1.9b, MVAPICH2-X 1.9b, OMB 3.9, associated
user guides, quick start guide, and accessing the SVN, please visit
the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning,
patches, and enhancements are welcome. Please post them to the
mvapich-discuss mailing list (mvapich-discuss at cse.ohio-state.edu).

Thanks,

The MVAPICH Team

