[mvapich-discuss] Announcing the Release of MVAPICH2 1.5RC1

Eric A. Borisch eborisch at ieee.org
Tue May 25 10:45:12 EDT 2010


There is a typo at line 331 of the current src/include/mpimem.h
(trunk and 1.5RC1).

It is only evident when building with --enable-alloca, since the
alloca-based definition of the macro is only compiled in that
configuration.

== ORIGINAL ==

#define MPIU_CHKLMEM_MALLOC_ORSTMT(pointer_,type_,nbytes_,rc_,name_,stmt_) \
{pointer_ = (type_)alloca(nbytes_); \
    if (!(pointer_) && (nbytes > 0)) {	   \
    MPIU_CHKMEM_SETERR(rc_,nbytes_,name_); \
    stmt_;\
}}

== FIXED ==

#define MPIU_CHKLMEM_MALLOC_ORSTMT(pointer_,type_,nbytes_,rc_,name_,stmt_) \
{pointer_ = (type_)alloca(nbytes_); \
    if (!(pointer_) && (nbytes_ > 0)) {	   \
    MPIU_CHKMEM_SETERR(rc_,nbytes_,name_); \
    stmt_;\
}}

== ==

It's missing the underscore after nbytes in the if() clause.

It was fixed in mpich2 a few months ago:
http://trac.mcs.anl.gov/projects/mpich2/ticket/960
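
To make the failure mode concrete, below is a minimal standalone sketch
(my own toy code, not the MVAPICH2 sources; the CHK_LMEM_ALLOCA name and
the little test program are just illustrative). With the underscore in
place every reference goes through the macro parameter, so it compiles;
with the typo, the if() test expands to a bare nbytes, an identifier
that normally does not exist at the expansion site, which is why builds
configured with --enable-alloca fail:

== SKETCH (illustrative only, not MVAPICH2 code) ==

/* Simplified stand-in for the alloca variant of the macro. Every use
 * of the byte count goes through the macro parameter nbytes_; with the
 * typo, the if() test would reference a bare `nbytes`, an identifier
 * that does not exist at the call site, and compilation stops with an
 * "undeclared identifier" error. */
#include <alloca.h>
#include <stdio.h>

#define CHK_LMEM_ALLOCA(pointer_, type_, nbytes_, stmt_)        \
    { pointer_ = (type_)alloca(nbytes_);                        \
      if (!(pointer_) && ((nbytes_) > 0)) { /* was: nbytes */   \
          stmt_;                                                \
      } }

int main(void)
{
    double *work = NULL;
    size_t n = 128;

    /* Expands cleanly because nbytes_ is the macro parameter. */
    CHK_LMEM_ALLOCA(work, double *, n * sizeof(*work),
                    { fprintf(stderr, "alloca failed\n"); return 1; });

    work[0] = 1.0;
    printf("allocated %zu doubles on the stack\n", n);
    return 0;
}

== ==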

Thanks,
 Eric

On Wed, May 5, 2010 at 12:08 AM, Dhabaleswar Panda
<panda at cse.ohio-state.edu> wrote:
> The MVAPICH team is pleased to announce the release of MVAPICH2 1.5RC1
> with the following NEW features:
>
> - MPI 2.2 standard compliant
> - Based on MPICH2 1.2.1p1
> - OFA-IB-Nemesis interface design
>    - OpenFabrics InfiniBand network module support for
>      MPICH2 Nemesis modular design
>    - Support for high-performance intra-node shared memory
>      communication provided by the Nemesis design
>    - Adaptive RDMA Fastpath with Polling Set for high-performance
>      inter-node communication
>    - Shared Receive Queue (SRQ) support with flow control;
>      uses significantly less memory for the MPI library
>    - Header caching
>    - Advanced AVL tree-based Resource-aware registration cache
>    - Memory Hook Support provided by integration with ptmalloc2
>      library. This provides safe release of memory to the
>      Operating System and is expected to benefit the memory
>      usage of applications that heavily use malloc and free operations.
>    - Support for TotalView debugger
>    - Shared Library Support for existing binary MPI application
>      programs to run
>    - ROMIO Support for MPI-IO
>    - Support for additional features (such as hwloc,
>      hierarchical collectives, one-sided, multithreading, etc.),
>      as included in the MPICH2 1.2.1p1 Nemesis channel
> - Flexible process manager support
>    - mpirun_rsh to work with any of the eight interfaces
>      (CH3 and Nemesis channel-based) including OFA-IB-Nemesis,
>      TCP/IP-CH3 and TCP/IP-Nemesis
>    - Hydra process manager to work with any of the eight interfaces
>      (CH3 and Nemesis channel-based) including OFA-IB-CH3,
>      OFA-iWARP-CH3, OFA-RoCE-CH3 and TCP/IP-CH3
> - MPIEXEC_TIMEOUT is honored by mpirun_rsh
>
> This release also contains multiple bug fixes since MVAPICH2-1.4.1. A
> summary of the major fixes is as follows:
>
> - Fix compilation error when configured with
>  `--enable-thread-funneled'
> - Fix MPE functionality, thanks to Anthony Chan <chan at mcs.anl.gov> for
>  reporting and providing the resolving patch
> - Cleanup after a failure in the init phase is handled better by
>  mpirun_rsh
> - Path determination is correctly handled by mpirun_rsh when DPM is
>  used
> - Shared libraries are correctly built (again)
>
> To download MVAPICH2 1.5RC1 and the associated user guide, and to
> access the SVN repository, please visit the following URL:
>
> http://mvapich.cse.ohio-state.edu
>
> All feedback, including bug reports, hints for performance tuning,
> patches and enhancements, is welcome. Please post to the
> mvapich-discuss mailing list.
>
> We are also happy to report that the number of organizations using
> MVAPICH/MVAPICH2 (and registered at the MVAPICH site) has crossed
> 1,100 worldwide (in 58 countries). The MVAPICH team extends its
> thanks to all these organizations.
>
> Thanks,
>
> The MVAPICH Team
>
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>



-- 
Eric A. Borisch
eborisch at ieee.org

Howard Roark laughed.


