[mvapich-discuss] Re: [mvapich] Announcing the Release of MVAPICH2 1.4RC1

Nilesh Awate nilesha at cdac.in
Wed Jun 3 02:25:17 EDT 2009


Dear Sir,

This is regarding our previous mail conversation about the compilation
error with --enable-g=all.

mvapich2-1.2-2009-05-12 was supposed to have fixed that bug, but I still
encountered the following error while compiling for the udapl device with
the above flag:

/home/htdg/pn_mpi/mvapich2-1.2-2009-05-12/lib/libmpich.a(ch3u_handle_connection.o)(.text+0x8e): 
In function `MPIDI_CH3U_Handle_connection':
/home/htdg/pn_mpi/mvapich2-1.2-2009-05-12/src/mpid/ch3/src/ch3u_handle_connection.c:63: 
undefined reference to `MPIDI_CH3_VC_GetStateString'

The error is due to a call to "MPIDI_CH3_VC_GetStateString", a function
that is not defined anywhere.

I then found that its definition should be in the following file:

"mrail/src/udapl/udapl_channel_manager.c"

#ifdef USE_DBG_LOGGING
/* Stub added locally just to satisfy the linker. */
const char *MPIDI_CH3_VC_GetStateString(MPIDI_VC_t *vc)
{
    return NULL;
}
#endif

This definition may not be correct, but it suppresses the link error.
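
If a slightly more useful stub is wanted in the meantime, something like
the sketch below (placed in the same udapl_channel_manager.c) might work.
This is only a guess, not the official fix; it assumes the generic ch3 VC
state constants from MPICH2 1.0.x (MPIDI_VC_STATE_ACTIVE etc.) are visible
in that file.

#ifdef USE_DBG_LOGGING
/* Hypothetical replacement stub: maps the VC state to a printable
 * name so the DBG output shows something useful.  Returning a static
 * string (never NULL) also keeps "%s"-style debug formatting from
 * dereferencing a null pointer. */
const char *MPIDI_CH3_VC_GetStateString(MPIDI_VC_t *vc)
{
    switch (vc->state) {
        case MPIDI_VC_STATE_INACTIVE:     return "MPIDI_VC_STATE_INACTIVE";
        case MPIDI_VC_STATE_ACTIVE:       return "MPIDI_VC_STATE_ACTIVE";
        case MPIDI_VC_STATE_LOCAL_CLOSE:  return "MPIDI_VC_STATE_LOCAL_CLOSE";
        case MPIDI_VC_STATE_REMOTE_CLOSE: return "MPIDI_VC_STATE_REMOTE_CLOSE";
        case MPIDI_VC_STATE_CLOSE_ACKED:  return "MPIDI_VC_STATE_CLOSE_ACKED";
        default:                          return "unknown";
    }
}
#endif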

I hope the appropriate definition has been added in the released RC version.

with Best Regards,
Nilesh Awate

Dhabaleswar Panda wrote:
> The MVAPICH team is pleased to announce the release of MVAPICH2 1.4RC1
> with the following NEW features:
>
> - MPI 2.1 standard compliant
>
> - Based on MPICH2 1.0.8p1
>
> - Dynamic Process Management (DPM) Support with mpirun_rsh and MPD
>    - Available for OpenFabrics (IB) interface
>
> - Support for eXtended Reliable Connection (XRC)
>    - Available for OpenFabrics (IB) interface
>
> - Kernel-level single-copy intra-node communication support based on
>   LiMIC2
>    - Delivers superior intra-node performance for medium and
>      large messages
>    - Available for all interfaces (IB, iWARP and uDAPL)
>
> - Enhancement to mpirun_rsh framework for faster job startup
>   on large clusters
>    - Hierarchical ssh to nodes to speedup job startup
>    - Available for OpenFabrics (IB and iWARP), uDAPL interfaces
>      (including Solaris) and the new QLogic InfiniPath interface
>
> - Scalable checkpoint-restart with mpirun_rsh framework
> - Checkpoint-restart with intra-node shared memory (kernel-level with
>   LiMIC2) support
>    - Available for OpenFabrics (IB) Interface
>
> - K-nomial tree-based solution together with shared memory-based
>   broadcast for scalable MPI_Bcast operation
>    - Available for all interfaces (IB, iWARP and uDAPL)
>
> - Native support for QLogic InfiniPath
>    - Provides support over PSM interface
>
> This release also contains multiple bug fixes since MVAPICH2-1.2p1. A
> summary of the major fixes is as follows:
>
>   - Changed parameters for iWARP for increased scalability
>
>   - Fix error with derived datatypes and Put and Accumulate operations
>
>   - Unregister stale memory registrations earlier to prevent
>     malloc failures
>
>   - Fix for compilation issues with --enable-g=mem and --enable-g=all
>
>   - Change dapl_prepost_noop_extra value from 5 to 8 to prevent
>     credit flow issues
>
>   - Re-enable RGET (RDMA Read) functionality
>
>   - Fix SRQ Finalize error
>
>   - Fix a multi-rail one-sided error when multiple QPs are used
>
>   - Fix PMI lookup name failure with SLURM
>
>   - Fix port auto-detection failure when the first HCA did
>     not have an active port
>
>   - MPE support for shared memory collectives now available
>
> For downloading MVAPICH2 1.4RC1, associated user guide and accessing
> the SVN, please visit the following URL:
>
> http://mvapich.cse.ohio-state.edu
>
> All feedback, including bug reports, hints for performance tuning,
> patches, and enhancements, is welcome. Please post to the
> mvapich-discuss mailing list.
>
> Thanks,
>
> The MVAPICH Team
>
>
> _______________________________________________
> mvapich mailing list
> mvapich at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich
>
>   
