[HiBD] Announcing the release of RDMA for Apache Hadoop-2.x 1.0.0
Panda, Dhabaleswar
panda at cse.ohio-state.edu
Thu Aug 25 03:19:55 EDT 2016
The High-Performance Big Data (HiBD) team is pleased to announce the
release of the RDMA for Apache Hadoop-2.x 1.0.0 package (for the
Hadoop 2.x series) with the following features.
New features compared to Hadoop-2.x 0.9.9 are:
- Memcached-based burst buffer for MapReduce over Lustre-integrated HDFS
(HHH-L-BB mode)
- Optimization of in-memory spill for Maps
- Support for Mellanox EDR HCA
Bug Fixes (compared to Hadoop-2.x 0.9.9):
- Fix an issue when running Apache Spark benchmarks
- Fix a hang in HHH-L mode
- Fix an issue when running HBase in TCP/IP mode
The complete set of features of RDMA for Apache Hadoop-2.x 1.0.0:
- Compliant with Apache Hadoop 2.7.1, Hortonworks Data
Platform (HDP) 2.3.0.0, and Cloudera Distribution including
Apache Hadoop (CDH) 5.6.0 APIs and applications
- Based on Apache Hadoop 2.7.1
- High performance design with native InfiniBand and RoCE support
at the verbs level for HDFS, MapReduce, and RPC components
- Plugin-based architecture supporting RDMA-based designs for
HDFS (HHH, HHH-M, HHH-L), MapReduce, MapReduce over Lustre,
RPC, etc.
- Plugin for Cloudera Distribution including Apache Hadoop
(CDH) (tested with 5.6.0)
- Plugin for Apache Hadoop distribution (tested with 2.7.1)
- Plugin for Hortonworks Data Platform (HDP) (tested with 2.3.0.0)
- Supports deploying Hadoop with Slurm and PBS in different
running modes (HHH, HHH-M, HHH-L, and MapReduce over Lustre)
- Easily configurable for different running modes (HHH, HHH-M, HHH-L,
and MapReduce over Lustre) and different protocols (native InfiniBand,
RoCE, and IPoIB)
- On-demand connection setup
- HDFS over native InfiniBand and RoCE
- RDMA-based write
- RDMA-based replication
- Parallel replication support
- Overlapping in different stages of write and replication
- Enhanced hybrid HDFS design with in-memory and heterogeneous
storage (HHH)
- Supports four modes of operation
- HHH (default) with I/O operations over RAM disk, SSD, and HDD
- HHH-M (in-memory) with I/O operations in memory
- HHH-L (Lustre-integrated) with I/O operations in local
storage and Lustre
- HHH-L-BB (Burst Buffer) with I/O operations in Memcached-based
burst buffer (RDMA-based Memcached) over Lustre
- Policies to efficiently utilize heterogeneous storage
devices (RAM Disk, SSD, HDD, and Lustre)
- Greedy and Balanced policies support
- Automatic policy selection based on available storage types
- Hybrid replication (in-memory and persistent storage) for
HHH default mode
- Memory replication (in-memory only with lazy persistence) for
HHH-M mode
- Lustre-based fault-tolerance for HHH-L mode
- No HDFS replication
- Reduced local storage space usage
- MapReduce over native InfiniBand and RoCE
- RDMA-based shuffle
- Pre-fetching and caching of map output
- In-memory merge
- Advanced optimization in overlapping
- map, shuffle, and merge
- shuffle, merge, and reduce
- Optional disk-assisted shuffle
- Automatic locality-aware shuffle
- Optimization of in-memory spill for Maps
- High performance design of MapReduce over Lustre
- Supports two shuffle approaches
- Lustre read-based shuffle
- RDMA-based shuffle
- Hybrid shuffle based on both shuffle approaches
- Configurable distribution support
- In-memory merge and overlapping of different phases
- RPC over native InfiniBand and RoCE
- JVM-bypassed buffer management
- RDMA or send/recv based adaptive communication
- Intelligent buffer allocation and adjustment for serialization
- Tested with
- Mellanox InfiniBand adapters (DDR, QDR, FDR, and EDR)
- RoCE support with Mellanox adapters
- Various multi-core platforms
- RAM Disks, SSDs, HDDs, and Lustre
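The Greedy and Balanced placement policies over heterogeneous storage can be illustrated with a small sketch. This is not the package's actual API; the class, tier names, and capacities below are hypothetical, and the real policies operate inside HDFS. Assuming each tier exposes a free-block count, the two policies might differ as follows:

```java
// Hypothetical sketch of Greedy vs. Balanced placement over heterogeneous
// storage tiers (RAM disk, SSD, HDD), in the spirit of the HHH policies.
// Class, tier names, and capacities are illustrative, not the real API.
public class PlacementSketch {
    // Tiers ordered fastest-first; free[i] = remaining capacity in blocks.
    static final String[] TIERS = {"RAMDISK", "SSD", "HDD"};

    // Greedy: always use the fastest tier that still has room.
    static String greedy(int[] free) {
        for (int i = 0; i < free.length; i++) {
            if (free[i] > 0) { free[i]--; return TIERS[i]; }
        }
        return "LUSTRE"; // fall back to shared storage when local tiers are full
    }

    // Balanced: use the tier with the most free space, so no single
    // device fills up long before the others.
    static String balanced(int[] free) {
        int best = 0;
        for (int i = 1; i < free.length; i++) {
            if (free[i] > free[best]) best = i;
        }
        if (free[best] == 0) return "LUSTRE";
        free[best]--;
        return TIERS[best];
    }

    public static void main(String[] args) {
        int[] g = {2, 4, 3};  // free blocks per tier for the greedy run
        int[] b = {2, 4, 3};  // identical starting state for the balanced run
        // Greedy drains the RAM disk first: RAMDISK, RAMDISK, SSD
        System.out.println(greedy(g) + " " + greedy(g) + " " + greedy(g));
        // Balanced spreads load by free space: SSD, SSD, HDD
        System.out.println(balanced(b) + " " + balanced(b) + " " + balanced(b));
    }
}
```

In practice the automatic policy selection mentioned above would pick between such strategies based on which storage types are actually present on a node.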
For downloading RDMA for Apache Hadoop-2.x 1.0.0 package and the
associated user guide, please visit the following URL:
http://hibd.cse.ohio-state.edu
Sample performance numbers for benchmarks using RDMA for Apache
Hadoop-2.x 1.0.0 can be viewed under the 'Performance' tab of the
above website.
All questions, feedback, and bug reports are welcome. Please post them to
the rdma-hadoop-discuss mailing list (rdma-hadoop-discuss at
cse.ohio-state.edu).
Thanks,
The High-Performance Big Data (HiBD) Team
PS: The number of organizations using the HiBD stacks has crossed 185
(from 26 countries). Similarly, the number of downloads from the HiBD
site has crossed 17,600. The HiBD team would like to thank all its
users and organizations!