From panda at cse.ohio-state.edu  Thu Aug 10 23:37:50 2023
From: panda at cse.ohio-state.edu (Panda, Dhabaleswar)
Date: Fri, 11 Aug 2023 03:37:50 +0000
Subject: [Mvapich] Join the MVAPICH Team for multiple presentations at the upcoming Hot Interconnect '23 conference
Message-ID:

The MVAPICH team members will be presenting two research papers and two tutorials at the Hot Interconnect 2023 conference, to be held virtually during August 23-25, 2023. More details of the events are provided at:

https://mvapich.cse.ohio-state.edu/conference/947/talks/

Online attendance for the conference is free. Join us for these presentations and interact with the project team members!!

Thanks,

The MVAPICH Team


From panda at cse.ohio-state.edu  Thu Aug 17 22:59:20 2023
From: panda at cse.ohio-state.edu (Panda, Dhabaleswar)
Date: Fri, 18 Aug 2023 02:59:20 +0000
Subject: [Mvapich] Final Program for the MUG '23 Conference is available + Free online attendance
Message-ID:

It is almost time for the 11th annual MVAPICH User Group (MUG) conference, which will take place from August 21-23, 2023. The final program is available at:

http://mug.mvapich.cse.ohio-state.edu/program/

The conference will be held in a hybrid manner. As indicated earlier, online attendance is free. If interested, please join us using the registration link:

http://mug.mvapich.cse.ohio-state.edu/registration/

Looking forward to seeing you at the conference next week.

Thanks,

The MUG '23 Organizers


From panda at cse.ohio-state.edu  Thu Aug 31 23:36:52 2023
From: panda at cse.ohio-state.edu (Panda, Dhabaleswar)
Date: Fri, 1 Sep 2023 03:36:52 +0000
Subject: [Mvapich] Announcing the release of MPI4Spark 0.2
Message-ID:

The OSU High-Performance Big Data (HiBD) team is pleased to announce the release of MPI4Spark 0.2, a custom version of the Apache Spark package that exploits high-performance MPI communication for Big Data applications on modern HPC clusters with InfiniBand, Intel Omni-Path, RoCE, and HPE Slingshot interconnects. The MPI communication backend in MPI4Spark uses the MVAPICH2-J Java bindings of MVAPICH2. The MPI4Spark design provides performance portability for Spark workloads across HPC clusters with different interconnects.

This release of the MPI4Spark package is equipped with the following features:

* MPI4Spark 0.2 Features:
    - Based on Apache Spark 3.0.0
    - Support for the YARN cluster manager
    - Compliant with user-level Apache Spark APIs and packages
    - High-performance design that utilizes MPI-based communication
    - Utilizes MPI point-to-point operations (see the illustrative sketch at the end of this announcement)
    - Relies on MPI Dynamic Process Management (DPM) features for launching executor processes
    - Relies on Multiple-Program-Multiple-Data (MPMD) launcher mode for launching executors when using the YARN cluster manager
    - Built on top of the MVAPICH2-J Java bindings for the MVAPICH2 family of MPI libraries
    - Tested with
        - OSU HiBD-Benchmarks (GroupBy and SortBy)
        - Intel HiBench Suite (Micro Benchmarks, Machine Learning, and Graph workloads)
        - Mellanox InfiniBand adapters (EDR and HDR; 100G and 200G)
        - HPC systems with Intel OPA and Cray Slingshot interconnects
        - Various multi-core platforms

To download the MPI4Spark 0.2 package and the associated user guide, please visit:

http://hibd.cse.ohio-state.edu

Sample performance numbers for MPI4Spark using these benchmarks can be viewed under the `Performance' tab of the above website.

All questions, feedback, and bug reports are welcome. Please post them to rdma-spark-discuss at lists.osu.edu.
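
For readers new to MPI from Java, the following is a minimal, illustrative sketch of the kind of MPI point-to-point messaging the MPI4Spark backend builds on. It is not MPI4Spark or MVAPICH2-J source code: it assumes Open MPI-style Java bindings (package mpi, MPI.Init, Comm.send/recv), and the actual MVAPICH2-J class and method names may differ, so please consult the MVAPICH2-J user guide on the HiBD site for the authoritative API.

    // PingPong.java -- illustrative sketch only.
    // Assumes Open MPI-style Java bindings (package mpi);
    // MVAPICH2-J's actual API may differ -- see its user guide.
    import mpi.*;

    public class PingPong {
        public static void main(String[] args) throws MPIException {
            MPI.Init(args);                        // initialize the MPI runtime
            int rank = MPI.COMM_WORLD.getRank();   // this process's rank in COMM_WORLD

            int[] buf = new int[1];
            if (rank == 0) {
                buf[0] = 42;
                // Rank 0 sends one int to rank 1 using message tag 0
                MPI.COMM_WORLD.send(buf, 1, MPI.INT, 1, 0);
            } else if (rank == 1) {
                // Rank 1 receives one int from rank 0 using message tag 0
                MPI.COMM_WORLD.recv(buf, 1, MPI.INT, 0, 0);
                System.out.println("Rank 1 received: " + buf[0]);
            }

            MPI.Finalize();                        // shut down the MPI runtime
        }
    }

Such a program is typically launched through the MPI library's process launcher, for example "mpirun -np 2 java PingPong"; the exact launcher name and flags vary by MPI library and site configuration.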
Thanks,

The High-Performance Big Data (HiBD) Team
http://hibd.cse.ohio-state.edu

PS: The number of organizations using the HiBD stacks has crossed 360 (from 39 countries). Similarly, the number of downloads from the HiBD site has crossed 47,700. The HiBD team would like to thank all its users and organizations!!