From subramoni.1 at osu.edu  Mon Jul 10 18:35:21 2023
From: subramoni.1 at osu.edu (Subramoni, Hari)
Date: Mon, 10 Jul 2023 22:35:21 +0000
Subject: [Mvapich-discuss] Announcing the release of OSU Micro-Benchmarks (OMB) 7.2
Message-ID:

The MVAPICH team is pleased to announce the release of OSU Micro-Benchmarks
(OMB) 7.2. This release introduces support for MPI Sessions in various
benchmarks. It also adds support for MPI_IN_PLACE and an option to rotate
the root for various collective benchmarks.

Please note that OMB is also available through the Spack package manager.
System administrators and users of OMB can now install the benchmarks on
their systems using Spack.

The new features, enhancements, and bug fixes for OSU Micro-Benchmarks
(OMB) 7.2 are listed here:

* New Features & Enhancements (since 7.1)
    - Add MPI-4 Sessions-based initialization support to the following benchmarks
        * Point-to-point benchmarks supported
            osu_bibw, osu_bw, osu_mbw_mr, osu_latency, osu_multi_lat,
            osu_latency_mp, osu_latency_mt, osu_bw_persistent,
            osu_bibw_persistent, osu_latency_persistent
        * Blocking collective benchmarks supported
            osu_allgather, osu_allgatherv, osu_alltoall, osu_allreduce,
            osu_alltoallv, osu_alltoallw, osu_bcast, osu_barrier, osu_gather,
            osu_gatherv, osu_reduce, osu_reduce_scatter, osu_scatter,
            osu_scatterv
        * Non-blocking collective benchmarks supported
            osu_iallgather, osu_iallgatherv, osu_iallreduce, osu_ialltoall,
            osu_ialltoallv, osu_ialltoallw, osu_ibcast, osu_ibarrier,
            osu_igather, osu_igatherv, osu_ireduce, osu_iscatter,
            osu_iscatterv, osu_ireduce_scatter
        * Neighborhood benchmarks supported
            osu_neighbor_allgather, osu_neighbor_allgatherv,
            osu_neighbor_alltoall, osu_neighbor_alltoallv,
            osu_neighbor_alltoallw, osu_ineighbor_allgather,
            osu_ineighbor_allgatherv, osu_ineighbor_alltoall,
            osu_ineighbor_alltoallv, osu_ineighbor_alltoallw
        * Startup benchmarks supported
            osu_init
    - Add MPI_IN_PLACE support for the following blocking and non-blocking collectives
        * Blocking benchmarks supported
            osu_allgather, osu_allgatherv, osu_alltoall, osu_allreduce,
            osu_alltoallv, osu_alltoallw, osu_gather, osu_gatherv, osu_reduce,
            osu_reduce_scatter, osu_scatter, osu_scatterv
        * Non-blocking benchmarks supported
            osu_iallgather, osu_iallgatherv, osu_iallreduce, osu_ialltoall,
            osu_ialltoallv, osu_ialltoallw, osu_igather, osu_igatherv,
            osu_ireduce, osu_iscatter, osu_iscatterv, osu_ireduce_scatter
    - Add an option to set the root rank for rooted blocking and non-blocking collectives
        * Blocking benchmarks supported
            osu_gather, osu_gatherv, osu_reduce, osu_scatter, osu_scatterv
        * Non-blocking benchmarks supported
            osu_igather, osu_igatherv, osu_ireduce, osu_iscatter, osu_iscatterv

* Bug Fixes
    - Fix a memory leak in point-to-point benchmarks when validation is enabled
        * Thanks to Shi Jin @Amazon for the report and patch
    - Fix a missing '#' formatting bug in the osu_ibarrier header
        * Thanks to Nick Hagerty @ORNL for the report

For downloading OMB 7.2 and the associated README instructions, please
visit the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning, patches,
and enhancements are welcome. Please post them to the mvapich-discuss
mailing list (mvapich-discuss at lists.osu.edu).

Thanks,

The MVAPICH Team

PS: We are also happy to inform you that the number of organizations using
MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325
worldwide (in 90 countries). The number of downloads from the MVAPICH site
has crossed 1,689,000 (1.68 million). The MVAPICH team would like to thank
all its users and organizations!!
From panda at cse.ohio-state.edu  Sat Jul 15 09:57:49 2023
From: panda at cse.ohio-state.edu (Panda, Dhabaleswar)
Date: Sat, 15 Jul 2023 13:57:49 +0000
Subject: [Mvapich-discuss] Join the MVAPICH team for multiple events at PEARC '23
In-Reply-To:
References:
Message-ID:

The MVAPICH team members will be participating in multiple events at the
PEARC '23 conference, to be held in Oregon, USA, during July 23-27, 2023.
More details of the events are provided at:

https://mvapich.cse.ohio-state.edu/conference/937/talks/

Join us for these presentations and interact with the project team members!!

Thanks,

The MVAPICH Team

From rrahaman6 at gatech.edu  Wed Jul 19 14:50:23 2023
From: rrahaman6 at gatech.edu (Rahaman, Ronald O)
Date: Wed, 19 Jul 2023 18:50:23 +0000
Subject: [Mvapich-discuss] Mellanox OFED version compatibility
Message-ID:

Hi all,

We're moving some of our nodes to RHEL 9 soon, and we want to make sure we
install compatible Mellanox OFED drivers. The available versions of
MLNX_OFED are: 5.4, 5.8, and 23.04. Which versions does MVAPICH2 2.3.x
support?

Many thanks,
Ron

--------
Ron Rahaman
Research Scientist II, Research Software Engineer
Partnership for an Advanced Computing Environment (PACE)
Georgia Institute of Technology

From panda at cse.ohio-state.edu  Wed Jul 19 20:40:10 2023
From: panda at cse.ohio-state.edu (Panda, Dhabaleswar)
Date: Thu, 20 Jul 2023 00:40:10 +0000
Subject: [Mvapich-discuss] Announcing the release of MVAPICH-Plus 3.0a2
In-Reply-To:
References:
Message-ID:

The MVAPICH team is pleased to announce the release of MVAPICH-Plus 3.0a2.
The new MVAPICH-Plus series is an advanced version of the MVAPICH MPI
library. It is targeted to support unified MVAPICH2-GDR and MVAPICH2-X
features, and to provide optimized support for modern platforms (CPUs,
GPUs, and interconnects) for HPC, Deep Learning, Machine Learning, Big
Data, and Data Science applications.
The major features and enhancements available in MVAPICH-Plus 3.0a2 are as follows:

- Based on MVAPICH 3.0
- Support for various high-performance communication fabrics
    - InfiniBand, Slingshot-10/11, Omni-Path, OPX, RoCE, and Ethernet
- Support for a naive CPU staging approach for small-message collectives
- Tuned naive limits for the following systems
    - Pitzer at OSC, Owens at OSC, Ascend at OSC, Frontera at TACC,
      Lonestar6 at TACC, ThetaGPU at ALCF, Polaris at ALCF, Tioga at LLNL
- Initial support for blocking collectives on NVIDIA and AMD GPUs
    - Allgather, Allgatherv, Allreduce, Alltoall, Alltoallv, Bcast, Gather,
      Gatherv, Reduce, Reduce_local, Reduce_scatter, Reduce_scatter_block,
      Scatter, Scatterv
- Initial support for non-blocking GPU collectives on NVIDIA and AMD GPUs
    - Iallgather, Iallgatherv, Iallreduce, Ialltoall, Ialltoallv, Ibcast,
      Igather, Igatherv, Ireduce, Ireduce_scatter, Iscatter, Iscatterv
- Initial support for blocking GPU-to-GPU point-to-point operations on NVIDIA and AMD GPUs
    - Send, Recv
- Alpha support for non-blocking GPU-to-GPU point-to-point operations on NVIDIA and AMD GPUs
    - Isend, Irecv
- Tested with
    - Various HPC applications, mini-applications, and benchmarks
    - MPI4cuML (a custom cuML package with MPI support)
- Tested with CUDA <= 11.6 and CUDA 12.0
- Tested with ROCm <= 5.6.0

For downloading the MVAPICH-Plus 3.0a2 library and the associated user
guide, please visit the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning, patches,
and enhancements are welcome. Please post them to the mvapich-discuss
mailing list (mvapich-discuss at lists.osu.edu).

Thanks,

The MVAPICH Team

PS: We are also happy to inform you that the number of organizations using
MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325
worldwide (in 90 countries).
The number of downloads from the MVAPICH site has crossed 1,691,000 (1.69
million). The MVAPICH team would like to thank all its users and
organizations!!

From panda at cse.ohio-state.edu  Thu Jul 27 23:22:08 2023
From: panda at cse.ohio-state.edu (Panda, Dhabaleswar)
Date: Fri, 28 Jul 2023 03:22:08 +0000
Subject: [Mvapich-discuss] Online attendance for the MUG '23 conference is now free
Message-ID:

The MVAPICH User Group (MUG) conference organizers have put together an
excellent program for the 11th annual MUG '23 conference. The conference
will be held during August 21-23, 2023 as a hybrid event. The preliminary
program is available from

http://mug.mvapich.cse.ohio-state.edu/program/

Thanks to multiple organizations (Broadcom, Cornelis Networks, US National
Science Foundation (NSF), NVIDIA, Ohio Supercomputer Center, Ohio State
University, Ohio State University/Translational Data Analytics Institute,
ParaTools, and X-ScaleSolutions) for extending sponsorships to this
conference!! These sponsorships have helped us waive the registration fee
for all online attendees (non-speakers). All interested parties (faculty,
students, engineers, software developers, managers, etc.) can now attend
the conference online for free.

Please register for the conference using the registration link available from

http://mug.mvapich.cse.ohio-state.edu/registration/

The Zoom link will be sent to all registered attendees a few days before
the conference starts. If you have any questions, please send a note to
mug at cse.ohio-state.edu.

Thanks,

The MUG '23 Organizers
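As the OMB 7.2 announcement above notes, OMB can be installed through the Spack package manager. A typical sequence, assuming a working Spack setup and that the 7.2 recipe is present in your Spack checkout (the package name osu-micro-benchmarks is the one registered in the Spack repository), is:

```shell
# Install OSU Micro-Benchmarks 7.2 via Spack
spack install osu-micro-benchmarks@7.2

# Put the benchmark binaries (osu_latency, osu_allreduce, ...) on PATH
spack load osu-micro-benchmarks
```

Spack builds OMB against whichever MPI provider is active in the environment, so the same recipe can be reused across MVAPICH installations.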
It also adds support for MPI_IN_PLACE and an option to rotate the root for various collective benchmarks. Please note that OMB is also available through the Spack package manager. Now the system administrators and users of OSU Micro-Benchmarks (OMB) will be able to install these libraries on their systems using Spack. The new features, enhancements, and bug fixes for OSU Micro-Benchmarks (OMB) 7.2 are listed here: * New Features & Enhancements (since 7.1) - Add MPI-4 sessions based initialization support to following benchmarks * Point-to-point benchmarks supported * osu_bibw, osu_bw, osu_mbw_mr, osu_latency, osu_multi_lat, * osu_latency_mp, osu_latency_mt, osu_bw_persistent, * osu_bibw_persistent, osu_latency_persistent * Blocking benchmarks supported * osu_allgather, osu_allgatherv, osu_alltoall, osu_allreduce, * osu_alltoallv, osu_alltoallw, osu_bcast, osu_barrier, osu_gather, * osu_gatherv, osu_reduce, osu_reduce_scatter, osu_scatter, * osu_scatterv * Non-Blocking benchmarks supported * osu_iallgather, osu_iallgatherv, osu_iallreduce, osu_ialltoall, * osu_ialltoallv, osu_ialltoallw, osu_ibcast, osu_ibarrier, * osu_igather, osu_igatherv, osu_ireduce, osu_iscatter, * osu_iscatterv, osu_ireduce_scatter * Neighborhood benchmarks * osu_neighbor_allgather, osu_neighbor_allgatherv, * osu_neighbor_alltoall,osu_neighbor_alltoallv, * osu_neighbor_alltoallw, osu_ineighbor_allgatherv, * osu_ineighbor_allgatherv, osu_ineighbor_alltoall, * osu_ineighbor_alltoallv, osu_ineighbor_alltoallw * Startup benchmarks * osu_init - Add MPI_IN_PLACE support for following blocking and non-blocking collectives * Blocking benchmarks supported * osu_allgather, osu_allgatherv, osu_alltoall, osu_allreduce, * osu_alltoallv, osu_alltoallw, osu_gather, osu_gatherv, osu_reduce, * osu_reduce_scatter, osu_scatter, osu_scatterv * Non-Blocking benchmarks supported * osu_iallgather, osu_iallgatherv, osu_iallreduce, osu_ialltoall, * osu_ialltoallv, osu_ialltoallw, osu_igather, osu_igatherv, * 
osu_ireduce, osu_iscatter, osu_iscatterv, osu_ireduce_scatter - Add an option to set root rank for rooted blocking and non-blocking collectives * Blocking benchmarks supported * osu_gather, osu_gatherv, osu_reduce, osu_scatter, * osu_scatterv * Non-Blocking benchmarks supported * osu_igather, osu_igatherv, osu_ireduce, osu_iscatter, * osu_iscatterv * Bug Fixes - Fixed memory leak in point-to-point benchmarks when validation is enabled. * Thanks to Shi Jin @Amazon for report and patch. - Fixed missing '#' formatting bug in osu_ibarrier header. * Thanks to Nick Hagerty @ORNL for report. For downloading OMB 7.2 and associated README instructions, please visit the following URL: http://mvapich.cse.ohio-state.edu All questions, feedback, bug reports, hints for performance tuning, patches, and enhancements are welcome. Please post it to the mvapich-discuss mailing list (mvapich-discuss at lists.osu.edu). Thanks, The MVAPICH Team PS: We are also happy to inform you that the number of organizations using MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325 worldwide (in 90 countries). The number of downloads from the MVAPICH site has crossed 1,689,000 (1.68 million). The MVAPICH team would like to thank all its users and organizations!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From panda at cse.ohio-state.edu Sat Jul 15 09:57:49 2023 From: panda at cse.ohio-state.edu (Panda, Dhabaleswar) Date: Sat, 15 Jul 2023 13:57:49 +0000 Subject: [Mvapich-discuss] Join the MVAPICH team for multiple events at PEARC '23 In-Reply-To: References: Message-ID: The MVAPICH team members will be participating in multiple events at the PEARC '23 conference, to be held in Oregon, USA, during July 23-27, 2023. More details of the events are provided at: https://mvapich.cse.ohio-state.edu/conference/937/talks/ Join us for these presentations and interact with the project team members!! 
Thanks, The MVAPICH Team From rrahaman6 at gatech.edu Wed Jul 19 14:50:23 2023 From: rrahaman6 at gatech.edu (Rahaman, Ronald O) Date: Wed, 19 Jul 2023 18:50:23 +0000 Subject: [Mvapich-discuss] Mellanox OFED version compatibility Message-ID: Hi all, We?re moving some of our nodes to RHEL 9 soon, and we want to make sure we install compatible Mellanox OFED drivers. The available versions of MLNX_OFED are: 5.4, 5.8, and 23.04. Which versions does MVAPICH2 2.3.x support? Many thanks, Ron -------- Ron Rahaman Research Scientist II, Research Software Engineer Partnership for an Advanced Computing Environment (PACE) Georgia Institute of Technology -------------- next part -------------- An HTML attachment was scrubbed... URL: From panda at cse.ohio-state.edu Wed Jul 19 20:40:10 2023 From: panda at cse.ohio-state.edu (Panda, Dhabaleswar) Date: Thu, 20 Jul 2023 00:40:10 +0000 Subject: [Mvapich-discuss] Announcing the release of MVAPICH-Plus 3.0a2 In-Reply-To: References: Message-ID: The MVAPICH team is pleased to announce the release of MVAPICH-Plus 3.0a2. The new MVAPICH-Plus series is an advanced version of the MVAPICH MPI library. It is targeted to support unified MVAPICH2-GDR and MVAPICH2-X features. It is also targeted to provide optimized support for modern platforms (CPU, GPU, and interconnects) for HPC, Deep Learning, Machine Learning, Big Data and Data Science applications. 
The major features and enhancements available in MVAPICH-Plus 3.0a2 are as follows: - Based on MVAPICH 3.0 - Support for various high-performance communication fabrics - InfiniBand, Slingshot-10/11, Omni-Path, OPX, RoCE, and Ethernet - Support naive CPU staging approach for collectives for small messages - Tune naive limits for the following systems - Pitzer at OSC, Owens at OSC, Ascend at OSC, Frontera at TACC, Lonestar6 at TACC, ThetaGPU at ALCF, Polaris at ALCF, Tioga at LLNL - Initial support for blocking collectives on NVIDIA and AMD GPUs - Reduce_local, Reduce_scatter_block - Initial support for blocking collectives on NVIDIA and AMD GPUs - Allgather, Allgatherv, Allreduce, Alltoall, Alltoallv, Bcast, Gather, Gatherv, Reduce, Reduce_scatter, Scatter, Scatterv - Initial support for non-blocking GPU collectives on NVIDIA and AMD GPUs - Iallgather, Iallgatherv, Iallreduce, Ialltoall, Ialltoallv, Ibcast, Igather, Igatherv, Ireduce, Ireduce_scatter, Iscatter, Iscatterv - Initial support for blocking GPU to GPU point-to-point operations on NVIDIA and AMD GPUs - Send, Recv - Alpha support for non-blocking GPU to GPU point-to-point operations on NVIDIA and AMD GPUs - Isend, Irecv - Tested with - Various HPC applications, mini-applications, and benchmarks - MPI4cuML (a custom cuML package with MPI support) - Tested with CUDA <= 11.6 and CUDA 12.0 - Tested with ROCM <= 5.6.0 For downloading MVAPICH-Plus 3.0a2 library and associated user guide, please visit the following URL: http://mvapich.cse.ohio-state.edu All questions, feedback, bug reports, hints for performance tuning, patches, and enhancements are welcome. Please post it to the mvapich-discuss mailing list (mvapich-discuss at lists.osu.edu). Thanks, The MVAPICH Team PS: We are also happy to inform that the number of organizations using MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325 worldwide (in 90 countries). 
The number of downloads from the MVAPICH site has crossed 1,691,000 (1.69 million). The MVAPICH team would like to thank all its users and organizations!! From panda at cse.ohio-state.edu Thu Jul 27 23:22:08 2023 From: panda at cse.ohio-state.edu (Panda, Dhabaleswar) Date: Fri, 28 Jul 2023 03:22:08 +0000 Subject: [Mvapich-discuss] Online attendance for the MUG '23 conference is now free Message-ID: The MVAPICH User Group (MUG) conference organizers have put together an excellent program for the 11th annual MUG '23 conference. The conference will be held during August 21-23, 2023. It will be a hybrid event. The preliminary program is available from http://mug.mvapich.cse.ohio-state.edu/program/ Thanks to multiple organizations (Broadcom, Cornelis Networks, US-National Science Foundation (NSF), NVIDIA, Ohio Supercomputer Center, Ohio State University, Ohio State University/Translational Data Analytics Institute, ParaTools, and X-ScaleSolutions) for extending sponsorships to this conference!! These sponsorships have helped us to waive the registration fee for all online attendees (non-speakers). All interested parties (faculty, students, engineers, software developers, managers, etc.) can now attend the conference online for free. Please register for the conference using the registration link available from http://mug.mvapich.cse.ohio-state.edu/registration/ The Zoom link will be sent to all registered attendees a few days before the conference starts. If you have any questions, please send a note to mug at cse.ohio-state.edu. Thanks, The MUG '23 Organizers From subramoni.1 at osu.edu Mon Jul 10 18:35:21 2023 From: subramoni.1 at osu.edu (Subramoni, Hari) Date: Mon, 10 Jul 2023 22:35:21 +0000 Subject: [Mvapich-discuss] Announcing the release of OSU Micro-Benchmarks (OMB) 7.2 Message-ID: The MVAPICH team is pleased to announce the release of OSU Micro-Benchmarks (OMB) 7.2. This release introduces support for MPI Sessions for various benchmarks. 
It also adds support for MPI_IN_PLACE and an option to rotate the root for various collective benchmarks. Please note that OMB is also available through the Spack package manager. Now the system administrators and users of OSU Micro-Benchmarks (OMB) will be able to install these libraries on their systems using Spack. The new features, enhancements, and bug fixes for OSU Micro-Benchmarks (OMB) 7.2 are listed here: * New Features & Enhancements (since 7.1) - Add MPI-4 sessions based initialization support to following benchmarks * Point-to-point benchmarks supported * osu_bibw, osu_bw, osu_mbw_mr, osu_latency, osu_multi_lat, * osu_latency_mp, osu_latency_mt, osu_bw_persistent, * osu_bibw_persistent, osu_latency_persistent * Blocking benchmarks supported * osu_allgather, osu_allgatherv, osu_alltoall, osu_allreduce, * osu_alltoallv, osu_alltoallw, osu_bcast, osu_barrier, osu_gather, * osu_gatherv, osu_reduce, osu_reduce_scatter, osu_scatter, * osu_scatterv * Non-Blocking benchmarks supported * osu_iallgather, osu_iallgatherv, osu_iallreduce, osu_ialltoall, * osu_ialltoallv, osu_ialltoallw, osu_ibcast, osu_ibarrier, * osu_igather, osu_igatherv, osu_ireduce, osu_iscatter, * osu_iscatterv, osu_ireduce_scatter * Neighborhood benchmarks * osu_neighbor_allgather, osu_neighbor_allgatherv, * osu_neighbor_alltoall,osu_neighbor_alltoallv, * osu_neighbor_alltoallw, osu_ineighbor_allgatherv, * osu_ineighbor_allgatherv, osu_ineighbor_alltoall, * osu_ineighbor_alltoallv, osu_ineighbor_alltoallw * Startup benchmarks * osu_init - Add MPI_IN_PLACE support for following blocking and non-blocking collectives * Blocking benchmarks supported * osu_allgather, osu_allgatherv, osu_alltoall, osu_allreduce, * osu_alltoallv, osu_alltoallw, osu_gather, osu_gatherv, osu_reduce, * osu_reduce_scatter, osu_scatter, osu_scatterv * Non-Blocking benchmarks supported * osu_iallgather, osu_iallgatherv, osu_iallreduce, osu_ialltoall, * osu_ialltoallv, osu_ialltoallw, osu_igather, osu_igatherv, * 
osu_ireduce, osu_iscatter, osu_iscatterv, osu_ireduce_scatter - Add an option to set root rank for rooted blocking and non-blocking collectives * Blocking benchmarks supported * osu_gather, osu_gatherv, osu_reduce, osu_scatter, * osu_scatterv * Non-Blocking benchmarks supported * osu_igather, osu_igatherv, osu_ireduce, osu_iscatter, * osu_iscatterv * Bug Fixes - Fixed memory leak in point-to-point benchmarks when validation is enabled. * Thanks to Shi Jin @Amazon for report and patch. - Fixed missing '#' formatting bug in osu_ibarrier header. * Thanks to Nick Hagerty @ORNL for report. For downloading OMB 7.2 and associated README instructions, please visit the following URL: http://mvapich.cse.ohio-state.edu All questions, feedback, bug reports, hints for performance tuning, patches, and enhancements are welcome. Please post it to the mvapich-discuss mailing list (mvapich-discuss at lists.osu.edu). Thanks, The MVAPICH Team PS: We are also happy to inform you that the number of organizations using MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325 worldwide (in 90 countries). The number of downloads from the MVAPICH site has crossed 1,689,000 (1.68 million). The MVAPICH team would like to thank all its users and organizations!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From panda at cse.ohio-state.edu Sat Jul 15 09:57:49 2023 From: panda at cse.ohio-state.edu (Panda, Dhabaleswar) Date: Sat, 15 Jul 2023 13:57:49 +0000 Subject: [Mvapich-discuss] Join the MVAPICH team for multiple events at PEARC '23 In-Reply-To: References: Message-ID: The MVAPICH team members will be participating in multiple events at the PEARC '23 conference, to be held in Oregon, USA, during July 23-27, 2023. More details of the events are provided at: https://mvapich.cse.ohio-state.edu/conference/937/talks/ Join us for these presentations and interact with the project team members!! 
Thanks, The MVAPICH Team From rrahaman6 at gatech.edu Wed Jul 19 14:50:23 2023 From: rrahaman6 at gatech.edu (Rahaman, Ronald O) Date: Wed, 19 Jul 2023 18:50:23 +0000 Subject: [Mvapich-discuss] Mellanox OFED version compatibility Message-ID: Hi all, We?re moving some of our nodes to RHEL 9 soon, and we want to make sure we install compatible Mellanox OFED drivers. The available versions of MLNX_OFED are: 5.4, 5.8, and 23.04. Which versions does MVAPICH2 2.3.x support? Many thanks, Ron -------- Ron Rahaman Research Scientist II, Research Software Engineer Partnership for an Advanced Computing Environment (PACE) Georgia Institute of Technology -------------- next part -------------- An HTML attachment was scrubbed... URL: From panda at cse.ohio-state.edu Wed Jul 19 20:40:10 2023 From: panda at cse.ohio-state.edu (Panda, Dhabaleswar) Date: Thu, 20 Jul 2023 00:40:10 +0000 Subject: [Mvapich-discuss] Announcing the release of MVAPICH-Plus 3.0a2 In-Reply-To: References: Message-ID: The MVAPICH team is pleased to announce the release of MVAPICH-Plus 3.0a2. The new MVAPICH-Plus series is an advanced version of the MVAPICH MPI library. It is targeted to support unified MVAPICH2-GDR and MVAPICH2-X features. It is also targeted to provide optimized support for modern platforms (CPU, GPU, and interconnects) for HPC, Deep Learning, Machine Learning, Big Data and Data Science applications. 
The major features and enhancements available in MVAPICH-Plus 3.0a2 are as follows: - Based on MVAPICH 3.0 - Support for various high-performance communication fabrics - InfiniBand, Slingshot-10/11, Omni-Path, OPX, RoCE, and Ethernet - Support naive CPU staging approach for collectives for small messages - Tune naive limits for the following systems - Pitzer at OSC, Owens at OSC, Ascend at OSC, Frontera at TACC, Lonestar6 at TACC, ThetaGPU at ALCF, Polaris at ALCF, Tioga at LLNL - Initial support for blocking collectives on NVIDIA and AMD GPUs - Reduce_local, Reduce_scatter_block - Initial support for blocking collectives on NVIDIA and AMD GPUs - Allgather, Allgatherv, Allreduce, Alltoall, Alltoallv, Bcast, Gather, Gatherv, Reduce, Reduce_scatter, Scatter, Scatterv - Initial support for non-blocking GPU collectives on NVIDIA and AMD GPUs - Iallgather, Iallgatherv, Iallreduce, Ialltoall, Ialltoallv, Ibcast, Igather, Igatherv, Ireduce, Ireduce_scatter, Iscatter, Iscatterv - Initial support for blocking GPU to GPU point-to-point operations on NVIDIA and AMD GPUs - Send, Recv - Alpha support for non-blocking GPU to GPU point-to-point operations on NVIDIA and AMD GPUs - Isend, Irecv - Tested with - Various HPC applications, mini-applications, and benchmarks - MPI4cuML (a custom cuML package with MPI support) - Tested with CUDA <= 11.6 and CUDA 12.0 - Tested with ROCM <= 5.6.0 For downloading MVAPICH-Plus 3.0a2 library and associated user guide, please visit the following URL: http://mvapich.cse.ohio-state.edu All questions, feedback, bug reports, hints for performance tuning, patches, and enhancements are welcome. Please post it to the mvapich-discuss mailing list (mvapich-discuss at lists.osu.edu). Thanks, The MVAPICH Team PS: We are also happy to inform that the number of organizations using MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325 worldwide (in 90 countries). 
The number of downloads from the MVAPICH site has crossed 1,691,000 (1.69 million). The MVAPICH team would like to thank all its users and organizations!! From panda at cse.ohio-state.edu Thu Jul 27 23:22:08 2023 From: panda at cse.ohio-state.edu (Panda, Dhabaleswar) Date: Fri, 28 Jul 2023 03:22:08 +0000 Subject: [Mvapich-discuss] Online attendance for the MUG '23 conference is now free Message-ID: The MVAPICH User Group (MUG) conference organizers have put together an excellent program for the 11th annual MUG '23 conference. The conference will be held during August 21-23, 2023. It will be a hybrid event. The preliminary program is available from http://mug.mvapich.cse.ohio-state.edu/program/ Thanks to multiple organizations (Broadcom, Cornelis Networks, US-National Science Foundation (NSF), NVIDIA, Ohio Supercomputer Center, Ohio State University, Ohio State University/Translational Data Analytics Institute, ParaTools, and X-ScaleSolutions) for extending sponsorships to this conference!! These sponsorships have helped us to waive the registration fee for all online attendees (non-speakers). All interested parties (faculty, students, engineers, software developers, managers, etc.) can now attend the conference online for free. Please register for the conference using the registration link available from http://mug.mvapich.cse.ohio-state.edu/registration/ The Zoom link will be sent to all registered attendees a few days before the conference starts. If you have any questions, please send a note to mug at cse.ohio-state.edu. Thanks, The MUG '23 Organizers From subramoni.1 at osu.edu Mon Jul 10 18:35:21 2023 From: subramoni.1 at osu.edu (Subramoni, Hari) Date: Mon, 10 Jul 2023 22:35:21 +0000 Subject: [Mvapich-discuss] Announcing the release of OSU Micro-Benchmarks (OMB) 7.2 Message-ID: The MVAPICH team is pleased to announce the release of OSU Micro-Benchmarks (OMB) 7.2. This release introduces support for MPI Sessions for various benchmarks. 
It also adds support for MPI_IN_PLACE and an option to rotate the root for various collective benchmarks. Please note that OMB is also available through the Spack package manager. Now the system administrators and users of OSU Micro-Benchmarks (OMB) will be able to install these libraries on their systems using Spack. The new features, enhancements, and bug fixes for OSU Micro-Benchmarks (OMB) 7.2 are listed here: * New Features & Enhancements (since 7.1) - Add MPI-4 sessions based initialization support to following benchmarks * Point-to-point benchmarks supported * osu_bibw, osu_bw, osu_mbw_mr, osu_latency, osu_multi_lat, * osu_latency_mp, osu_latency_mt, osu_bw_persistent, * osu_bibw_persistent, osu_latency_persistent * Blocking benchmarks supported * osu_allgather, osu_allgatherv, osu_alltoall, osu_allreduce, * osu_alltoallv, osu_alltoallw, osu_bcast, osu_barrier, osu_gather, * osu_gatherv, osu_reduce, osu_reduce_scatter, osu_scatter, * osu_scatterv * Non-Blocking benchmarks supported * osu_iallgather, osu_iallgatherv, osu_iallreduce, osu_ialltoall, * osu_ialltoallv, osu_ialltoallw, osu_ibcast, osu_ibarrier, * osu_igather, osu_igatherv, osu_ireduce, osu_iscatter, * osu_iscatterv, osu_ireduce_scatter * Neighborhood benchmarks * osu_neighbor_allgather, osu_neighbor_allgatherv, * osu_neighbor_alltoall,osu_neighbor_alltoallv, * osu_neighbor_alltoallw, osu_ineighbor_allgatherv, * osu_ineighbor_allgatherv, osu_ineighbor_alltoall, * osu_ineighbor_alltoallv, osu_ineighbor_alltoallw * Startup benchmarks * osu_init - Add MPI_IN_PLACE support for following blocking and non-blocking collectives * Blocking benchmarks supported * osu_allgather, osu_allgatherv, osu_alltoall, osu_allreduce, * osu_alltoallv, osu_alltoallw, osu_gather, osu_gatherv, osu_reduce, * osu_reduce_scatter, osu_scatter, osu_scatterv * Non-Blocking benchmarks supported * osu_iallgather, osu_iallgatherv, osu_iallreduce, osu_ialltoall, * osu_ialltoallv, osu_ialltoallw, osu_igather, osu_igatherv, * 
osu_ireduce, osu_iscatter, osu_iscatterv, osu_ireduce_scatter - Add an option to set root rank for rooted blocking and non-blocking collectives * Blocking benchmarks supported * osu_gather, osu_gatherv, osu_reduce, osu_scatter, * osu_scatterv * Non-Blocking benchmarks supported * osu_igather, osu_igatherv, osu_ireduce, osu_iscatter, * osu_iscatterv * Bug Fixes - Fixed memory leak in point-to-point benchmarks when validation is enabled. * Thanks to Shi Jin @Amazon for report and patch. - Fixed missing '#' formatting bug in osu_ibarrier header. * Thanks to Nick Hagerty @ORNL for report. For downloading OMB 7.2 and associated README instructions, please visit the following URL: http://mvapich.cse.ohio-state.edu All questions, feedback, bug reports, hints for performance tuning, patches, and enhancements are welcome. Please post it to the mvapich-discuss mailing list (mvapich-discuss at lists.osu.edu). Thanks, The MVAPICH Team PS: We are also happy to inform you that the number of organizations using MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325 worldwide (in 90 countries). The number of downloads from the MVAPICH site has crossed 1,689,000 (1.68 million). The MVAPICH team would like to thank all its users and organizations!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From panda at cse.ohio-state.edu Sat Jul 15 09:57:49 2023 From: panda at cse.ohio-state.edu (Panda, Dhabaleswar) Date: Sat, 15 Jul 2023 13:57:49 +0000 Subject: [Mvapich-discuss] Join the MVAPICH team for multiple events at PEARC '23 In-Reply-To: References: Message-ID: The MVAPICH team members will be participating in multiple events at the PEARC '23 conference, to be held in Oregon, USA, during July 23-27, 2023. More details of the events are provided at: https://mvapich.cse.ohio-state.edu/conference/937/talks/ Join us for these presentations and interact with the project team members!! 
Thanks,

The MVAPICH Team

From rrahaman6 at gatech.edu Wed Jul 19 14:50:23 2023
From: rrahaman6 at gatech.edu (Rahaman, Ronald O)
Date: Wed, 19 Jul 2023 18:50:23 +0000
Subject: [Mvapich-discuss] Mellanox OFED version compatibility
Message-ID:

Hi all,

We're moving some of our nodes to RHEL 9 soon, and we want to make sure we
install compatible Mellanox OFED drivers. The available versions of
MLNX_OFED are 5.4, 5.8, and 23.04. Which versions does MVAPICH2 2.3.x
support?

Many thanks,
Ron

--------
Ron Rahaman
Research Scientist II, Research Software Engineer
Partnership for an Advanced Computing Environment (PACE)
Georgia Institute of Technology

From panda at cse.ohio-state.edu Wed Jul 19 20:40:10 2023
From: panda at cse.ohio-state.edu (Panda, Dhabaleswar)
Date: Thu, 20 Jul 2023 00:40:10 +0000
Subject: [Mvapich-discuss] Announcing the release of MVAPICH-Plus 3.0a2
In-Reply-To:
References:
Message-ID:

The MVAPICH team is pleased to announce the release of MVAPICH-Plus 3.0a2.
The new MVAPICH-Plus series is an advanced version of the MVAPICH MPI
library. It is targeted to support unified MVAPICH2-GDR and MVAPICH2-X
features, and to provide optimized support for modern platforms (CPUs,
GPUs, and interconnects) for HPC, Deep Learning, Machine Learning, Big
Data, and Data Science applications.
The major features and enhancements available in MVAPICH-Plus 3.0a2 are as
follows:

- Based on MVAPICH 3.0
- Support for various high-performance communication fabrics
    - InfiniBand, Slingshot-10/11, Omni-Path, OPX, RoCE, and Ethernet
- Support for a naive CPU staging approach for collectives with small
  messages
    - Tuned naive limits for the following systems
        - Pitzer at OSC, Owens at OSC, Ascend at OSC, Frontera at TACC,
          Lonestar6 at TACC, ThetaGPU at ALCF, Polaris at ALCF,
          Tioga at LLNL
- Initial support for blocking collectives on NVIDIA and AMD GPUs
    - Allgather, Allgatherv, Allreduce, Alltoall, Alltoallv, Bcast,
      Gather, Gatherv, Reduce, Reduce_local, Reduce_scatter,
      Reduce_scatter_block, Scatter, Scatterv
- Initial support for non-blocking GPU collectives on NVIDIA and AMD GPUs
    - Iallgather, Iallgatherv, Iallreduce, Ialltoall, Ialltoallv, Ibcast,
      Igather, Igatherv, Ireduce, Ireduce_scatter, Iscatter, Iscatterv
- Initial support for blocking GPU-to-GPU point-to-point operations on
  NVIDIA and AMD GPUs
    - Send, Recv
- Alpha support for non-blocking GPU-to-GPU point-to-point operations on
  NVIDIA and AMD GPUs
    - Isend, Irecv
- Tested with
    - Various HPC applications, mini-applications, and benchmarks
    - MPI4cuML (a custom cuML package with MPI support)
    - CUDA <= 11.6 and CUDA 12.0
    - ROCm <= 5.6.0

For downloading the MVAPICH-Plus 3.0a2 library and associated user guide,
please visit the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning, patches,
and enhancements are welcome. Please post them to the mvapich-discuss
mailing list (mvapich-discuss at lists.osu.edu).

Thanks,

The MVAPICH Team

PS: We are also happy to inform you that the number of organizations using
MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325
worldwide (in 90 countries).
The number of downloads from the MVAPICH site has crossed 1,691,000 (1.69
million). The MVAPICH team would like to thank all its users and
organizations!!

From panda at cse.ohio-state.edu Thu Jul 27 23:22:08 2023
From: panda at cse.ohio-state.edu (Panda, Dhabaleswar)
Date: Fri, 28 Jul 2023 03:22:08 +0000
Subject: [Mvapich-discuss] Online attendance for the MUG '23 conference is now free
Message-ID:

The MVAPICH User Group (MUG) conference organizers have put together an
excellent program for the 11th annual MUG '23 conference. The conference
will be held during August 21-23, 2023, as a hybrid event. The preliminary
program is available from:

http://mug.mvapich.cse.ohio-state.edu/program/

Thanks to multiple organizations (Broadcom, Cornelis Networks, US National
Science Foundation (NSF), NVIDIA, Ohio Supercomputer Center, Ohio State
University, Ohio State University/Translational Data Analytics Institute,
ParaTools, and X-ScaleSolutions) for extending sponsorships to this
conference!! These sponsorships have helped us waive the registration fee
for all online attendees (non-speakers). All interested parties (faculty,
students, engineers, software developers, managers, etc.) can now attend
the conference online for free.

Please register for the conference using the registration link available
from:

http://mug.mvapich.cse.ohio-state.edu/registration/

The Zoom link will be sent to all registered attendees a few days before
the conference starts. If you have any questions, please send a note to
mug at cse.ohio-state.edu.

Thanks,

The MUG '23 Organizers