From shterenlikht at par-tec.com  Mon Oct  2 03:35:38 2023
From: shterenlikht at par-tec.com (Shterenlikht, Anton)
Date: Mon, 2 Oct 2023 07:35:38 +0000
Subject: [Mvapich-discuss] build with pmix fails with - fatal error: pmi.h: No such file or directory
In-Reply-To:
References: <3A248B5A-453B-485F-B48D-4222111CF50D@par-tec.com>
Message-ID:

Hi Nat

Any progress?

Anton

> On 12 Sep 2023, at 14:41, Shineman, Nat wrote:
>
> Hi Anton,
>
> Thanks for reporting this issue. It looks like this is an issue with how we handle Slurm's PMIx installation. We will take a look at this and get back to you with an update.
>
> Thanks,
> Nat
>
> From: Mvapich-discuss on behalf of Shterenlikht, Anton via Mvapich-discuss
> Sent: Friday, September 8, 2023 08:29
> To: mvapich-discuss at lists.osu.edu
> Subject: [Mvapich-discuss] build with pmix fails with - fatal error: pmi.h: No such file or directory
>
> Hello
>
> I configure mvapich 3.0b with:
>
> --with-pm=slurm \
> --with-pmi=pmix \
> --with-pmix= \
> --with-hwloc=
>
> and get:
>
> ./src/include/upmi.h:20:10: fatal error: pmi.h: No such file or directory
>
> The pmix package does not install "pmi.h", only "pmix.h".
>
> Why is pmi.h needed in upmi.h?
>
> 18 #ifdef USE_PMIX_API
> 19 #include <pmix.h>
> 20 #include <pmi.h>
>
> And if pmi.h is really needed when using pmix, why not check for it at the configure stage?
>
> I see only these checks:
>
> 1164 checking pmix.h usability... yes
> 1165 checking pmix.h presence... yes
> 1166 checking for pmix.h... yes
> 1167 checking for PMIx_Init in -lpmix... yes
>
> How can I build mvapich with pmix?
>
> Thank you
>
> Anton

-------------- next part --------------
A non-text attachment was scrubbed...
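[Editorial note: for readers hitting the same error, the configure invocation from the report is reproduced below together with one possible, untested workaround. The `PMIX_DIR`, `HWLOC_DIR`, and `SLURM_DIR` variables are placeholders for site-specific paths (the report elides the actual values); the `CPPFLAGS` line is a guess based on the fact that Slurm installations typically ship a `pmi.h` that could satisfy the unconditional include in upmi.h. This is a sketch, not a confirmed fix.]

```shell
# Placeholder paths -- substitute your site's installation prefixes.
PMIX_DIR=/opt/pmix        # hypothetical
HWLOC_DIR=/opt/hwloc      # hypothetical
SLURM_DIR=/opt/slurm      # hypothetical

# Configure flags as given in the report, plus a speculative CPPFLAGS
# pointing at a directory that provides pmi.h (e.g. Slurm's headers),
# since upmi.h includes pmi.h unconditionally under USE_PMIX_API.
./configure \
    --with-pm=slurm \
    --with-pmi=pmix \
    --with-pmix="$PMIX_DIR" \
    --with-hwloc="$HWLOC_DIR" \
    CPPFLAGS="-I${SLURM_DIR}/include/slurm"
```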
From subramoni.1 at osu.edu  Tue Oct 31 10:42:15 2023
From: subramoni.1 at osu.edu (Subramoni, Hari)
Date: Tue, 31 Oct 2023 14:42:15 +0000
Subject: [Mvapich-discuss] Announcing the release of OSU Micro-Benchmarks (OMB) 7.3
Message-ID:

The MVAPICH team is pleased to announce the release of OSU Micro-Benchmarks (OMB) 7.3. This release introduces support for:

* AMD RCCL-based benchmarks for point-to-point and collective operations.
* Benchmarks to evaluate the performance of MPI persistent collectives for various collective operations.
* New metrics to evaluate benchmark performance.

We would also like to acknowledge and thank our community of users for their contributions to the OMB project in the form of bug reports and patches!

Please note that OMB is also available through the Spack package manager, so system administrators and users can now install it on their systems using Spack.
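[Editorial note: a minimal sketch of the Spack route mentioned above. It assumes Spack is already installed and configured on the system, and that the Spack recipe (the package is named `osu-micro-benchmarks` in Spack's builtin repository) has picked up the 7.3 release; drop the version pin if it has not.]

```shell
# Install OMB via Spack, pinning the version announced here (assumes
# Spack itself is set up and the 7.3 recipe is available):
spack install osu-micro-benchmarks@7.3

# Make the benchmark binaries (osu_latency, osu_allreduce, ...) available
# in the current shell environment:
spack load osu-micro-benchmarks@7.3
```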
The new features, enhancements, and bug fixes for OSU Micro-Benchmarks (OMB) 7.3 are listed here:

* New Features & Enhancements
    - Add support for RCCL benchmarks
        * Thanks to Marcel Koch @KIT for the initial patch
        * Point-to-point benchmarks supported:
            osu_xccl_bibw, osu_xccl_bw, osu_xccl_latency
        * Collective benchmarks supported:
            osu_xccl_allgather, osu_xccl_allreduce, osu_xccl_alltoall,
            osu_xccl_bcast, osu_xccl_reduce, osu_xccl_reduce_scatter
    - Add new benchmarks for persistent collectives:
            osu_allgather_persistent, osu_allgatherv_persistent,
            osu_allreduce_persistent, osu_alltoall_persistent,
            osu_alltoallv_persistent, osu_alltoallw_persistent,
            osu_barrier_persistent, osu_bcast_persistent,
            osu_gather_persistent, osu_gatherv_persistent,
            osu_reduce_persistent, osu_reduce_scatter_persistent,
            osu_scatter_persistent, osu_scatterv_persistent
    - Support new metrics to evaluate benchmark performance:
        * 50th percentile tail latency/bandwidth
        * 95th percentile tail latency/bandwidth
        * 99th percentile tail latency/bandwidth

* Bug Fixes
    - Fixed an acknowledgement-buffer memory allocation issue in bandwidth-related benchmarks
        * Thanks to Emmanuel BRELLE @Eviden for the report and patch
    - Fixed a validation issue in osu_fop_latency
        * Thanks to Coey Minear @HPE for the report and patch
    - Added support for managed buffers in one-sided collective and one-sided point-to-point benchmarks

For downloading OMB 7.3 and the associated README instructions, please visit the following URL:

http://mvapich.cse.ohio-state.edu

All questions, feedback, bug reports, hints for performance tuning, patches, and enhancements are welcome. Please post them to the mvapich-discuss mailing list (mvapich-discuss at lists.osu.edu).

Thanks,

The MVAPICH Team

PS: We are also happy to inform you that the number of organizations using MVAPICH2 libraries (and registered at the MVAPICH site) has crossed 3,325 worldwide (in 90 countries).
The number of downloads from the MVAPICH site has crossed 1,731,000 (1.73 million). The MVAPICH team would like to thank all its users and organizations!!
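[Editorial note: the 50th/95th/99th percentile metrics announced above report tail behavior rather than averages. As a rough illustration of what a nearest-rank percentile over a set of latency samples means -- this is not OMB's implementation, and the sample values are invented -- here is a sketch using plain POSIX tools. Note how one outlier sample dominates p95 and p99 while barely moving p50.]

```shell
#!/bin/sh
# Hypothetical latency samples in microseconds; one outlier (250.4)
# represents a tail event that an average would smear out.
samples="12.1 11.8 13.0 11.9 250.4 12.0 12.2 11.7 12.3 12.5"

# Nearest-rank percentile: sort the samples and take the value at
# rank ceil(p/100 * N).
percentile() {
    p=$1
    printf '%s\n' $samples | sort -n | awk -v p="$p" '
        { v[NR] = $1 }
        END {
            idx = int((p / 100) * NR + 0.999999)  # ceiling
            if (idx < 1) idx = 1
            print v[idx]
        }'
}

echo "p50: $(percentile 50)"   # median, unaffected by the outlier
echo "p95: $(percentile 95)"   # tail latency, dominated by the outlier
echo "p99: $(percentile 99)"
```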