<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
span.EmailStyle17
{mso-style-type:personal-compose;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72" style="word-wrap:break-word">
<div class="WordSection1">
<p class="MsoNormal">The MVAPICH team is pleased to announce the release of MVAPICH2 2.3.6 GA.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Features and enhancements for MVAPICH2 2.3.6 GA are as follows:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">* Features and Enhancements (since 2.3.5):<o:p></o:p></p>
<p class="MsoNormal"> - Support collective offload using Mellanox's SHARP for Reduce and Bcast<o:p></o:p></p>
<p class="MsoNormal"> - Enhanced tuning framework for Reduce and Bcast using SHARP<o:p></o:p></p>
<p class="MsoNormal"> - Enhanced performance for UD-Hybrid code<o:p></o:p></p>
<p class="MsoNormal"> - Add multi-rail support for UD-Hybrid code<o:p></o:p></p>
<p class="MsoNormal"> - Enhanced performance for shared-memory collectives<o:p></o:p></p>
<p class="MsoNormal"> - Enhanced job-startup performance for the Flux job launcher<o:p></o:p></p>
<p class="MsoNormal"> - Add support in mpirun_rsh to use srun daemons to launch jobs<o:p></o:p></p>
<p class="MsoNormal"> - Add support in mpirun_rsh to specify processes per node using<o:p></o:p></p>
<p class="MsoNormal"> '-ppn' option<o:p></o:p></p>
<p class="MsoNormal"> - Use PMI2 by default when SLURM is selected as process manager<o:p></o:p></p>
<p class="MsoNormal"> - Add support to use aligned memory allocations for multi-threaded<o:p></o:p></p>
<p class="MsoNormal"> applications<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Evan J. Danish @OSC for the report<o:p></o:p></p>
<p class="MsoNormal"> - Architecture detection and enhanced point-to-point tuning for<o:p></o:p></p>
<p class="MsoNormal"> Oracle BM.HPC2 cloud shape<o:p></o:p></p>
<p class="MsoNormal"> - Enhanced collective tuning for Frontera@TACC and Expanse@SDSC<o:p></o:p></p>
<p class="MsoNormal"> - Add support for GCC compiler v11<o:p></o:p></p>
<p class="MsoNormal"> - Add support for Intel IFX compiler<o:p></o:p></p>
<p class="MsoNormal"> - Update hwloc v1 code to v1.11.14<o:p></o:p></p>
<p class="MsoNormal"> - Update hwloc v2 code to v2.4.2<o:p></o:p></p>
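As an illustration of the new '-ppn' option mentioned above, a minimal launch sketch (the hostfile name 'hosts' and the binary './app' are hypothetical placeholders, not names from this release):

```shell
# Hypothetical sketch: launch 8 MPI processes in total, 4 per node,
# using mpirun_rsh's new '-ppn' option. The hostfile 'hosts' (one
# hostname per line) and the binary './app' are placeholder names.
mpirun_rsh -np 8 -ppn 4 -hostfile hosts ./app
```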
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">* Bug Fixes (since 2.3.5):<o:p></o:p></p>
<p class="MsoNormal"> - Updates to IME support in MVAPICH2<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Bernd Schubert and Jean-Yves Vet @DDN<o:p></o:p></p>
<p class="MsoNormal"> for the patch<o:p></o:p></p>
<p class="MsoNormal"> - Improve error reporting in dlopen code path<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Matthew W. Anderson @INL for the report<o:p></o:p></p>
<p class="MsoNormal"> - Fix memory leak in collectives code path<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Matthew W. Anderson @INL and the PETSc<o:p></o:p></p>
<p class="MsoNormal"> team for the report and patch<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues in DPM code<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Lana Deere @D2S Inc for the report<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues when using sys_siglist array<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Jorge D'Elia @Universidad Nacional Del Litoral<o:p></o:p></p>
<p class="MsoNormal"> in Santa Fe, Argentina for the report<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues with GCC v11<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Honggang Li @RedHat for the report<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues in Win_shared_alloc<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Adam Moody @LLNL for the report<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues with HDF5 in ROMIO code<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Mark Dixon @Durham University for the report<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues with srun based launch when SLURM hostfile is specified<o:p></o:p></p>
<p class="MsoNormal"> manually<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Greg Lee @LLNL for the report<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues in UD-Hybrid code path<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues in MPI_Win_test leading to hangs in multi-rail scenarios<o:p></o:p></p>
<p class="MsoNormal"> - Fix issues in job startup code leading to degraded startup performance<o:p></o:p></p>
<p class="MsoNormal"> - Update code to work with any number of HCAs in a graceful fashion<o:p></o:p></p>
<p class="MsoNormal"> - Fix hang in shared memory code with stencil applications<o:p></o:p></p>
<p class="MsoNormal"> - Fix segmentation fault in finalize<o:p></o:p></p>
<p class="MsoNormal"> - Fix compilation warnings, memory leaks, and spelling mistakes<o:p></o:p></p>
<p class="MsoNormal"> - Fix an issue with external32 datatypes being converted incorrectly<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Adam Moody @LLNL for the report<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">The new features, enhancements, and bug fixes for OSU Micro-Benchmarks (OMB)<o:p></o:p></p>
<p class="MsoNormal">5.7.1 are listed below:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">* New Features & Enhancements (since v5.7)<o:p></o:p></p>
<p class="MsoNormal"> - Enhance support for CUDA managed memory benchmarks<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Ian Karlin and Nathan Hanford @LLNL for the feedback<o:p></o:p></p>
<p class="MsoNormal"> - Add support to send and receive data from different buffers for<o:p></o:p></p>
<p class="MsoNormal"> osu_latency, osu_bw, osu_bibw, and osu_mbw_mr<o:p></o:p></p>
<p class="MsoNormal"> - Add support to print minimum and maximum communication times for<o:p></o:p></p>
<p class="MsoNormal"> non-blocking benchmarks<o:p></o:p></p>
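For context, OMB point-to-point benchmarks such as osu_latency are typically run with two processes; a minimal sketch (the launcher name and benchmark path are placeholders, and the exact flags for the new separate-buffer and min/max reporting options are documented in the OMB README):

```shell
# Hypothetical sketch: run the osu_latency benchmark between two
# MPI processes. The launcher and the benchmark path are placeholder
# names; consult the OMB README for the options that enable separate
# send/receive buffers.
mpirun -np 2 ./mpi/pt2pt/osu_latency
```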
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">* Bug Fixes (since v5.7)<o:p></o:p></p>
<p class="MsoNormal"> - Update README file with updated description for osu_latency_mp<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Honggang Li @RedHat for the suggestion<o:p></o:p></p>
<p class="MsoNormal"> - Fix error in setting benchmark name in osu_allgatherv.c and<o:p></o:p></p>
<p class="MsoNormal"> osu_allgather.c<o:p></o:p></p>
<p class="MsoNormal"> - Thanks to Brandon Cook @LBL for the report<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">For downloading MVAPICH2 2.3.6 GA, OMB 5.7.1, and associated user guides,<o:p></o:p></p>
<p class="MsoNormal">please visit the following URL:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><a href="http://mvapich.cse.ohio-state.edu">http://mvapich.cse.ohio-state.edu</a><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">All questions, feedback, bug reports, hints for performance tuning, patches and<o:p></o:p></p>
<p class="MsoNormal">enhancements are welcome. Please post them to the mvapich-discuss mailing list<o:p></o:p></p>
<p class="MsoNormal">(<a href="mailto:mvapich-discuss@lists.osu.edu">mvapich-discuss@lists.osu.edu</a>).<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Thanks,<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">The MVAPICH Team<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">PS: We are also happy to inform you that the number of organizations using MVAPICH2<o:p></o:p></p>
<p class="MsoNormal">libraries (and registered at the MVAPICH site) has crossed 3,150 worldwide (in<o:p></o:p></p>
<p class="MsoNormal">89 countries). The number of downloads from the MVAPICH site has crossed<o:p></o:p></p>
<p class="MsoNormal">1,363,000 (1.36 million). The MVAPICH team would like to thank all its users<o:p></o:p></p>
<p class="MsoNormal">and organizations!!<o:p></o:p></p>
</div>
</body>
</html>