[Mug-conf] Final Program for MUG '22 Conference is now available

Panda, Dhabaleswar panda at cse.ohio-state.edu
Fri Aug 19 00:49:16 EDT 2022


The final program for the 10th annual MVAPICH User Group (MUG) conference is now available from http://mug.mvapich.cse.ohio-state.edu/program/

Details of the conference include:

- Two Keynote Talks

  1. Cygnus-D: The Big Memory Supercomputer for HPC, Big Data and AI, Taisuke Boku from Univ. of Tsukuba (Japan)
  2. On the Horizon – Interconnects in Frontera and its coming replacement system, Dan Stanzione, TACC

- Eight Tutorials and Demos
  1.  OPX: A High-Performance libfabric Provider for Omni-Path Networks, Dennis Dalessandro, Cornelis Networks
  2.  Offloading Collective Operations to the BlueField DPU, Richard Graham, NVIDIA/Mellanox
  3.  A Tutorial on HPC and ML Communication Benchmarking, Moshe Voloshin, Broadcom
  4.  Accelerating HPC Applications with MVAPICH2-DPU and Live Demos, Donglai Dai and Kyle Schaefer, X-ScaleSolutions
  5.  Boosting Performance of HPC Applications with MVAPICH2, Hari Subramoni and Nat Shineman, The Ohio State University
  6.  Visualize, Analyze, and Correlate Networking Activities for Parallel Programs on InfiniBand and HPC Clusters using the OSU INAM Tool, Hari Subramoni and Pouya Kousha, The Ohio State University
  7.  High Performance Machine Learning and Deep Learning with MVAPICH2, Aamir Shafi and Arpan Jain, The Ohio State University
  8.  Benchmarking Parallel Python and Java Applications using OMB and MVAPICH2, Aamir Shafi and Nawras Alnaasan, The Ohio State University

- 18 Invited Talks

  1.  Overview of the MVAPICH Project and Future Roadmap, Dhabaleswar K (DK) Panda, The Ohio State University
  2.  Applying MPI to Manage HPC-scale Datasets, Adam Moody, Lawrence Livermore National Laboratory (LLNL)
  3.  Aggressive Asynchronous Communication in the MOOSE framework using MVAPICH2, Idaho National Laboratory (INL)
  4.  A Deep Dive into DPU Computing - Addressing HPC/AI Performance Bottlenecks, Gilad Shainer, NVIDIA
  5.  DMA Software Support for Broadcom Ethernet NICs, Hemal Shah, Broadcom
  6.  MVAPICH2 at Azure: Enabling High Performance on Cloud, Jithin Jose, Microsoft Azure
  7.  Cyberinfrastructure Research, Learning and Workforce Development (LWD) Programs at NSF, Ashok Srinivasan, NSF
  8.  Performance Engineering using MVAPICH and TAU, Sameer Shende, ParaTools and University of Oregon
  9.  Performance of Applications using MVAPICH2 and MVAPICH2-GDR on SDSC's Expanse Supercomputer, Mahidhar Tatineni, San Diego Supercomputer Center (SDSC)
  10. Introduction to Cornelis Networks and the Omni-Path Architecture, Douglas Fuller, Cornelis Networks
  11. Offloading MPI collectives to DPU in a real HPC application: the Xcompact3D proof-of-concept, Filippo Spiga, NVIDIA
  12. HPC platform efficiency for large-scale workloads, Martin Hilgerman, Dell
  13. MVAPICH at the Cambridge Open Zettascale Lab, Christopher Edsall, University of Cambridge, UK
  14. FFT Computation towards Exascale, Alan Ayala and Stan Tomov, The University of Tennessee, Knoxville
  15. Solving MPI Integration problems with Spack, Greg Becker, Lawrence Livermore National Laboratory (LLNL)
  16. MVAPICH2 at NERSC, Shazeb Siddiqui (NERSC), Sameer Shende (ParaTools), and Prathmesh Sambrekar (NERSC)
  17. Accelerating HPC and DL applications using MVAPICH2-DPU library and X-ScaleAI package, Donglai Dai, X-ScaleSolutions
  18. MPI4Spark: A High-Performance Communication Framework for Spark using MPI, Aamir Shafi, The Ohio State University

- 12 Student Poster Presentations

  1.  Jurdana Masuma Iqrah, University of Texas at San Antonio, Auto-labeling Sea Ice and Open Water Segmentation and Classification for Sentinel-2 Satellite Imagery in Polar Regions
  2.  Ahmad Hossein Yazdani, Virginia Polytechnic Institute and State University, Profiling User I/O Behavior for Leadership Scale HPC Systems
  3.  Jordi Alcaraz Rodriguez, University of Oregon, Performance Engineering using MVAPICH and TAU via the MPI Tools Interface
  4.  Buddhi Ashan, Mallika Kankanamalage, The University of Texas at San Antonio, Heterogeneous Parallel and Distributed Computing for Efficient Polygon Overlay Computation over Large Polygonal Datasets
  5.  Hasanul Mahmud, The University of Texas at San Antonio, Toward an Energy-efficient Framework for DNN Inference at the Edge
  6.  Sunyu Yao, Virginia Polytechnic Institute and State University, GenFaaS: Automated FaaSification of Monolithic Workflows
  7.  Yao Xu, Northeastern University, A Hybrid Two-Phase-Commit Algorithm in Checkpointing Collective Communications
  8.  Christopher Holder, Florida State University, Layer 2 Scaling
  9.  Nawras Alnaasan, The Ohio State University, OMB-Py: Python Micro-Benchmarks for Evaluating Performance of MPI Libraries and Machine Learning Applications on HPC Systems
  10. Pouya Kousha, The Ohio State University, Cross-layer Visualization of Network Communication for HPC Clusters
  11. Shulei Xu, The Ohio State University, HPC Meets Clouds: MPI Performance Characterization & Optimization on Emerging HPC Cloud Systems
  12. Tu Tran, The Ohio State University, Designing Hierarchical Multi-HCA Aware Allgather in MPI

- 7 Short Talks (The MVAPICH group, The Ohio State University)

  1.  High Performance MPI over Slingshot, Kawthar Shafie Khorassani
  2.  Accelerating MPI All-to-All Communication with Online Compression on Modern GPU Clusters, Qinghua Zhou
  3.  “Hey CAI” - Conversational AI Enabled User Interface for HPC Tools, Pouya Kousha
  4.  Hybrid Five-Dimensional Parallel DNN Training for Out-of-core Models, Arpan Jain
  5.  Highly Efficient Alltoall and Alltoallv Communication Algorithms for GPU Systems, Chen-Chun Chen
  6.  Network Assisted Non-Contiguous Transfers for GPU-Aware MPI Libraries, Kaushik Kandadi Suresh
  7.  Towards Architecture-aware Hierarchical Communication Trees on Modern HPC Systems, Bharat Ramesh

The event will be held in a hybrid format (in-person and online) during August 22-24, 2022, in Columbus, Ohio, USA.

Interested in attending the conference? More information on registration (in-person and online), hotel accommodation, and travel is available from http://mug.mvapich.cse.ohio-state.edu/

Thanks,

The MVAPICH Team

The MUG conference is proud to be sponsored by Broadcom, Cornelis Networks, NSF, NVIDIA, Ohio Supercomputer Center, The Ohio State University, ParaTools, and X-ScaleSolutions.
