Implementation of the Swing allreduce algorithm in OpenMPI

The allreduce operation is one of the most commonly used communication routines in distributed applications: vectors held by different nodes are aggregated element-wise (e.g., by summing), and the result is distributed back to all the nodes. Estimates show that up to 40% of the training time of large-scale machine learning models can be spent in this operation [1]. To improve its bandwidth, and thus the performance of the applications that use it, the Swing algorithm has recently been proposed [2]; it can speed up allreduce by up to 3x.

In this thesis the student will implement the Swing algorithm in OpenMPI [3], one of the most widely used communication libraries for distributed applications. The implementation will then be evaluated through a set of experiments on a 50-node cluster, as well as on the LUMI and Leonardo supercomputers (currently the 5th and 6th fastest in the world). Good knowledge of the C programming language is required. If time allows, the algorithm could also be integrated into PyTorch and/or TensorFlow and evaluated on a set of machine learning applications.

[1] Weiyang Wang et al., "TopoOpt: Co-optimizing Network Topology and Parallelization Strategy for Distributed Training Jobs"
[2] Daniele De Sensi, Tommaso Bonato, David Saam, and Torsten Hoefler, "Swing: Short-cutting Rings for Higher Bandwidth Allreduce", in 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), Apr 2024
[3] OpenMPI

Approximate composition: 10% State of the art analysis, 10% Theory/Design, 80% Implementation/Experiments
