The allreduce operation is one of the most commonly used communication routines in distributed applications. In this operation, vectors coming from different nodes are aggregated element-wise (e.g., by summing elements), and the result is distributed back to all the nodes. Estimates indicate that up to 40% of the time spent training large-scale machine learning models goes into this operation [1]. To improve its bandwidth and the performance of applications that use it, the operation can be accelerated by offloading it to network switches: instead of sending data back and forth between nodes, data is aggregated directly inside the switches. It has been shown that doing so can improve allreduce performance by up to 2x.
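As a minimal illustration of the allreduce semantics described above (not of any particular implementation or of the in-network offload itself), the following Python sketch reduces one vector per node element-wise and gives every node a copy of the result; the node count and values are arbitrary placeholders.

```python
# Minimal sketch of allreduce semantics: every node contributes a vector,
# the vectors are reduced element-wise (here with a sum), and each node
# receives the full reduced result. Node count and values are illustrative.

def allreduce_sum(per_node_vectors):
    """Element-wise sum across nodes; every node gets the same result."""
    length = len(per_node_vectors[0])
    assert all(len(v) == length for v in per_node_vectors)
    reduced = [sum(v[i] for v in per_node_vectors) for i in range(length)]
    # In a real system the reduced vector is communicated back to all nodes;
    # here we simply return one copy per node.
    return [list(reduced) for _ in per_node_vectors]

if __name__ == "__main__":
    contributions = [
        [1.0, 2.0, 3.0],        # node 0
        [10.0, 20.0, 30.0],     # node 1
        [100.0, 200.0, 300.0],  # node 2
    ]
    for rank, result in enumerate(allreduce_sum(contributions)):
        print(f"node {rank} receives {result}")
```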
In this thesis, the student will design more efficient in-network allreduce solutions and implement and evaluate them on a simulator (to be decided together; possibilities include NS-3 [2], htsim [3], and AstraSim [4]). The thesis will be tailored to the student's expertise, skills, and preferences.
Numerous microarchitectural optimizations unlocked tremendous processing power for deep neural networks that in turn fueled the AI revolution. With the exhaustion of such optimizations, the growth of modern AI is now gated by the performance of training systems, especially their data movement. Instead of focusing on single accelerators, we investigate data-movement characteristics of large-scale training at full system scale. Based on our workload analysis, we design HammingMesh, a novel network topology that provides high bandwidth at low cost with high job scheduling flexibility. Specifically, HammingMesh can support full bandwidth and isolation to deep learning training jobs with two dimensions of parallelism. Furthermore, it also supports high global bandwidth for generic traffic. Thus, HammingMesh will power future large-scale deep learning systems with extreme bandwidth requirements.
@inproceedings{hxmesh,
  author     = {Hoefler, Torsten and Bonato, Tommaso and De Sensi, Daniele and Di Girolamo, Salvatore and Li, Shigang and Heddes, Marco and Belk, Jon and Goel, Deepak and Castro, Miguel and Scott, Steve},
  title      = {HammingMesh: A Network Topology for Large-Scale Deep Learning},
  year       = {2022},
  month      = nov,
  booktitle  = {Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC'22)},
  award      = {Best Reproducibility Advancement Award},
  doi        = {10.1109/sc41404.2022.00016},
  eprint     = {2209.01346},
  dimensions = {true},
}
The allreduce operation is one of the most commonly used communication routines in distributed applications. To improve its bandwidth and to reduce network traffic, this operation can be accelerated by offloading it to network switches, which aggregate the data received from the hosts and send back the aggregated result. However, existing solutions provide limited customization opportunities and might provide suboptimal performance when dealing with custom operators and data types, with sparse data, or when reproducibility of the aggregation is a concern. To deal with these problems, in this work we design a flexible programmable switch using PsPIN, a RISC-V architecture implementing the sPIN programming model, as a building block. We then design, model, and analyze different algorithms for executing the aggregation on this architecture, showing performance improvements compared to state-of-the-art approaches.
@inproceedings{flare,
  author     = {De Sensi, Daniele and Di Girolamo, Salvatore and Ashkboos, Saleh and Li, Shigang and Hoefler, Torsten},
  title      = {Flare: Flexible in-Network Allreduce},
  year       = {2021},
  isbn       = {9781450384421},
  publisher  = {Association for Computing Machinery},
  address    = {New York, NY, USA},
  url        = {https://doi.org/10.1145/3458817.3476178},
  doi        = {10.1145/3458817.3476178},
  booktitle  = {Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis},
  articleno  = {35},
  numpages   = {16},
  keywords   = {allreduce, programmable switch, in-network computing},
  location   = {St. Louis, Missouri},
  series     = {SC '21},
  dimensions = {true},
}
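The switch-side aggregation described in the Flare abstract above can be pictured with a simplified functional model; this is a sketch under assumed behavior (one contribution per host, sum reduction, a multicast of the result once all hosts have contributed), not Flare's actual data path, and the name SwitchAggregator is a placeholder.

```python
# Simplified functional model of in-switch aggregation (not Flare's real
# implementation): the "switch" accumulates one chunk from each host and
# releases the reduced chunk once every host has contributed.

class SwitchAggregator:
    def __init__(self, num_hosts, chunk_len):
        self.num_hosts = num_hosts
        self.buffer = [0.0] * chunk_len   # running element-wise sum
        self.received = 0                 # hosts that have contributed so far

    def on_packet(self, payload):
        """Aggregate one host's chunk; return the result when complete."""
        for i, value in enumerate(payload):
            self.buffer[i] += value
        self.received += 1
        if self.received == self.num_hosts:
            return list(self.buffer)      # would be multicast back to all hosts
        return None                       # still waiting for more hosts

if __name__ == "__main__":
    switch = SwitchAggregator(num_hosts=3, chunk_len=4)
    result = None
    for host in range(3):
        result = switch.on_packet([float(host + 1)] * 4)
    print("multicast to hosts:", result)  # [6.0, 6.0, 6.0, 6.0]
```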