Aug 18 (Wed) @ 2pm: "Robust and Efficient Algorithms for Federated Learning and Distributed Computing," Amirhossein Reisizadeh, ECE PhD Defense

Date and Time
Wednesday, August 18 @ 2:00 PM

Location
Zoom Meeting – Meeting ID: 927 052 0240 | Passcode: 08182021

https://ucsb.zoom.us/j/9270520240?pwd=Q0RuSVdpS3k5L1VralNuWUdWZHE5dz09

Abstract

Training a large-scale model over a massive data set is an extremely computation- and storage-intensive task, e.g., training ResNet, with hundreds of millions of parameters, on the ImageNet data set of millions of images. As a result, there has been significant interest in developing distributed learning strategies that speed up the training of learning models. Due to the growing computational power of the ecosystem of billions of mobile and computing devices, many future distributed learning systems will operate by storing data locally and pushing computation to the network edge. Unlike traditional centralized machine learning environments, however, machine learning at the edge faces significant challenges, including (1) scalability, due to severe constraints on communication bandwidth and other resources such as storage and energy; (2) robustness to stragglers and edge failures caused by slow edge nodes; and (3) generalization to non-i.i.d. and heterogeneous data.

In this thesis, we focus on two important distributed learning frameworks, Federated Learning and Distributed Computing, with a shared goal: to provably address the critical challenges in these paradigms using novel techniques from distributed optimization, statistical learning theory, probability theory, and communication and coding theory, advancing the state of the art in efficiency, resiliency, and scalability.

In the first part of the thesis, we devise three methods that address communication cost, straggler resiliency, and robustness to heterogeneous data in the federated learning paradigm. Our main ideas are to employ model compression, adaptive device participation, and distributionally robust minimax optimization, respectively, for these challenges. We characterize provable improvements for the proposed algorithms in terms of convergence speed, expected runtime, and generalization gap.
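To make the communication-compression idea concrete, here is a minimal, hypothetical sketch (not the thesis algorithm): each device runs local training, quantizes its model update before uploading it, and the server averages the quantized updates. The function names (`quantize`, `federated_round`) and the uniform stochastic quantizer are illustrative assumptions.

```python
import numpy as np

def quantize(update, num_levels=16):
    """Uniform stochastic quantization of a model update (illustrative only)."""
    scale = np.max(np.abs(update)) + 1e-12
    normalized = np.abs(update) / scale * (num_levels - 1)
    lower = np.floor(normalized)
    # Randomized rounding keeps the quantizer unbiased in expectation.
    rounded = lower + (np.random.rand(*update.shape) < (normalized - lower))
    return np.sign(update) * rounded / (num_levels - 1) * scale

def federated_round(global_model, local_data, local_step, num_levels=16):
    """One communication round: devices train locally, upload quantized updates,
    and the server averages them into the global model."""
    updates = []
    for data in local_data:
        local_model = local_step(global_model.copy(), data)  # e.g., a few local SGD epochs
        updates.append(quantize(local_model - global_model, num_levels))
    return global_model + np.mean(updates, axis=0)
```

With few quantization levels, each device uploads far fewer bits per round at the cost of added quantization noise, which is the efficiency/accuracy trade-off the first part of the thesis analyzes.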

Moving on to the second part, we consider important instances of distributed computing frameworks, such as distributed gradient aggregation, matrix-vector multiplication, and MapReduce-type computing tasks, and propose several algorithms to mitigate the aforementioned bottlenecks in these settings. The key idea in our designs is to introduce redundant and coded computation in an elaborate fashion in order to reduce communication cost and total runtime. We also support the theoretical results in both parts with significant improvements in numerical experiments.
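As a rough illustration of coded computation for straggler mitigation (a toy sketch under assumed names, not the schemes proposed in the thesis): a matrix is split into k blocks, encoded into n coded blocks with a Vandermonde code, and the product with a vector can be recovered from any k worker results, so slow workers can simply be ignored.

```python
import numpy as np

def encode_blocks(A, n_workers, k):
    """Split A row-wise into k blocks and form n coded blocks via a Vandermonde code,
    so that A @ x is recoverable from any k worker results."""
    blocks = np.split(A, k)  # assumes the number of rows of A is divisible by k
    G = np.vander(np.arange(1, n_workers + 1), k, increasing=True).astype(float)
    return [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n_workers)], G

def decode(results, G, k):
    """Recover A @ x from any k received coded products (stragglers ignored)."""
    idx = sorted(results)[:k]                      # use any k received results
    stacked = np.stack([results[i] for i in idx])  # shape: (k, rows_per_block)
    decoded = np.linalg.solve(G[idx, :], stacked)  # invert the code
    return decoded.reshape(-1)

# Toy run: 5 workers, any 3 suffice, so up to 2 stragglers can be tolerated.
A, x = np.random.randn(6, 4), np.random.randn(4)
coded, G = encode_blocks(A, n_workers=5, k=3)
results = {i: coded[i] @ x for i in [0, 2, 4]}     # pretend workers 1 and 3 straggle
assert np.allclose(decode(results, G, 3), A @ x)
```

The redundancy (n coded blocks for k data blocks) is the price paid for not having to wait for the slowest nodes; choosing how much redundancy to add, and how to code it, drives the runtime and communication gains discussed in this part.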

Bio

Amirhossein Reisizadeh received his M.Sc. degree from the University of California, Los Angeles (UCLA) in 2016 and his B.Sc. degree from Sharif University of Technology, Tehran, Iran, in 2014, both in Electrical Engineering. He is currently pursuing his Ph.D. in Electrical and Computer Engineering at the University of California, Santa Barbara (UCSB).

Hosted by: Professor Ramtin Pedarsani

Submitted by: Amirhossein Reisizadeh <amir.reisizadeh@gmail.com>