May 24 (Wed) @ 1:00pm: "Overparameterization in Neural Networks: from application to theory," Kaiqi Zhang, ECE PhD Defense

Date and Time
Wednesday, May 24, 1:00pm

Location
Harold Frank Hall (HFH), Rm 4164 (ECE Conf. Rm)

Abstract

Neural networks are rapidly increasing in size, making overparameterization commonplace in deep learning. This presents challenges for both the theory and the application of deep learning. From a theoretical standpoint, it remains an open question why neural networks generalize well despite overparameterization. From a practical standpoint, overparameterization incurs significant computation and storage costs, which limits the deployment of deep neural networks.

This defense addresses both challenges. On the application side, I propose training a low-rank tensorized neural network to compress the model and reduce computation costs during both training and inference. On the theoretical side, I examine the effect of weight decay on neural networks and show that it induces sparsity in a parallel neural network. This finding establishes that neural networks are locally adaptive and can provably outperform a large class of traditional methods. Together, these results offer both practical and theoretical answers to the challenges posed by overparameterization in deep learning.
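As a rough illustration of the kind of compression involved (not code from the defense itself), the sketch below factorizes a single weight matrix into two low-rank factors; the tensorized networks discussed in the talk generalize this idea to factorized weight tensors. It assumes PyTorch, and the class name LowRankLinear and the chosen rank are purely illustrative.

```python
# Illustrative sketch only: a rank-r factorized linear layer. The defense's
# tensorized networks use richer factorizations, but the parameter-saving
# principle is the same.
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):  # hypothetical name, for illustration
    """Replaces an (out x in) weight W with V @ U, where U is (r x in) and
    V is (out x r), cutting parameters from out*in to r*(in + out)."""
    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.U = nn.Linear(in_features, rank, bias=False)  # project down to rank r
        self.V = nn.Linear(rank, out_features, bias=True)  # expand back to out_features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.V(self.U(x))

# Example: a 1024x1024 layer (~1.05M weights) at rank 32 needs ~66K weights,
# reducing both storage and per-step compute.
layer = LowRankLinear(1024, 1024, rank=32)
y = layer(torch.randn(8, 1024))
```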

Bio

Kaiqi Zhang is a Ph.D. candidate in the Department of Electrical and Computer Engineering (ECE) at the University of California, Santa Barbara, working with Prof. Yu-Xiang Wang. His research interests include extending traditional statistical machine learning theory to study the effects of overparameterization in deep neural networks. He received his B.S. degree in Electronic Engineering from Tsinghua University in 2016 and his M.S. degree in ECE from UC Davis in 2018.

Hosted by: Professor Yu-Xiang Wang

Submitted by: Kaiqi Zhang <kzhang70@ucsb.edu>