ECE Research Initiative
Zheng Zhang | Computational Data Science for Designing Emerging Hardware and AI Systems
Professor Zheng Zhang’s group investigates computational data science to solve modeling, verification, and optimization problems in electronic and photonic design automation under process variations, in large-scale AI systems under various perturbations and attacks, and in autonomous systems with environmental uncertainties and sensor noise. In these applications, ensuring robustness and safety is critical in practical design and operation. To address this challenge, his group investigates the fundamental theory and algorithms of uncertainty quantification. Many of these system designs also involve high dimensionality and high volumes of data, which motivates the group’s second theoretical topic: tensor computation.
Data-efficient Design Modeling and Optimization Under Uncertainties
Nano-scale electronic and photonic IC chip fabrication suffers from increasing process variations, such as random doping effects and surface roughness. Quantifying the impact of such uncertainties and optimizing the yield of chip designs has been a long-standing challenge in the electronic design automation (EDA) field. While there has been a trend of solving EDA problems with AI, directly applying AI to EDA problems may lead to poor performance. This is because modern AI algorithms (e.g., deep learning) are designed around a high volume of available training data, while obtaining data samples in electronic and photonic design automation is often expensive. In analog/mixed-signal design, each piece of data is obtained from a numerical simulation of a device- or circuit-level computational model, which requires solving large-scale differential equations and is computationally expensive.
With the support of two NSF grants and one DOE award, Zhang’s group is investigating small-data techniques to develop variation-aware design automation algorithms. In this direction, their research efforts include: (1) stochastic spectral methods for forward uncertainty quantification under non-Gaussian correlated uncertainties; (2) overcoming the curse of dimensionality via tensor computation; (3) performance–yield co-optimization through rigorous chance-constrained optimization algorithms; and (4) accurate Bayesian inference algorithms inspired by models from statistical physics and quantum physics. His former postdoc Chunfeng Cui and PhD student Zichang He have won one best journal paper award and two best conference paper awards in this area.
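The data efficiency of stochastic spectral methods can be illustrated with a minimal polynomial chaos surrogate. The sketch below is illustrative only, not the group’s algorithm: the `simulator` function is a hypothetical stand-in for an expensive device/circuit simulation of one normalized Gaussian process parameter, and a second-order Hermite expansion is fitted from just eight simulation samples, after which the mean and variance follow analytically from the expansion coefficients.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):
    # Hypothetical circuit performance metric (e.g., gain) as a function
    # of a normalized process variation x ~ N(0, 1).
    return 1.0 + 0.3 * x + 0.05 * (x**2 - 1.0)

def hermite(x, order):
    # Probabilists' Hermite polynomials: orthogonal basis w.r.t. N(0, 1),
    # built by the recurrence He_{n+1}(x) = x He_n(x) - n He_{n-1}(x).
    H = [np.ones_like(x), x]
    for n in range(1, order):
        H.append(x * H[n] - n * H[n - 1])
    return np.stack(H[: order + 1], axis=-1)

# Fit a 2nd-order expansion from only 8 "simulations" by least squares.
order = 2
x_train = rng.standard_normal(8)
Phi = hermite(x_train, order)
coeffs, *_ = np.linalg.lstsq(Phi, simulator(x_train), rcond=None)

# Output statistics come for free from the coefficients:
# mean = c_0, variance = sum_n n! * c_n^2.
mean = coeffs[0]
var = sum(math.factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))
print(mean, var)  # mean ≈ 1.0, variance ≈ 0.095
```

A brute-force Monte Carlo estimate of the same statistics would typically need thousands of simulator calls; here the orthogonality of the basis turns a handful of samples into closed-form moments.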
Hardware-friendly, Energy-efficient and Trustworthy AI systems
While deep neural networks have achieved strong performance in many engineering applications, their deployment has been limited by two fundamental challenges. First, large AI models consume substantial computing resources and energy in both the training and inference stages. Second, because of their black-box nature, deep neural networks often suffer a significant accuracy drop when the input data is only slightly perturbed. To overcome these challenges, Zhang’s group is developing computational methods and algorithm/hardware co-design to improve the training efficiency and the robustness and safety of deep learning models.
With the support of an NSF grant and in collaboration with Facebook, his PhD student Cole Hawkins has developed a highly efficient tensor optimization framework for training deep neural networks with an orders-of-magnitude reduction in memory and energy cost. The key idea is fundamentally different from the mainstream techniques of pruning or quantization, which only offer a reduction of around one order of magnitude. Using tensor compression in the training process, they significantly reduce the number of training variables and offer a promising solution for training AI models on energy-efficient edge devices. Several leading AI and semiconductor companies have shown great interest in this research direction.
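The parameter savings of tensor-compressed training can be seen from a small sketch, assuming a tensor-train (TT) matrix format (the specific shapes and ranks below are illustrative choices, not the published configuration): a 1024x1024 dense weight of roughly one million parameters is replaced by four small TT cores, which are what would be trained directly; the full matrix is contracted here only to check shapes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Factor 1024 = 4*4*8*8 on the row and column sides. A TT-matrix core
# G_k has shape (r_{k-1}, m_k, n_k, r_k), with TT ranks r_k.
m = n = [4, 4, 8, 8]
ranks = [1, 8, 8, 8, 1]
cores = [rng.standard_normal((ranks[k], m[k], n[k], ranks[k + 1])) * 0.1
         for k in range(4)]

def tt_to_matrix(cores):
    # Contract the cores into the full weight matrix (for illustration;
    # in training one contracts the input with the cores instead and the
    # dense matrix is never materialized).
    W = cores[0]                                   # (1, m0, n0, r1)
    for G in cores[1:]:
        # W: (1, M, N, r), G: (r, m_k, n_k, r') -> merge mode indices.
        W = np.einsum('amnr,rxys->amxnys', W, G)
        a, M, mk, N, nk, rp = W.shape
        W = W.reshape(a, M * mk, N * nk, rp)
    return W.reshape(W.shape[1], W.shape[2])

dense_params = 1024 * 1024
tt_params = sum(G.size for G in cores)
print(dense_params, tt_params)   # 1048576 vs 5760: ~180x fewer variables
W = tt_to_matrix(cores)
print(W.shape)                   # (1024, 1024)
```

Because both the forward pass and the gradients can be computed directly on the cores, the memory footprint during training scales with the core sizes rather than with the dense layer, which is what enables edge-device training.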
To improve the robustness and safety of deep learning models, his group is investigating the theoretical and algorithmic issues from the perspectives of uncertainty quantification and dynamic systems. Specifically, his former postdoc Chunfeng Cui has shown that theoretical tools from uncertainty quantification can be used to detect the strongest possible universal attack on a neural network. In collaboration with the National University of Singapore, his student Zhuotong Chen developed a closed-loop control method to detect and fix possible errors caused by various uncertainties and attacks in the input data. Their proposed method connects trajectory optimization of dynamic systems with neural network robustness, providing a powerful defense tool that can handle many kinds of attacks unseen during training.
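The closed-loop idea can be illustrated with a toy example (this is a simplified stand-in, not the published algorithm): a perturbed input is iteratively corrected by gradient feedback on an auxiliary objective, here the distance to a low-dimensional subspace assumed to capture the clean data, before any downstream prediction is made.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean data are assumed to lie near a 2-D subspace of R^10,
# represented by an orthonormal basis.
basis = np.linalg.qr(rng.standard_normal((10, 2)))[0]
x_clean = basis @ rng.standard_normal(2)

# An attacker adds a perturbation that pushes the input off the
# clean-data manifold.
x_adv = x_clean + 0.5 * rng.standard_normal(10)

def control_step(x, step=0.2):
    # Feedback control u = -step * grad ||x - P x||^2, where P is the
    # orthogonal projection onto the clean-data subspace.
    residual = x - basis @ (basis.T @ x)
    return x - step * 2 * residual

x = x_adv
for _ in range(20):            # closed-loop correction before inference
    x = control_step(x)

err_before = np.linalg.norm(x_adv - x_clean)
err_after = np.linalg.norm(x - x_clean)
print(err_before, err_after)   # perturbation shrinks after control
```

The feedback loop needs no knowledge of the specific attack, only of what clean inputs look like, which mirrors why control-based defenses can generalize to attacks unseen during training.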
Beyond the Disciplinary Boundaries
Professor Zhang’s group highly values interdisciplinary research. The technical backgrounds of his group members are very diverse: their undergraduate and PhD degrees span ECE, CS, and applied math. This diversity is a key enabler of the success of interdisciplinary projects.
Beyond the above-mentioned fields, Zhang’s group is also investigating the following emerging topics beyond conventional computing: (1) quantum computing and probabilistic computing to ensure safe machine learning (in collaboration with Professor Kerem Camsari); (2) tensor computation for quantum AI and quantum design automation (in collaboration with Professor Galan Moody and with industry partners).