Aug 24 (Wed) @ 10:00am: "Structural Defense Techniques in Adversarial Machine Learning," Can Bakiskan, ECE PhD Defense

Date and Time

Wednesday, Aug 24 @ 10:00am

Location
Zoom Meeting – Meeting ID: 819 8306 0102 | Passcode: 716060

https://ucsb.zoom.us/j/81983060102?pwd=WjR0T1RBcVJ4bERMN214bm02V1lNUT09

Abstract

Deep Neural Networks (DNNs) have found applications in an ever-increasing array of fields over the past decade. With the widespread adoption of DNNs have come concerns about their security and reliability. In particular, naively trained DNNs have been shown to be vulnerable to carefully constructed perturbations that cause them to fail at their task. To counter these attacks, many defense mechanisms have been proposed, among which adversarial training and its variants have stood the test of time. While adversarial training of DNNs yields state-of-the-art empirical performance, it does not provide insight into, or explicit control over, the features being extracted by the network layers. In this work, we tackle this issue by incorporating bottom-up structural design principles into DNNs, with the aim of extracting explainable features and providing robustness in a principled manner. Specifically, we draw on guiding principles from signal processing, sparse representation theory, and neuroscience to design network components that build robust features into neural networks.

We begin by presenting an analysis of adversarial training that motivates and justifies further research into the earlier layers of neural networks. We then turn our attention to front-end-based techniques. In one technique, we use a nonlinear front end that polarizes and quantizes the data to increase robustness. In another, we use ideas from sparse coding theory to construct an encoder built on a sparse overcomplete dictionary, lateral inhibition, and a drastic nonlinearity, characteristics commonly observed in biological vision, to reduce the effect of adversarial perturbations. Finally, we introduce a promising neuro-inspired approach to DNNs with sparser and stronger activations, trained with the aid of a Hebbian learning rule. Experiments demonstrate that, relative to baseline end-to-end trained architectures, our proposed architecture yields sparser activations and exhibits greater robustness to both noise and adversarial perturbations, without requiring adversarial training.
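For a flavor of these mechanisms, below is a minimal, self-contained Python sketch, not the implementation presented in the thesis: the function names (polarize_quantize, sparse_encode, hebbian_step), the specific threshold and dimensions, the hard top-k rule standing in for lateral inhibition, and the Oja-style form of the Hebbian update are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def polarize_quantize(x, thresh=0.5):
        # Toy front-end nonlinearity (assumed form): coefficients below the
        # threshold are zeroed, survivors are quantized to +/-1 ("polarized").
        return np.where(np.abs(x) > thresh, np.sign(x), 0.0)

    def sparse_encode(x, D, k):
        # Encode x with an overcomplete dictionary D (n x m, m > n). Lateral
        # inhibition is approximated here by a hard top-k rule: only the k
        # strongest responses survive, a "drastic" nonlinearity.
        a = D.T @ x                            # responses of all m atoms
        z = np.zeros_like(a)
        keep = np.argsort(np.abs(a))[-k:]      # indices of the k strongest
        z[keep] = a[keep]
        return z

    def hebbian_step(W, x, lr=0.01):
        # One Oja-style Hebbian update for a bank of linear neurons (W is
        # m x n): weights grow with the input/output correlation, and the
        # decay term keeps each neuron's weight vector near unit norm.
        y = W @ x
        W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
        return W

    # Toy usage: a random 4x-overcomplete dictionary with unit-norm atoms.
    n, m, k = 64, 256, 8
    D = rng.standard_normal((n, m))
    D /= np.linalg.norm(D, axis=0)
    x = rng.standard_normal(n)
    z = sparse_encode(x, D, k)
    print(np.count_nonzero(z), "of", m, "activations are nonzero")

The common design choice across all three pieces is the same one the abstract describes: commit early to a few strong, interpretable activations, so that small adversarial perturbations have less room to change which features fire.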

Bio

Can Bakiskan received his B.S. degree in Electrical and Electronics Engineering, with a double major in Mathematics, from Bogazici University, Istanbul, in 2017, and an M.S. degree in Electrical and Computer Engineering from the University of California, Santa Barbara, in 2019. He is currently a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of California, Santa Barbara. His research interests include deep neural networks, adversarial machine learning, and signal processing.

Hosted by: Professor Upamanyu Madhow

Submitted by: Can Bakiskan <canbakiskan@ucsb.edu>