Aug 1 (Mon): "Energy-Efficient Architecture and Dataflow Optimization for Spiking Neural Network (SNN) Accelerators," Jeong-Jun Lee, ECE PhD Defense

Date and Time

Aug 1 (Mon)

Location
Zoom Meeting – Meeting ID: 863 1367 9456

https://ucsb.zoom.us/j/86313679456

Abstract

SNNs offer a promising, biologically plausible computing model and lend themselves to ultra-low-power, event-driven processing on neuromorphic processors. In particular, compared with conventional artificial neural networks, SNNs are well suited to processing complex spatiotemporal data. However, the complex spatial and temporal dynamics of SNNs impose significant overhead on accelerating neural computation and limit the computing capability of neuromorphic accelerators. In this dissertation, we address three key difficulties in accelerating SNNs: developing bio-plausible yet hardware-friendly learning algorithms, efficiently processing the added temporal dimension, and handling the unstructured sparsity that emerges in both space and time.

We present the first study on realizing a competitive spike-train-level backpropagation (BP)-like algorithm to enable efficient on-chip training of SNNs. Algorithm/hardware co-optimization and efficient online neural signal computation are explored for the on-chip implementation of spike-train-level direct feedback alignment (ST-DFA). We also propose a holistic, reconfigurable dataflow optimization for systolic-array acceleration of spiking neural networks. A novel scheme is introduced for parallel acceleration of computation across multiple time points, which further allows systematic optimization of variable tiling for large gains in performance and efficiency. Building on this, we pack multiple time points into a single time window (TW) and process, in parallel, the computations induced by active synaptic inputs falling under several TWs. Lastly, we propose a novel technique and architecture that exploit temporal information compression with structured sparsity and parallelism across time. We split the full temporal range into several time windows, where each TW packs multiple time points, and encode the temporal information in each TW with Split-Time Temporal (STT) coding by limiting the number of spikes within a TW to at most one. STT sparsifies and structures irregular firing activity, dramatically reducing computational overhead while delivering competitive classification accuracy.
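The abstract describes STT coding only at a high level. As a rough illustration of the windowing idea, the minimal NumPy sketch below shows one way such a coding step could work, assuming binary spike trains and a keep-the-earliest-spike policy per window; the function stt_encode, its parameters, and the -1 marker for silent windows are illustrative assumptions, not details from the dissertation.

    import numpy as np

    def stt_encode(spike_train, tw_size):
        """Illustrative sketch of Split-Time Temporal (STT) coding: the time
        axis is split into fixed-size time windows (TWs), and each TW is
        allowed at most one spike. Here we keep the earliest spike in each
        window and record its offset, turning irregular firing activity into
        a compact, structured representation.

        spike_train: binary array of shape (num_neurons, num_time_points)
        tw_size:     number of time points packed into one TW
        Returns a (num_neurons, num_windows) int array holding, for each TW,
        the offset of the kept spike within the window, or -1 if the TW is
        silent.
        """
        num_neurons, num_time_points = spike_train.shape
        num_windows = num_time_points // tw_size
        encoded = np.full((num_neurons, num_windows), -1, dtype=np.int32)
        for w in range(num_windows):
            window = spike_train[:, w * tw_size:(w + 1) * tw_size]
            fired = window.any(axis=1)
            # argmax on a binary row returns the index of its first spike,
            # enforcing the at-most-one-spike-per-TW constraint
            encoded[fired, w] = window[fired].argmax(axis=1)
        return encoded

    # Example: 4 neurons, 16 time points, TWs of size 4 -> 4 windows per neuron
    rng = np.random.default_rng(0)
    spikes = (rng.random((4, 16)) < 0.2).astype(np.int8)
    print(stt_encode(spikes, tw_size=4))

Under this sketch, downstream computation only needs to touch windows whose entry is non-negative, which is one way the structured sparsity described above could translate into reduced work.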

Bio

Jeong-Jun Lee received B.S. and M.S. degrees in electrical and computer engineering from Seoul National University in 2016 and 2018, respectively. He is currently a Ph.D. candidate in computer engineering at the University of California, Santa Barbara. His research interests span hardware-friendly learning algorithms, dataflow and architecture optimizations, and information compression for spiking neural networks.

Hosted by: Professor Peng Li

Submitted by: Jeong-Jun Lee <jeong-jun@ucsb.edu>