Mar 14 (Mon) @ 1:00pm: “Improving Reinforcement Learning for Robotics with Control and Dynamical Systems Theory,” Sean Gillen, ECE PhD Defense

Date and Time: Monday, March 14 @ 1:00pm
Location: Harold Frank Hall (HFH), Rm. 4164 (ECE Conf. Rm.)
Zoom Meeting: https://ucsb.zoom.us/j/3301117099

Abstract

Recent advances in machine learning, simulation, algorithm design, and computer hardware have allowed reinforcement learning (RL) to become a powerful tool that can solve a variety of challenging problems that have been difficult or impossible to solve with other approaches. One of the most promising applications of RL is robotic control, in which researchers have demonstrated success on a number of challenging tasks, from rough-terrain locomotion to complex object manipulation. Despite this, many limitations prevent RL from seeing wider adoption, among them a lack of stability or robustness guarantees and a lack of any principled way to incorporate domain knowledge into RL algorithms.

In this thesis we address these limitations by leveraging insights from other fields. We show that a model-based local controller can be combined with a learned policy to solve a difficult nonlinear control problem that modern RL struggles with. In addition, we show that gradients from new, differentiable simulators can be leveraged by RL algorithms to better control the same class of nonlinear systems.
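As a rough illustration of the first idea, the sketch below is not code from the thesis; it only shows, under stated assumptions, how a model-based local controller (e.g., an LQR controller valid near a target state) might be blended with a learned policy that handles the rest of the state space. The names K, policy, and switch_radius are hypothetical.

import numpy as np

# Hypothetical sketch: hand control to a model-based local controller (LQR
# about a target state) when the state is close to the target, and to a
# learned RL policy elsewhere. K, policy, and switch_radius are illustrative.
def hybrid_controller(x, x_target, K, policy, switch_radius=0.1):
    error = x - x_target
    if np.linalg.norm(error) < switch_radius:
        return -K @ error    # local model-based controller: u = -K (x - x_target)
    return policy(x)         # learned policy covers the global, nonlinear region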

We also build on prior work that approximates dynamical systems as discrete Markov chains. This representation allows us to analyze the stability and robustness properties of a system. We show that we can modify RL reward functions to encourage locomotion policies with smaller Markov chain representations, expanding the scope of systems to which this type of analysis can be applied. We then use a hopping robotic system as a case study for this analysis. Finally, we show that the same tools that shrink the Markov chain size can also be used for more generic fine-tuning of RL policies, improving the performance and consistency of learned policies across a wide range of benchmarking tasks.
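For readers unfamiliar with this kind of Markov chain approximation, the sketch below is an assumption-laden illustration rather than the thesis code: it shows one generic way a closed-loop system can be meshed into a finite chain by discretizing visited states into cells and estimating cell-to-cell transition probabilities from simulated rollouts. The helpers step_fn and discretize are hypothetical; the size of the resulting chain is the kind of quantity a shaped reward could be encouraged to shrink.

import numpy as np

# Minimal sketch (hypothetical helpers, not the thesis code): approximate a
# closed-loop system as a finite Markov chain by discretizing visited states
# into mesh cells and counting cell-to-cell transitions over rollouts.
def build_markov_chain(step_fn, discretize, initial_states, n_steps=200):
    transitions = {}                              # (cell_i, cell_j) -> count
    for x in initial_states:
        cell = discretize(x)
        for _ in range(n_steps):
            x = step_fn(x)                        # one closed-loop step under the policy
            next_cell = discretize(x)
            transitions[(cell, next_cell)] = transitions.get((cell, next_cell), 0) + 1
            cell = next_cell

    cells = sorted({c for pair in transitions for c in pair})
    index = {c: i for i, c in enumerate(cells)}
    P = np.zeros((len(cells), len(cells)))
    for (i, j), count in transitions.items():
        P[index[i], index[j]] = count
    row_sums = P.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0                 # avoid dividing empty rows by zero
    P /= row_sums                                 # row-normalize into transition probabilities
    return P, cells

Standard Markov chain tools (e.g., spectral analysis of the transition matrix) can then be applied to such a representation to reason about stability and robustness.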

Bio

Sean Gillen obtained a bachelor's degree in Electrical Engineering from the University of Maryland, College Park in 2017. The same year, he started his PhD work at UCSB in Professor Katie Byl's lab, conducting research into reinforcement learning for the control of underactuated robotic systems. In his free time he enjoys hikes, bikes, games, and books.

Hosted by: Katie Byl

Submitted by: Sean Gillen <sgillen@ucsb.edu>