Kyriakos G. Vamvoudakis - Research Projects

Our research draws on control theory, game theory, and computational intelligence. Our recent interests lie in the design of robust and secure multi-agent networked systems, such as smart grids and unmanned aerial and ground vehicles. Below is a brief description of some recent research projects.

Model-Free Optimization-Based Systems for Heterogeneous Teams of Humans and Manned/Unmanned Vehicles

Collaborators: J. P. Hespanha (UCSB)

Large, complex networks that model the interactions between humans and manned/unmanned vehicles require exhaustive modeling, rely on a specific network structure and on offline computations, and are fragile to intentional attacks and the purposeful removal of important nodes. Robustness to uncertainties, random attacks, and random failures is therefore an essential requirement. There is a need to draw inspiration from recent neurophysiological studies of the perception mechanisms of the human brain and the processing pathways of the visual cortex.

Inspired by the prefrontal cortex and the basal ganglia of the human brain, we combine interdisciplinary ideas from computational intelligence, game theory, control theory, and information theory to develop new self-configuring algorithms for decision and control that operate without a system model, in the presence of adversarial components, and under possible measurement-corruption and jamming attacks on the network. For more details see our published work.

Optimal Event-Triggered Embedded Control over Networks

Collaborators: J. P. Hespanha (UCSB)

Congestion on shared networks and energy-saving objectives demand that every transmission of information over the network be rigorously justified: the controller must decide when it is worth transmitting. Event-triggered control is a recently developed framework that can have a significant impact in applications with limited resources and limited controller bandwidth, and it offers a new point of view with respect to conventional time-driven (periodic) strategies.

Our work develops optimization-based algorithms that reduce the number of controller updates by sampling the state only when an event is triggered, maintaining stability while also guaranteeing optimal performance. The event-triggered control algorithms consist of a feedback controller that is updated from the sampled state and an event-triggering mechanism that determines when the controller output is transmitted to a zero-order-hold (ZOH) actuator, as sketched below. Our proposed algorithms are analyzed using impulsive-systems techniques. For more details see our published work.
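As an illustration of the triggering idea only (not the specific algorithms from the publications), the following sketch simulates relative-threshold event-triggered regulation of a hypothetical double-integrator plant: the state is sampled, and a new control value is sent to the ZOH actuator, only when the gap between the last sample and the current state becomes large relative to the current state. The plant matrices, the gain K, and the threshold sigma are illustrative assumptions.

```python
import numpy as np

# Minimal event-triggered regulation sketch; the double-integrator plant, the
# stabilizing gain K, and the threshold sigma are illustrative assumptions.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 2.0]])       # places the closed-loop poles at -1, -1

dt, T = 1e-3, 10.0
sigma = 0.2                      # relative event threshold (design parameter)
x = np.array([1.0, 0.0])         # current plant state
x_s = x.copy()                   # last sampled state
u = -K @ x_s                     # control held by the ZOH actuator between events
events = 0

for _ in range(int(T / dt)):
    # Trigger a transmission only when the sampling gap is large relative to the state.
    if np.linalg.norm(x_s - x) > sigma * np.linalg.norm(x):
        x_s = x.copy()
        u = -K @ x_s             # new controller output sent to the ZOH actuator
        events += 1
    x = x + dt * (A @ x + B @ u) # Euler step of the plant dynamics

print(f"{events} events vs. {int(T / dt)} periodic updates")
```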

Control in Renewable Energy and Smart Grid

Collaborators: J. P. Hespanha (UCSB)

There is a need for computationally intelligent controllers that allow the smart grid to self-heal, resist attacks, dynamically optimize its operation, and improve power quality and efficiency. Optimal control offers significant scope for savings by optimizing the behavior of a power system.

Our work develops optimization-based control algorithms that guarantee optimal performance of voltage-source microinverters without any phasor-domain analysis or pulse-width modulation. Moreover, the proposed framework does not require plant parameter estimation; instead, measured plant information is used to tune the controller parameters directly online, as illustrated by the sketch below. For more details see our published work.
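The publications should be consulted for the actual microinverter controllers; as a loose illustration of tuning controller gains directly online without ever estimating plant parameters, the sketch below implements a standard first-order model-reference adaptive controller. The plant, reference model, and adaptation gain are hypothetical, and only the sign of the input gain b is assumed known to the controller.

```python
import numpy as np

# Hypothetical first-order plant x_dot = a*x + b*u with a, b unknown to the
# controller (only sign(b) is assumed known); a_true, b_true are used solely
# to simulate the plant, never inside the adaptation laws.
a_true, b_true = 1.0, 3.0
a_m, b_m = -4.0, 4.0             # stable reference model x_m_dot = a_m*x_m + b_m*r
gamma = 2.0                      # adaptation gain
dt, T = 1e-3, 40.0

x = x_m = 0.0
kx = kr = 0.0                    # controller gains tuned directly online
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave reference (persistently exciting)
    u = kx * x + kr * r                    # direct adaptive control law
    e = x - x_m                            # tracking error w.r.t. the reference model
    # Lyapunov-based gain updates: no estimate of a or b is ever formed.
    kx += dt * (-gamma * np.sign(b_true) * e * x)
    kr += dt * (-gamma * np.sign(b_true) * e * r)
    x += dt * (a_true * x + b_true * u)    # plant (simulation only)
    x_m += dt * (a_m * x_m + b_m * r)      # reference model

# The learned gains approach the ideal values (a_m - a)/b and b_m/b as excitation continues.
print(f"learned kx={kx:.2f}, kr={kr:.2f}; ideal ({(a_m - a_true)/b_true:.2f}, {b_m/b_true:.2f})")
```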

Network Security and Multi-Agent Optimization

Collaborators: J. P. Hespanha (UCSB), R. A. Kemmerer (UCSB), G. Vigna (UCSB), T. Hollerer (UCSB), B. Sinopoli (CMU), J. Shamma (GaTech)

Network security is a multidisciplinary field that involves optimization, control theory, game theory, and computer security. Embedded sensors, computation, and communication have enabled the development of sophisticated sensing devices for a wide range of cyber-physical applications, including safety monitoring, health care, surveillance, traffic monitoring, and military applications. Machine learning is an attractive approach for achieving near-optimal behavior when classical optimization techniques are infeasible.

We propose resilient architectures that guarantee desired behavior and protect systems from cyber attacks. We have considered Byzantine faults, persistent attacks on networked teams (measurement corruption and jamming), and attacks on cyber missions. Under such circumstances, the defender must adapt its control strategy to the effects induced by the attackers; an illustrative resilient-consensus sketch follows. For more details see our published work.
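As one concrete example of resilience to Byzantine behavior (a standard trimmed-mean, W-MSR-style update, not necessarily the architectures developed in this project), the sketch below runs a consensus iteration in which each normal node discards the most extreme neighbor values before averaging, assuming at most F Byzantine neighbors. The graph, initial values, and attack are illustrative.

```python
import numpy as np

# W-MSR-style resilient consensus sketch on a small complete graph.
# Node 3 is Byzantine and keeps injecting a biased value; F is the number
# of adversarial neighbors each normal node is designed to tolerate.
F = 1
neighbors = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3]}   # normal nodes only
x = np.array([1.0, 2.0, 3.0, 10.0])

def wmsr_step(x, i):
    """One resilient update for normal node i: trim extreme neighbor values, then average."""
    vals = sorted(x[j] for j in neighbors[i])
    smaller = [v for v in vals if v < x[i]]
    larger = [v for v in vals if v > x[i]]
    equal = [v for v in vals if v == x[i]]
    keep = (smaller[F:] if len(smaller) > F else [])   # drop the F smallest values below x[i]
    keep += equal
    keep += (larger[:-F] if len(larger) > F else [])   # drop the F largest values above x[i]
    keep.append(x[i])
    return float(np.mean(keep))

for _ in range(50):
    new = x.copy()
    for i in neighbors:          # only the normal nodes follow the protocol
        new[i] = wmsr_step(x, i)
    new[3] = 10.0                # Byzantine node ignores the protocol
    x = new

print(x[:3])                     # normal nodes agree on a value inside the initial range [1, 3]
```

The trimming step guarantees that each normal node's value stays within the range of the initial normal values, regardless of what the Byzantine node transmits.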

Optimal Adaptive Feedback Control and Learning in Games

Collaborators: F. L. Lewis (U. Texas), W. Dixon (U. Florida), G. R. Hudas (US Army TARDEC), R. Babuska (TU Delft), D. Vrabie (UTRC), S. Bhasin (IITD)

Adaptive control and optimal control represent different philosophies for designing feedback control systems, both developed within the control systems community. Optimal controllers minimize user-prescribed performance functions and are normally designed offline by solving Hamilton-Jacobi (HJ) design equations, for example the Riccati equation, using complete knowledge of the system's dynamical model; a standard offline design is sketched below. However, it is often difficult to determine an accurate dynamical model of a practical system.
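For reference, the conventional offline design mentioned above looks as follows: with the model (A, B) fully known, the algebraic Riccati equation is solved once and the resulting gain is fixed. The double-integrator model and the weights below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Conventional offline LQR design: requires complete knowledge of (A, B).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                      # state weighting in the quadratic cost
R = np.array([[1.0]])              # control weighting

# Solve the continuous-time algebraic Riccati equation A'P + PA - P B R^{-1} B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # optimal state-feedback gain, u = -Kx

print("P =", P)
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```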

We propose new techniques based on approximate dynamic programming that yield adaptive control systems with novel structures, able to learn the solutions of optimal control problems and of Nash/Stackelberg games in real time by observing data along the system trajectories. Lyapunov-based stability proofs guarantee boundedness of the closed-loop signals; a simple data-driven sketch is given below. For more details see our published work.
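As a minimal sketch of the data-driven idea (not the algorithms from the publications), the code below runs Q-learning-style policy iteration for a discrete-time linear-quadratic problem: the Q-function of the current policy is fit by least squares from measured (state, input, cost, next state) tuples, and the policy is then improved from the fitted Q-function, without using the plant matrices inside the learning equations. The plant, initial gain, and sample sizes are assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Illustrative discrete-time double integrator (sampling period 0.1 s); the plant
# matrices are used only to generate data, never inside the learning equations.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

K = np.array([[1.0, 2.0]])          # initial stabilizing policy u = -Kx

def q_features(x, u):
    z = np.concatenate([x, u])
    return np.kron(z, z)            # features for Q(x, u) = [x; u]' H [x; u]

for it in range(10):                # policy iteration on measured data
    Phi, targets = [], []
    for _ in range(60):             # collect (x, u, cost, x_next) tuples
        x = rng.uniform(-2, 2, size=2)
        u = rng.uniform(-2, 2, size=1)           # exploratory input
        x_next = A @ x + B @ u
        r = x @ Q @ x + u @ R @ u                # measured stage cost
        u_next = -K @ x_next                     # action the current policy would take
        Phi.append(q_features(x, u) - q_features(x_next, u_next))
        targets.append(r)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)
    H = theta.reshape(3, 3)
    H = 0.5 * (H + H.T)                          # symmetric Q-function matrix
    Hux, Huu = H[2:, :2], H[2:, 2:]
    K = np.linalg.solve(Huu, Hux)                # policy improvement

# Compare with the model-based optimal gain from the discrete Riccati equation.
P = solve_discrete_are(A, B, Q, R)
K_star = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("learned K:", K)
print("optimal K:", K_star)
```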