Publication Date

Fall 2023

Degree Type

Thesis

Degree Name

Master of Science (MS)

Department

Computer Engineering

Advisor

Stas Tiomkin; Gautam Kumar; Kah Chun Lau

Abstract

The design of appropriate control laws for stabilizing dynamical systems can require substantial domain knowledge. Modern AI methodologies, such as Reinforcement Learning, are often used to reduce the need for such knowledge. However, these methods can be slow and typically depend on at least a partially hand-designed reward structure, and thus on human input, to be effective. Here, we propose an alternative route to constructing rewards that requires only minimal domain knowledge, relying essentially on the structure of the dynamical system itself. Specifically, we use truncated Lyapunov exponents as rewards to compute a stabilizing controller from samples; the resulting controller directs the system towards maximally sensitive states. The approach requires nothing beyond the system dynamics and a single parameter, the truncation horizon, to produce effective stabilization behaviour.
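
As a rough illustration of the idea (not the thesis implementation), the sketch below estimates a truncated (finite-horizon) Lyapunov exponent from sampled rollouts using a Benettin-style two-trajectory renormalization, and returns it as a scalar reward. The function names, the `step(x, u)` dynamics interface, and the toy pendulum dynamics are all illustrative assumptions.

    import numpy as np

    def truncated_lyapunov_reward(step, x0, controls, horizon, eps=1e-6):
        """Estimate a finite-horizon Lyapunov exponent at x0 as a reward.

        Two trajectories are rolled out from x0 and a nearby perturbed
        state; their average exponential divergence rate over `horizon`
        steps is returned, so maximizing it steers the controller
        towards maximally sensitive states.
        """
        rng = np.random.default_rng(0)
        x_a = np.asarray(x0, dtype=float)
        x_b = x_a + eps * rng.standard_normal(x_a.shape) / np.sqrt(x_a.size)
        log_growth = 0.0
        for t in range(horizon):
            u = controls[t]
            x_a = step(x_a, u)
            x_b = step(x_b, u)
            d = max(np.linalg.norm(x_b - x_a), 1e-30)
            log_growth += np.log(d / eps)
            # Renormalize the perturbation to keep the estimate local.
            x_b = x_a + (x_b - x_a) * (eps / d)
        return log_growth / horizon

    # Illustrative usage with an assumed damped-pendulum model.
    def pendulum_step(x, u, dt=0.05):
        theta, omega = x
        omega = omega + dt * (-np.sin(theta) - 0.1 * omega + u)
        theta = theta + dt * omega
        return np.array([theta, omega])

    r = truncated_lyapunov_reward(pendulum_step, [0.1, 0.0],
                                  controls=np.zeros(50), horizon=50)
    print(f"truncated Lyapunov reward: {r:.4f}")

Under these assumptions, the only tunable quantity is the truncation horizon; the reward itself is derived entirely from the sampled dynamics.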

Available for download on Wednesday, February 26, 2025
