Utilizing Prior Solutions for Reward Shaping and Composition in Entropy-Regularized Reinforcement Learning
Publication Date
6-27-2023
Document Type
Conference Proceeding
Publication Title
Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Volume
37
DOI
10.1609/aaai.v37i6.25817
First Page
6658
Last Page
6665
Abstract
In reinforcement learning (RL), the ability to utilize prior knowledge from previously solved tasks can allow agents to quickly solve new problems. In some cases, these new problems may be approximately solved by composing the solutions of previously solved primitive tasks (task composition). Alternatively, prior knowledge can be used to adjust the reward function for a new problem in a way that leaves the optimal policy unchanged but enables quicker learning (reward shaping). In this work, we develop a general framework for reward shaping and task composition in entropy-regularized RL. To do so, we derive an exact relation connecting the optimal soft value functions for two entropy-regularized RL problems with different reward functions and dynamics. We show how the derived relation leads to a general result for reward shaping in entropy-regularized RL. We then generalize this approach to derive an exact relation connecting optimal value functions for the composition of multiple tasks in entropy-regularized RL. We validate these theoretical contributions with experiments showing that reward shaping and task composition lead to faster learning in various settings.
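The abstract does not reproduce the paper's derivation, but the flavor of policy-invariant reward shaping in the entropy-regularized setting can be illustrated with a small self-contained sketch. The snippet below (an assumption-laden toy, not the paper's method) runs soft value iteration on a random tabular MDP and checks that a potential-based shaping term, r'(s,a) = r(s,a) + γ E[Φ(s')] − Φ(s), leaves the soft-optimal (Boltzmann) policy unchanged, since it shifts Q(s,a) by the state-only term −Φ(s):

```python
import numpy as np

def soft_value_iteration(r, P, gamma=0.9, beta=1.0, iters=500):
    """Entropy-regularized (soft) value iteration on a tabular MDP.

    r: (S, A) expected rewards; P: (S, A, S) transition kernel.
    Returns the soft-optimal Q and the Boltzmann policy pi(a|s) ∝ exp(Q/beta).
    """
    S, A = r.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = r + gamma * (P @ V)                           # Q(s,a) = r(s,a) + γ E[V(s')]
        V = beta * np.log(np.exp(Q / beta).sum(axis=1))   # soft (log-sum-exp) backup
    pi = np.exp(Q / beta)
    pi /= pi.sum(axis=1, keepdims=True)
    return Q, pi

rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)   # normalize rows into a valid kernel
r = rng.random((S, A))

# Potential-based shaping with an arbitrary potential Φ over states.
phi = rng.random(S)
r_shaped = r + gamma * (P @ phi) - phi[:, None]

_, pi = soft_value_iteration(r, P, gamma)
_, pi_shaped = soft_value_iteration(r_shaped, P, gamma)
print(np.allclose(pi, pi_shaped))   # → True: shaping preserves the soft-optimal policy
```

The invariance holds because the shaped fixed point satisfies Q'(s,a) = Q(s,a) − Φ(s), and subtracting a state-dependent constant cancels in the softmax over actions. The paper's exact relation is more general (covering changed dynamics and multi-task composition); this sketch only demonstrates the simplest policy-invariance case.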
Funding Number
DMS-1854350
Funding Sponsor
National Science Foundation
Department
Computer Engineering
Recommended Citation
Jacob Adamczyk, Argenis Arriojas, Stas Tiomkin, and Rahul V. Kulkarni. "Utilizing Prior Solutions for Reward Shaping and Composition in Entropy-Regularized Reinforcement Learning" Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (2023): 6658-6665. https://doi.org/10.1609/aaai.v37i6.25817