Bayesian Inference Approach for Entropy Regularized Reinforcement Learning with Stochastic Dynamics

Publication Date

1-1-2023

Document Type

Conference Proceeding

Publication Title

Proceedings of Machine Learning Research

Volume

216

First Page

99

Last Page

109

Abstract

We develop a novel approach to determining the optimal policy in entropy-regularized reinforcement learning (RL) with stochastic dynamics. For deterministic dynamics, the optimal policy can be derived using Bayesian inference in the control-as-inference framework; for stochastic dynamics, however, the direct use of this approach leads to risk-taking, optimistic policies. To address this issue, current approaches in entropy-regularized RL involve a constrained optimization procedure that fixes the system dynamics to the original dynamics; this constraint, however, is not consistent with the unconstrained Bayesian inference framework. In this work we resolve the inconsistency by developing an exact mapping from the constrained optimization problem in entropy-regularized RL to a different optimization problem that can be solved using the unconstrained Bayesian inference approach. We show that the optimal policies of the two problems coincide, so our results yield the exact optimal policy for entropy-regularized RL with stochastic dynamics through Bayesian inference.
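As a rough, self-contained illustration of the issue the abstract describes (not the paper's construction), the sketch below runs soft value iteration on a small random tabular MDP and contrasts the naive control-as-inference backup under stochastic dynamics, log E[exp V(s')], which is the risk-taking, optimistic variant, with the standard entropy-regularized backup E[V(s')]. All MDP sizes, parameter values, and names here are assumptions made for the example.

```python
import numpy as np

# Illustrative tabular MDP; every quantity below is an assumption for this
# sketch, not taken from the paper.
rng = np.random.default_rng(0)
S, A = 4, 2                                   # number of states / actions
P = rng.dirichlet(np.ones(S), size=(S, A))    # P[s, a] = distribution over s'
R = rng.normal(size=(S, A))                   # reward r(s, a)
gamma, alpha = 0.9, 1.0                       # discount, entropy temperature

def soft_value_iteration(optimistic, iters=500):
    V = np.zeros(S)
    for _ in range(iters):
        if optimistic:
            # Naive control-as-inference backup with stochastic dynamics:
            # log E_{s'}[exp V(s')] -- the risk-taking, optimistic variant.
            Q = R + gamma * alpha * np.log(P @ np.exp(V / alpha))
        else:
            # Standard entropy-regularized backup: E_{s'}[V(s')].
            Q = R + gamma * (P @ V)
        # Soft maximum over actions: V(s) = alpha * log sum_a exp(Q(s, a) / alpha).
        V = alpha * np.log(np.exp(Q / alpha).sum(axis=1))
    pi = np.exp((Q - V[:, None]) / alpha)      # softmax policy pi(a | s)
    return V, pi

for flag in (True, False):
    V, pi = soft_value_iteration(flag)
    print("optimistic" if flag else "standard", "policy:\n", np.round(pi, 3))
```

The two backups generally produce different policies: the log-expectation version rewards transitions with high-variance outcomes (it effectively assumes the agent can also choose the noise), which is the inconsistency the paper's exact mapping is designed to remove.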

Funding Number

2246221

Funding Sponsor

National Science Foundation

Department

Computer Engineering
