Real-Time Reinforcement Learning of Constrained Markov Decision Processes with Weak Derivatives, by Vikram Krishnamurthy (Author)
Australian National University
20-07-2023
We present on-line policy gradient algorithms for computing the locally optimal policy of a constrained, average-cost, finite-state Markov Decision Process. The stochastic approximation algorithms require estimation of the gradient of the cost function with respect to the parameter that characterizes the randomized policy. We propose a spherical coordinate parametrization and present a novel simulation-based gradient estimation scheme involving weak derivatives (measure-valued differentiation). Such methods have substantially reduced variance compared to the widely used score function method. Similar to neuro-dynamic programming algorithms (e.g. Q-learning or Temporal Difference methods), the algorithms proposed in this paper are simulation-based and do not require explicit knowledge of the underlying parameters such as transition probabilities. However, unlike neuro-dynamic programming methods, the algorithms proposed here can handle constraints and time-varying parameters. Numerical examples are given to illustrate the performance of the algorithms. This paper was originally written in 2004. One reason we are putting this on arXiv now is that the score function gradient estimator continues to be used in the online reinforcement learning literature even though its variance grows as $O(n)$ given $n$ data points (for a Markov process). In comparison, the weak derivative estimator has significantly smaller variance of $O(1)$, as reported in this paper (and elsewhere).
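To make the score-function versus weak-derivative comparison concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) that estimates the gradient of an expected cost over a three-point action distribution parametrized in spherical coordinates, using both a score-function (likelihood-ratio) estimator and a weak-derivative (measure-valued differentiation) estimator; the names probs, dprobs_dphi1 and cost, the specific distribution, and the particular positive/negative decomposition are illustrative assumptions only.

# Hypothetical sketch: score-function vs. weak-derivative estimates of
# d/dphi1 E_theta[c(X)] for a finite distribution in spherical coordinates.
import numpy as np

rng = np.random.default_rng(0)

def probs(theta):
    # Spherical-coordinate parametrization: probabilities are automatically
    # nonnegative and sum to one, so no projection step is needed.
    phi1, phi2 = theta
    return np.array([np.cos(phi1)**2,
                     np.sin(phi1)**2 * np.cos(phi2)**2,
                     np.sin(phi1)**2 * np.sin(phi2)**2])

def dprobs_dphi1(theta):
    # Derivative of the probability vector with respect to phi1 (sums to zero).
    phi1, phi2 = theta
    s, c = np.sin(phi1), np.cos(phi1)
    return np.array([-2 * s * c,
                     2 * s * c * np.cos(phi2)**2,
                     2 * s * c * np.sin(phi2)**2])

cost = np.array([1.0, 5.0, 2.0])   # illustrative per-action cost c(i)
theta = np.array([0.7, 1.1])
p, dp = probs(theta), dprobs_dphi1(theta)
N = 100_000

# Score-function (likelihood-ratio) estimator: c(X) * d log p_theta(X) / d phi1.
x = rng.choice(3, size=N, p=p)
score = cost[x] * dp[x] / p[x]

# Weak-derivative estimator: write dp = g * (p_plus - p_minus), simulate from
# the normalized positive and negative parts, and difference the costs.
g = dp[dp > 0].sum()
p_plus = np.where(dp > 0, dp, 0.0) / g
p_minus = np.where(dp < 0, -dp, 0.0) / g
x_plus = rng.choice(3, size=N, p=p_plus)
x_minus = rng.choice(3, size=N, p=p_minus)
weak = g * (cost[x_plus] - cost[x_minus])

print("true gradient       :", float(cost @ dp))
print("score-function est. :", score.mean(), "var:", score.var())
print("weak-derivative est.:", weak.mean(), "var:", weak.var())

Both estimators are unbiased for the same gradient, but in this toy setting the weak-derivative estimate typically shows a much smaller sample variance, which is the qualitative point the abstract makes about the $O(1)$ versus $O(n)$ variance growth.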
http://arxiv.org/abs/1110.4946