Extra Material
Proof for Using Q-Function in Policy Gradient Formula
In this section, we will show that

$$\nabla_{\theta} J(\pi_{\theta}) = \underset{\tau \sim \pi_{\theta}}{\mathrm{E}}\left[ \sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t) \, Q^{\pi_{\theta}}(s_t, a_t) \right]$$

for the finite-horizon undiscounted return setting. (An analogous result holds in the infinite-horizon discounted case using basically the same proof.)
The proof of this claim depends on the law of iterated expectations. First, let's rewrite the expression for the policy gradient, starting from the reward-to-go form (using the notation $\hat{R}_t \doteq \sum_{t'=t}^{T} R(s_{t'}, a_{t'}, s_{t'+1})$ to help shorten things):

$$\nabla_{\theta} J(\pi_{\theta}) = \underset{\tau \sim \pi_{\theta}}{\mathrm{E}}\left[ \sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t) \, \hat{R}_t \right]$$
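Recall that the law of iterated expectations is the standard identity

$$\mathrm{E}\left[ X \right] = \mathrm{E}_{Y}\left[ \, \mathrm{E}\left[ X \mid Y \right] \, \right]$$

for random variables $X$ and $Y$ (with suitable integrability); here, the conditioning variable will be the early part of the trajectory, defined next.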
Define $\tau_{:t} = (s_0, a_0, ..., s_t, a_t)$ as the trajectory up to time $t$, and $\tau_{t:}$ as the remainder of the trajectory after that. By the law of iterated expectations, we can break up the preceding expression into:

$$\nabla_{\theta} J(\pi_{\theta}) = \underset{\tau_{:t} \sim \pi_{\theta}}{\mathrm{E}}\left[ \sum_{t=0}^{T} \underset{\tau_{t:} \sim \pi_{\theta}}{\mathrm{E}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t) \, \hat{R}_t \,\Big|\, \tau_{:t} \right] \right]$$
The grad-log-prob is constant with respect to the inner expectation (because it depends on $s_t$ and $a_t$, which the inner expectation conditions on as fixed in $\tau_{:t}$), so it can be pulled out, leaving:

$$\nabla_{\theta} J(\pi_{\theta}) = \underset{\tau_{:t} \sim \pi_{\theta}}{\mathrm{E}}\left[ \sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t) \underset{\tau_{t:} \sim \pi_{\theta}}{\mathrm{E}}\left[ \hat{R}_t \,\Big|\, \tau_{:t} \right] \right]$$
In Markov Decision Processes, the future only depends on the most recent state and action: formally, $P(\tau_{t:} \mid \tau_{:t}) = P(\tau_{t:} \mid s_t, a_t)$. As a result, the inner expectation, which averages over the future conditioned on the entirety of the past (everything up to time $t$), is equal to the same expectation conditioned only on the last timestep (just $(s_t, a_t)$):

$$\underset{\tau_{t:} \sim \pi_{\theta}}{\mathrm{E}}\left[ \hat{R}_t \,\Big|\, \tau_{:t} \right] = \underset{\tau_{t:} \sim \pi_{\theta}}{\mathrm{E}}\left[ \hat{R}_t \,\Big|\, s_t, a_t \right]$$
which is the definition of $Q^{\pi_{\theta}}(s_t, a_t)$: the expected return, starting from state $s_t$ and action $a_t$, when acting on-policy for the rest of the trajectory.
The result follows immediately.
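As an optional sanity check, the identity can also be verified numerically on a toy problem: enumerate every trajectory of a small finite-horizon MDP exactly, and compare the reward-to-go form of the gradient with the Q-weighted form. The sketch below is purely illustrative (a random two-state, two-action MDP with horizon 3 and a tabular softmax policy); all names in it are made up for this example.

```python
# Illustrative sanity check on a toy MDP (all names here, e.g. n_states, P, R,
# are made up for this example). Timesteps are indexed t = 0, ..., T-1.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, T = 2, 2, 3

# Random dynamics P[s, a, s'], rewards R[s, a, s'], and a fixed start state.
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions, n_states))
s0 = 0

# Tabular softmax policy: theta[s, a] are logits.
theta = rng.normal(size=(n_states, n_actions))

def pi(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def grad_log_pi(s, a):
    """Gradient of log pi(a|s) with respect to theta for a tabular softmax policy."""
    g = np.zeros_like(theta)
    g[s] = -pi(s)
    g[s, a] += 1.0
    return g

# Time-dependent Q^pi(s, a) via backward recursion (finite horizon, undiscounted).
Q = np.zeros((T, n_states, n_actions))
for t in reversed(range(T)):
    for s in range(n_states):
        for a in range(n_actions):
            v_next = 0.0
            if t + 1 < T:
                v_next = P[s, a] @ np.array(
                    [pi(s2) @ Q[t + 1, s2] for s2 in range(n_states)]
                )
            Q[t, s, a] = P[s, a] @ R[s, a] + v_next

# Exact expectations by enumerating every (a_0, s_1, ..., a_{T-1}, s_T) trajectory.
g_rtg = np.zeros_like(theta)   # grad-log-prob weighted by reward-to-go
g_q = np.zeros_like(theta)     # grad-log-prob weighted by Q^pi(s_t, a_t)
for actions in itertools.product(range(n_actions), repeat=T):
    for next_states in itertools.product(range(n_states), repeat=T):
        states = [s0] + list(next_states)
        prob, rewards = 1.0, []
        for t in range(T):
            s, a, s2 = states[t], actions[t], states[t + 1]
            prob *= pi(s)[a] * P[s, a, s2]
            rewards.append(R[s, a, s2])
        for t in range(T):
            s, a = states[t], actions[t]
            rtg = sum(rewards[t:])                     # hat{R}_t for this trajectory
            g_rtg += prob * grad_log_pi(s, a) * rtg
            g_q += prob * grad_log_pi(s, a) * Q[t, s, a]

print(np.allclose(g_rtg, g_q))  # True: the two gradient expressions agree
```

Because the expectations are computed by exact enumeration rather than sampling, the two gradients agree up to floating-point error rather than merely in the limit of many samples.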