Extra Material
Proof for Don't Let the Past Distract You
In this subsection, we will prove that actions should not be reinforced for rewards obtained in the past.
Expand out $R(\tau)$ in the expression for the simplest policy gradient to obtain:

$$\nabla_{\theta} J(\pi_{\theta}) = \mathop{\mathbb{E}}_{\tau \sim \pi_{\theta}}\left[ \sum_{t=0}^{T} \sum_{t'=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)\, R(s_{t'}, a_{t'}, s_{t'+1}) \right],$$
and consider the term

$$\mathop{\mathbb{E}}_{\tau \sim \pi_{\theta}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)\, R(s_{t'}, a_{t'}, s_{t'+1}) \right].$$
We will show that for the case of $t' < t$ (the reward comes before the action being reinforced), this term is zero. This is a complete proof of the original claim, because after dropping the terms with $t' < t$ from the expression, we are left with the reward-to-go form of the policy gradient, as desired:

$$\nabla_{\theta} J(\pi_{\theta}) = \mathop{\mathbb{E}}_{\tau \sim \pi_{\theta}}\left[ \sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t) \sum_{t'=t}^{T} R(s_{t'}, a_{t'}, s_{t'+1}) \right].$$
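To see concretely what changes between the two forms, here is a minimal Python sketch (an illustration, not part of the proof) that computes the weight multiplying each $\nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)$ term for a single trajectory, first with the full return and then with the reward-to-go:

```python
import numpy as np

def full_return_weights(rews):
    # Every grad-log-prob term is weighted by the same total return R(tau).
    return np.full(len(rews), np.sum(rews))

def reward_to_go_weights(rews):
    # The grad-log-prob term at time t is weighted only by rewards from t onward.
    n = len(rews)
    rtgs = np.zeros(n)
    running = 0.0
    for t in reversed(range(n)):
        running += rews[t]
        rtgs[t] = running
    return rtgs

rews = [1.0, 0.0, 2.0, 3.0]
print(full_return_weights(rews))   # [6. 6. 6. 6.]
print(reward_to_go_weights(rews))  # [6. 5. 5. 3.]
```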
1. Using the Marginal Distribution. To proceed, we have to break down the expectation in $\mathop{\mathbb{E}}_{\tau \sim \pi_{\theta}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)\, R(s_{t'}, a_{t'}, s_{t'+1}) \right]$. It's an expectation over trajectories, but the expression inside the expectation only deals with a few states and actions: $s_t$, $a_t$, $s_{t'}$, $a_{t'}$, and $s_{t'+1}$. So in computing the expectation, we only need to worry about the marginal distribution over these random variables.
We derive:

$$\mathop{\mathbb{E}}_{\tau \sim \pi_{\theta}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)\, R(s_{t'}, a_{t'}, s_{t'+1}) \right] = \mathop{\mathbb{E}}_{s_t, a_t, s_{t'}, a_{t'}, s_{t'+1} \sim \pi_{\theta}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)\, R(s_{t'}, a_{t'}, s_{t'+1}) \right].$$
2. Probability Chain Rule. Joint distributions can be calculated in terms of conditional and marginal probabilities via the chain rule of probability: $P(A, B) = P(B)\, P(A \,|\, B)$. Here, we use this rule to compute

$$P(s_t, a_t, s_{t'}, a_{t'}, s_{t'+1}) = P(s_{t'}, a_{t'}, s_{t'+1})\, P(s_t, a_t \,|\, s_{t'}, a_{t'}, s_{t'+1}).$$
3. Separating Expectations Over Multiple Random Variables. If we have an expectation over two random variables $A$ and $B$, we can split it into an inner and outer expectation, where the inner expectation treats the variable from the outer expectation as a constant. Our ability to make this split relies on the probability chain rule. Mathematically:

$$\mathop{\mathbb{E}}_{A,B}\left[ f(A,B) \right] = \int_B \int_A P(A,B)\, f(A,B)\, dA\, dB = \int_B P(B) \int_A P(A \,|\, B)\, f(A,B)\, dA\, dB = \mathop{\mathbb{E}}_{B}\left[ \mathop{\mathbb{E}}_{A|B}\left[ f(A,B) \right] \right].$$
An expectation over $s_t, a_t, s_{t'}, a_{t'}, s_{t'+1}$ can thus be expressed by

$$\mathop{\mathbb{E}}_{s_t, a_t, s_{t'}, a_{t'}, s_{t'+1}}\left[ f(s_t, a_t, s_{t'}, a_{t'}, s_{t'+1}) \right] = \mathop{\mathbb{E}}_{s_{t'}, a_{t'}, s_{t'+1}}\left[ \mathop{\mathbb{E}}_{s_t, a_t \,|\, s_{t'}, a_{t'}, s_{t'+1}}\left[ f(s_t, a_t, s_{t'}, a_{t'}, s_{t'+1}) \right] \right].$$
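As a quick sanity check, the split in steps 2 and 3 can be verified exactly on a small discrete example. The joint distribution `P` and the function `f` below are made up purely for illustration:

```python
import numpy as np

# A made-up joint distribution P(A, B) over A in {0, 1, 2} and B in {0, 1}.
# Rows index values of A, columns index values of B.
P = np.array([[0.10, 0.15],
              [0.20, 0.05],
              [0.30, 0.20]])
assert np.isclose(P.sum(), 1.0)

f = lambda a, b: (a + 1) * (2 * b - 1)   # an arbitrary function of (A, B)

# Direct expectation over the joint: E_{A,B}[f(A,B)].
lhs = sum(P[a, b] * f(a, b) for a in range(3) for b in range(2))

# Iterated expectation: E_B[ E_{A|B}[f(A,B)] ], using P(A|B) = P(A,B) / P(B).
P_B = P.sum(axis=0)
rhs = sum(P_B[b] * sum((P[a, b] / P_B[b]) * f(a, b) for a in range(3))
          for b in range(2))

print(np.isclose(lhs, rhs))  # True
```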
4. Constants Can Be Pulled Outside of Expectations. If a term inside an expectation is constant with respect to the variable being expected over, it can be pulled outside of the expectation. To give an example, consider again an expectation over two random variables $A$ and $B$, where this time, $f(A,B) = h(B)\, g(A,B)$. Then, using the result from before:

$$\mathop{\mathbb{E}}_{A,B}\left[ f(A,B) \right] = \mathop{\mathbb{E}}_{B}\left[ \mathop{\mathbb{E}}_{A|B}\left[ h(B)\, g(A,B) \right] \right] = \mathop{\mathbb{E}}_{B}\left[ h(B)\, \mathop{\mathbb{E}}_{A|B}\left[ g(A,B) \right] \right].$$
The function in our expectation decomposes this way, with $R(s_{t'}, a_{t'}, s_{t'+1})$ playing the role of $h$ (it depends only on the outer variables) and $\nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)$ playing the role of $g$, allowing us to write:

$$\mathop{\mathbb{E}}_{\tau \sim \pi_{\theta}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)\, R(s_{t'}, a_{t'}, s_{t'+1}) \right] = \mathop{\mathbb{E}}_{s_{t'}, a_{t'}, s_{t'+1}}\left[ R(s_{t'}, a_{t'}, s_{t'+1})\, \mathop{\mathbb{E}}_{s_t, a_t \,|\, s_{t'}, a_{t'}, s_{t'+1}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t) \right] \right].$$
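Step 4 can be checked the same way. In the sketch below (same made-up distribution as before), `h` depends only on the outer variable, playing the role of $R(s_{t'}, a_{t'}, s_{t'+1})$, and `g` depends on both variables, playing the role of the grad-log-prob term:

```python
import numpy as np

# Same made-up joint distribution as in the previous sketch (rows: A, columns: B).
P = np.array([[0.10, 0.15],
              [0.20, 0.05],
              [0.30, 0.20]])
P_B = P.sum(axis=0)

h = lambda b: 5.0 * b + 1.0   # depends only on the outer variable B
g = lambda a, b: 3 * a - b    # depends on both variables

# E_{A,B}[ h(B) g(A,B) ] computed directly over the joint.
lhs = sum(P[a, b] * h(b) * g(a, b) for a in range(3) for b in range(2))

# E_B[ h(B) * E_{A|B}[ g(A,B) ] ]: h(B) is constant w.r.t. the inner expectation over A.
rhs = sum(P_B[b] * h(b) * sum((P[a, b] / P_B[b]) * g(a, b) for a in range(3))
          for b in range(2))

print(np.isclose(lhs, rhs))  # True
```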
5. Applying the EGLP Lemma. The last step in our proof relies on the EGLP lemma. At this point, we will only worry about the innermost expectation,

$$\mathop{\mathbb{E}}_{s_t, a_t \,|\, s_{t'}, a_{t'}, s_{t'+1}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t) \right].$$
We now have to make a distinction between two cases: $t' < t$, the case where the reward happened before the action, and $t' \geq t$, where it didn't.
Case One: Reward Before Action. If $t' < t$, then the conditional probabilities for actions at time $t$ come from the policy:

$$P(a_t \,|\, s_t, s_{t'}, a_{t'}, s_{t'+1}) = \pi_{\theta}(a_t \,|\, s_t),$$

and the innermost expectation can be broken down further into

$$\mathop{\mathbb{E}}_{s_t, a_t \,|\, s_{t'}, a_{t'}, s_{t'+1}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t) \right] = \mathop{\mathbb{E}}_{s_t \,|\, s_{t'}, a_{t'}, s_{t'+1}}\left[ \mathop{\mathbb{E}}_{a_t \sim \pi_{\theta}(\cdot \,|\, s_t)}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t) \right] \right].$$
The EGLP lemma says that

$$\mathop{\mathbb{E}}_{a_t \sim \pi_{\theta}(\cdot \,|\, s_t)}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t) \right] = 0,$$

allowing us to conclude that for $t' < t$, $\mathop{\mathbb{E}}_{\tau \sim \pi_{\theta}}\left[ \nabla_{\theta} \log \pi_{\theta}(a_t \,|\, s_t)\, R(s_{t'}, a_{t'}, s_{t'+1}) \right] = 0$.
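The EGLP lemma itself is easy to verify numerically when the action space is finite. The sketch below, included only as an illustration, uses a softmax policy whose parameters are the action logits `theta`; for that parameterization, $\nabla_{\theta} \log \pi_{\theta}(a) = \mathbf{1}_a - \pi_{\theta}$, and the probability-weighted sum of these gradients is exactly zero:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

# A made-up softmax policy over 4 discrete actions, parameterized directly by logits theta.
theta = np.array([0.3, -1.2, 0.7, 0.1])
pi = softmax(theta)

def grad_log_pi(a):
    # For a softmax policy, grad_theta log pi(a) = one_hot(a) - pi.
    g = -pi.copy()
    g[a] += 1.0
    return g

# E_{a ~ pi}[ grad_theta log pi(a) ], computed exactly as a probability-weighted sum.
expected_grad = sum(pi[a] * grad_log_pi(a) for a in range(len(theta)))
print(np.allclose(expected_grad, 0.0))  # True: the EGLP lemma holds.
```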
Case Two: Reward After Action. What about the $t' \geq t$ case, though? Why doesn't the same logic apply? In this case, the conditional probabilities for $a_t \,|\, s_t, s_{t'}, a_{t'}, s_{t'+1}$ can't be broken down the same way, because you're conditioning on the future. Think about it like this: let's say that every day, in the morning, you make a choice between going for a jog and going to work early, and you have a 50-50 chance of each option. If you condition on a future where you went to work early, what are the odds that you went for a jog? Clearly, you didn't. But if you're conditioning on the past (before you made the decision), what are the odds that you will later go for a jog? Now it's back to 50-50.
So in the case where $t' \geq t$, the conditional distribution over actions is not $\pi_{\theta}(a_t \,|\, s_t)$, and the EGLP lemma does not apply.
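The jog-versus-work analogy can also be simulated directly. In the made-up sketch below, conditioning on the past leaves the morning choice at 50-50, while conditioning on the future (you turned out to be at work early) collapses it to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Morning choice: 50-50 between jogging and going to work early.
jog = rng.random(n) < 0.5
# "Future" event: you were at work early only if you skipped the jog.
at_work_early = ~jog

# Conditioning on the past (nothing observed yet): P(jog) is 0.5.
print(jog.mean())                 # ~0.5

# Conditioning on the future (you ended up at work early): P(jog | at work early) is 0.
print(jog[at_work_early].mean())  # 0.0
```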