Trust Region Policy Optimization¶
Table of Contents

- Background
- Key Equations
- Exploration vs. Exploitation
- Documentation
  - Saved Model Contents
- References
  - Relevant Papers
  - Why These Papers?
Background¶
(Previously: Background for VPG)
TRPO updates policies by taking the largest step possible to improve performance, while satisfying a special constraint on how close the new and old policies are allowed to be. The constraint is expressed in terms of KL-Divergence, a measure of (something like, but not exactly) distance between probability distributions.
This is different from normal policy gradient, which keeps new and old policies close in parameter space. But even seemingly small differences in parameter space can have very large differences in performance—so a single bad step can collapse the policy performance. This makes it dangerous to use large step sizes with vanilla policy gradients, thus hurting its sample efficiency. TRPO nicely avoids this kind of collapse, and tends to quickly and monotonically improve performance.
Quick Facts¶
- TRPO is an on-policy algorithm.
- TRPO can be used for environments with either discrete or continuous action spaces.
- The Spinning Up implementation of TRPO supports parallelization with MPI.
Key Equations¶
Let $\pi_\theta$ denote a policy with parameters $\theta$. The theoretical TRPO update is:

$$\theta_{k+1} = \arg \max_{\theta} \; \mathcal{L}(\theta_k, \theta) \quad \text{s.t.} \quad \bar{D}_{KL}(\theta \,\|\, \theta_k) \leq \delta,$$

where $\mathcal{L}(\theta_k, \theta)$ is the surrogate advantage, a measure of how policy $\pi_\theta$ performs relative to the old policy $\pi_{\theta_k}$ using data from the old policy:

$$\mathcal{L}(\theta_k, \theta) = \underset{s,a \sim \pi_{\theta_k}}{\mathbb{E}} \left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)} A^{\pi_{\theta_k}}(s,a) \right],$$

and $\bar{D}_{KL}(\theta \,\|\, \theta_k)$ is an average KL-divergence between policies across states visited by the old policy:

$$\bar{D}_{KL}(\theta \,\|\, \theta_k) = \underset{s \sim \pi_{\theta_k}}{\mathbb{E}} \left[ D_{KL}\big(\pi_\theta(\cdot|s) \,\|\, \pi_{\theta_k}(\cdot|s)\big) \right].$$
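To make these quantities concrete, here is a minimal NumPy sketch (function names are illustrative, not the Spinning Up implementation) that estimates the surrogate advantage from a batch collected under the old policy, and the average KL for diagonal Gaussian policies, matching the expressions above:

```python
import numpy as np

def surrogate_advantage(logp_new, logp_old, adv):
    """Monte Carlo estimate of L(theta_k, theta) from a batch of (s, a)
    pairs sampled from the old policy."""
    ratio = np.exp(logp_new - logp_old)   # pi_theta(a|s) / pi_theta_k(a|s)
    return np.mean(ratio * adv)

def mean_kl_diag_gaussian(mu_new, log_std_new, mu_old, log_std_old):
    """Average of KL(pi_theta(.|s) || pi_theta_k(.|s)) over a batch of
    states, for diagonal Gaussian policies (one row per state)."""
    var_new, var_old = np.exp(2 * log_std_new), np.exp(2 * log_std_old)
    kl_per_dim = (log_std_old - log_std_new
                  + (var_new + (mu_new - mu_old) ** 2) / (2 * var_old) - 0.5)
    return np.mean(np.sum(kl_per_dim, axis=-1))
```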
You Should Know
The objective and constraint are both zero when $\theta = \theta_k$. Furthermore, the gradient of the constraint with respect to $\theta$ is zero when $\theta = \theta_k$. Proving these facts requires some subtle command of the relevant math—it’s an exercise worth doing, whenever you feel ready!
The theoretical TRPO update isn’t the easiest to work with, so TRPO makes some approximations to get an answer quickly. We Taylor expand the objective and constraint to leading order around $\theta_k$:

$$\mathcal{L}(\theta_k, \theta) \approx g^T (\theta - \theta_k), \qquad \bar{D}_{KL}(\theta \,\|\, \theta_k) \approx \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k),$$

where $g$ is the gradient of the surrogate advantage and $H$ is the Hessian of the average KL-divergence, both evaluated at $\theta = \theta_k$. This results in an approximate optimization problem,

$$\theta_{k+1} = \arg \max_{\theta} \; g^T (\theta - \theta_k) \quad \text{s.t.} \quad \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k) \leq \delta.$$
You Should Know
By happy coincidence, the gradient $g$ of the surrogate advantage function with respect to $\theta$, evaluated at $\theta = \theta_k$, is exactly equal to the policy gradient, $\nabla_\theta J(\pi_\theta)$! Try proving this, if you feel comfortable diving into the math.
This approximate problem can be analytically solved by the methods of Lagrangian duality [1], yielding the solution:

$$\theta_{k+1} = \theta_k + \sqrt{\frac{2\delta}{g^T H^{-1} g}} \, H^{-1} g.$$
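As a quick sketch of how the duality argument goes (using the same $g$, $H$, and $\delta$ as above): form the Lagrangian $g^T (\theta - \theta_k) - \lambda \left( \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k) - \delta \right)$ with multiplier $\lambda \geq 0$. Setting its gradient with respect to $\theta$ to zero gives $\theta - \theta_k = \frac{1}{\lambda} H^{-1} g$, and since the constraint is tight at the optimum, substituting back yields $\lambda = \sqrt{g^T H^{-1} g / (2\delta)}$, which recovers the update above.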
If we were to stop here, and just use this final result, the algorithm would be exactly calculating the Natural Policy Gradient. A problem is that, due to the approximation errors introduced by the Taylor expansion, this may not satisfy the KL constraint, or actually improve the surrogate advantage. TRPO adds a modification to this update rule: a backtracking line search,

$$\theta_{k+1} = \theta_k + \alpha^j \sqrt{\frac{2\delta}{g^T H^{-1} g}} \, H^{-1} g,$$

where $\alpha \in (0,1)$ is the backtracking coefficient, and $j$ is the smallest nonnegative integer such that $\pi_{\theta_{k+1}}$ satisfies the KL constraint and produces a positive surrogate advantage.
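A minimal NumPy sketch of this line search, assuming flat parameter vectors and a hypothetical `eval_policy` helper that returns the surrogate advantage and mean KL of candidate parameters measured against the old policy:

```python
import numpy as np

def backtracking_update(theta_old, full_step, eval_policy, delta,
                        backtrack_coeff=0.8, backtrack_iters=10):
    """Backtracking line search on top of the natural gradient step.

    theta_old   : flat parameter vector of the current policy
    full_step   : proposed step, sqrt(2*delta / g^T H^-1 g) * H^-1 g
    eval_policy : hypothetical helper, theta -> (surr_adv, mean_kl)
    """
    for j in range(backtrack_iters):
        theta_new = theta_old + backtrack_coeff ** j * full_step
        surr_adv, mean_kl = eval_policy(theta_new)
        # Accept the first candidate that respects the KL constraint
        # and actually improves the surrogate advantage.
        if mean_kl <= delta and surr_adv > 0:
            return theta_new
    return theta_old  # no acceptable candidate: keep the old parameters
```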
Lastly: computing and storing the matrix inverse, $H^{-1}$, is painfully expensive when dealing with neural network policies with thousands or millions of parameters. TRPO sidesteps the issue by using the conjugate gradient algorithm to solve $Hx = g$ for $x = H^{-1} g$, requiring only a function which can compute the matrix-vector product $Hx$ instead of computing and storing the whole matrix $H$ directly. This is not too hard to do: we set up a symbolic operation to calculate

$$Hx = \nabla_\theta \left( \left( \nabla_\theta \bar{D}_{KL}(\theta \,\|\, \theta_k) \right)^T x \right),$$

which gives us the correct output without computing the whole matrix.
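For illustration, here is a minimal NumPy sketch of the conjugate gradient solver driven by a Hessian-vector product function (names are illustrative; in practice the product usually also includes a small damping term, as described under `damping_coeff` below):

```python
import numpy as np

def conjugate_gradient(hvp, g, cg_iters=10, residual_tol=1e-10):
    """Approximately solve H x = g using only Hessian-vector products.

    hvp : function v -> H @ v, e.g. built by differentiating
          (grad of mean KL)^T v as in the equation above
    g   : flat policy gradient vector
    """
    x = np.zeros_like(g)   # initial guess x_0 = 0
    r = g.copy()           # residual r = g - H x
    p = r.copy()           # search direction
    r_dot_r = r @ r
    for _ in range(cg_iters):
        Hp = hvp(p)
        alpha = r_dot_r / (p @ Hp)
        x += alpha * p     # move along the search direction
        r -= alpha * Hp    # update the residual
        new_r_dot_r = r @ r
        if new_r_dot_r < residual_tol:
            break
        p = r + (new_r_dot_r / r_dot_r) * p
        r_dot_r = new_r_dot_r
    return x               # x is approximately H^{-1} g
```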
[1] See Convex Optimization by Boyd and Vandenberghe, especially chapters 2 through 5.
Exploration vs. Exploitation¶
TRPO trains a stochastic policy in an on-policy way. This means that it explores by sampling actions according to the latest version of its stochastic policy. The amount of randomness in action selection depends on both initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards that it has already found. This may cause the policy to get trapped in local optima.
Documentation¶
You Should Know
Spinning Up currently only has a Tensorflow implementation of TRPO.
spinup.trpo_tf1(env_fn, actor_critic=<function mlp_actor_critic>, ac_kwargs={}, seed=0, steps_per_epoch=4000, epochs=50, gamma=0.99, delta=0.01, vf_lr=0.001, train_v_iters=80, damping_coeff=0.1, cg_iters=10, backtrack_iters=10, backtrack_coeff=0.8, lam=0.97, max_ep_len=1000, logger_kwargs={}, save_freq=10, algo='trpo')¶

Trust Region Policy Optimization

(with support for Natural Policy Gradient)
Parameters:

- env_fn – A function which creates a copy of the environment. The environment must satisfy the OpenAI Gym API.
- actor_critic –

  A function which takes in placeholder symbols for state, `x_ph`, and action, `a_ph`, and returns the main outputs from the agent’s Tensorflow computation graph:

  | Symbol | Shape | Description |
  |---|---|---|
  | `pi` | (batch, act_dim) | Samples actions from policy given states. |
  | `logp` | (batch,) | Gives log probability, according to the policy, of taking actions `a_ph` in states `x_ph`. |
  | `logp_pi` | (batch,) | Gives log probability, according to the policy, of the action sampled by `pi`. |
  | `info` | N/A | A dict of any intermediate quantities (from calculating the policy or log probabilities) which are needed for analytically computing KL divergence. (eg sufficient statistics of the distributions) |
  | `info_phs` | N/A | A dict of placeholders for old values of the entries in `info`. |
  | `d_kl` | () | A symbol for computing the mean KL divergence between the current policy (`pi`) and the old policy (as specified by the inputs to `info_phs`) over the batch of states given in `x_ph`. |
  | `v` | (batch,) | Gives the value estimate for states in `x_ph`. (Critical: make sure to flatten this!) |

- ac_kwargs (dict) – Any kwargs appropriate for the actor_critic function you provided to TRPO.
- seed (int) – Seed for random number generators.
- steps_per_epoch (int) – Number of steps of interaction (state-action pairs) for the agent and the environment in each epoch.
- epochs (int) – Number of epochs of interaction (equivalent to number of policy updates) to perform.
- gamma (float) – Discount factor. (Always between 0 and 1.)
- delta (float) – KL-divergence limit for TRPO / NPG update. (Should be small for stability. Values like 0.01, 0.05.)
- vf_lr (float) – Learning rate for value function optimizer.
- train_v_iters (int) – Number of gradient descent steps to take on value function per epoch.
- damping_coeff (float) –

  Artifact for numerical stability, should be smallish. Adjusts Hessian-vector product calculation:

  $$Hv \rightarrow (\alpha I + H)v,$$

  where $\alpha$ is the damping coefficient. Probably don’t play with this hyperparameter.
- cg_iters (int) –
Number of iterations of conjugate gradient to perform. Increasing this will lead to a more accurate approximation to $H^{-1} g$, and possibly slightly-improved performance, but at the cost of slowing things down.
Also probably don’t play with this hyperparameter.
- backtrack_iters (int) – Maximum number of steps allowed in the backtracking line search. Since the line search usually doesn’t backtrack, and usually only steps back once when it does, this hyperparameter doesn’t often matter.
- backtrack_coeff (float) – How far back to step during backtracking line search. (Always between 0 and 1, usually above 0.5.)
- lam (float) – Lambda for GAE-Lambda. (Always between 0 and 1, close to 1.)
- max_ep_len (int) – Maximum length of trajectory / episode / rollout.
- logger_kwargs (dict) – Keyword args for EpochLogger.
- save_freq (int) – How often (in terms of gap between epochs) to save the current policy and value function.
- algo – Either ‘trpo’ or ‘npg’: this code supports both, since they are almost the same.
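A minimal launch sketch, following the general usage pattern shown elsewhere in the Spinning Up docs; the environment name, network size, and output directory below are illustrative choices, not prescriptions:

```python
import gym
from spinup import trpo_tf1

# Environment constructor: any Gym-API environment with a discrete or
# continuous action space should work.
env_fn = lambda: gym.make('CartPole-v1')

trpo_tf1(
    env_fn=env_fn,
    ac_kwargs=dict(hidden_sizes=(64, 64)),   # passed through to mlp_actor_critic
    steps_per_epoch=4000,
    epochs=50,
    delta=0.01,                              # KL-divergence limit
    logger_kwargs=dict(output_dir='data/trpo_cartpole', exp_name='trpo_cartpole'),
)
```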
Saved Model Contents¶
The computation graph saved by the logger includes:
| Key | Value |
|---|---|
| `x` | Tensorflow placeholder for state input. |
| `pi` | Samples an action from the agent, conditioned on states in `x`. |
| `v` | Gives value estimate for states in `x`. |
This saved model can be accessed either by
- running the trained policy with the test_policy.py tool,
- or loading the whole saved graph into a program with restore_tf_graph.
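As a sketch of the second option, the snippet below restores the saved graph and rolls out the policy for one episode. It assumes the pre-0.26 Gym API that the TF1 version of Spinning Up was built against, and the path is illustrative (point it at the `simple_save` directory inside your own output directory):

```python
import gym
import tensorflow as tf
from spinup.utils.logx import restore_tf_graph

# Load the saved graph into a fresh session.
sess = tf.Session()
model = restore_tf_graph(sess, 'data/trpo_cartpole/simple_save')
x_ph, pi = model['x'], model['pi']   # keys from the table above

# Roll out the restored policy for one episode.
env = gym.make('CartPole-v1')
obs, done, ep_ret = env.reset(), False, 0.0
while not done:
    action = sess.run(pi, feed_dict={x_ph: obs.reshape(1, -1)})[0]
    obs, reward, done, _ = env.step(action)
    ep_ret += reward
print('Episode return:', ep_ret)
```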
References¶
Relevant Papers¶
- Trust Region Policy Optimization, Schulman et al. 2015
- High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016
- Approximately Optimal Approximate Reinforcement Learning, Kakade and Langford 2002
Why These Papers?¶
Schulman 2015 is included because it is the original paper describing TRPO. Schulman 2016 is included because our implementation of TRPO makes use of Generalized Advantage Estimation for computing the policy gradient. Kakade and Langford 2002 is included because it contains theoretical results which motivate and deeply connect to the theoretical foundations of TRPO.