Twin Delayed DDPG¶
Background¶
(Previously: Background for DDPG)
While DDPG can achieve great performance sometimes, it is frequently brittle with respect to hyperparameters and other kinds of tuning. A common failure mode for DDPG is that the learned Q-function begins to dramatically overestimate Q-values, which then leads to the policy breaking, because it exploits the errors in the Q-function. Twin Delayed DDPG (TD3) is an algorithm that addresses this issue by introducing three critical tricks:
Trick One: Clipped Double-Q Learning. TD3 learns two Q-functions instead of one (hence “twin”), and uses the smaller of the two Q-values to form the targets in the Bellman error loss functions.
Trick Two: “Delayed” Policy Updates. TD3 updates the policy (and target networks) less frequently than the Q-function. The paper recommends one policy update for every two Q-function updates.
Trick Three: Target Policy Smoothing. TD3 adds noise to the target action, to make it harder for the policy to exploit Q-function errors by smoothing out Q along changes in action.
Together, these three tricks result in substantially improved performance over baseline DDPG.
Quick Facts¶
- TD3 is an off-policy algorithm.
- TD3 can only be used for environments with continuous action spaces.
- The Spinning Up implementation of TD3 does not support parallelization.
Key Equations¶
TD3 concurrently learns two Q-functions, $Q_{\phi_1}$ and $Q_{\phi_2}$, by mean square Bellman error minimization, in almost the same way that DDPG learns its single Q-function. To show exactly how TD3 does this and how it differs from normal DDPG, we'll work from the innermost part of the loss function outwards.
First: target policy smoothing. Actions used to form the Q-learning target are based on the target policy, $\mu_{\theta_{\text{targ}}}$, but with clipped noise added on each dimension of the action. After adding the clipped noise, the target action is then clipped to lie in the valid action range (all valid actions, $a$, satisfy $a_{Low} \leq a \leq a_{High}$). The target actions are thus:

$$a'(s') = \text{clip}\left(\mu_{\theta_{\text{targ}}}(s') + \text{clip}(\epsilon, -c, c), \, a_{Low}, \, a_{High}\right), \qquad \epsilon \sim \mathcal{N}(0, \sigma)$$
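As a rough illustration, here is a minimal PyTorch-style sketch of how the smoothed target action could be computed. The names (ac_targ, o2, act_limit) are illustrative, not the exact variables in the Spinning Up source, and the sketch assumes a symmetric action range [-act_limit, act_limit].

```python
import torch

def smoothed_target_action(ac_targ, o2, act_limit, target_noise=0.2, noise_clip=0.5):
    """Sketch: a'(s') = clip(mu_targ(s') + clip(eps, -c, c), -act_limit, act_limit)."""
    a2 = ac_targ.pi(o2)                                   # target policy action mu_targ(s')
    eps = target_noise * torch.randn_like(a2)             # eps ~ N(0, sigma)
    eps = torch.clamp(eps, -noise_clip, noise_clip)       # clip noise to [-c, c]
    return torch.clamp(a2 + eps, -act_limit, act_limit)   # clip to the valid action range
```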
Target policy smoothing essentially serves as a regularizer for the algorithm. It addresses a particular failure mode that can happen in DDPG: if the Q-function approximator develops an incorrect sharp peak for some actions, the policy will quickly exploit that peak and then have brittle or incorrect behavior. This can be averted by smoothing out the Q-function over similar actions, which target policy smoothing is designed to do.
Next: clipped double-Q learning. Both Q-functions use a single target, calculated using whichever of the two Q-functions gives a smaller target value:

$$y(r, s', d) = r + \gamma (1 - d) \min_{i=1,2} Q_{\phi_{i, \text{targ}}}(s', a'(s'))$$
and then both are learned by regressing to this target:

$$L(\phi_1, {\mathcal D}) = \underset{(s,a,r,s',d) \sim {\mathcal D}}{{\mathrm E}}\left[ \left( Q_{\phi_1}(s,a) - y(r,s',d) \right)^2 \right]$$

$$L(\phi_2, {\mathcal D}) = \underset{(s,a,r,s',d) \sim {\mathcal D}}{{\mathrm E}}\left[ \left( Q_{\phi_2}(s,a) - y(r,s',d) \right)^2 \right]$$
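In code, the clipped double-Q target and the two regression losses might look like the following sketch. This is a rough illustration rather than the exact Spinning Up implementation: data is assumed to be a dict of replay-buffer tensors, and a2 is the smoothed target action from the sketch above.

```python
import torch

def clipped_double_q_loss(ac, ac_targ, data, a2, gamma=0.99):
    """Sketch: one shared target from the min of the two target Q-values,
    and both Q-functions regress to it with a mean-square Bellman error."""
    o, a, r, o2, d = data['obs'], data['act'], data['rew'], data['obs2'], data['done']

    with torch.no_grad():
        q_pi_targ = torch.min(ac_targ.q1(o2, a2), ac_targ.q2(o2, a2))
        backup = r + gamma * (1 - d) * q_pi_targ      # y(r, s', d)

    loss_q1 = ((ac.q1(o, a) - backup) ** 2).mean()
    loss_q2 = ((ac.q2(o, a) - backup) ** 2).mean()
    return loss_q1 + loss_q2
```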
Using the smaller Q-value for the target, and regressing towards that, helps fend off overestimation in the Q-function.
Lastly: the policy is learned just by maximizing $Q_{\phi_1}$:

$$\max_{\theta} \underset{s \sim {\mathcal D}}{{\mathrm E}}\left[ Q_{\phi_1}(s, \mu_{\theta}(s)) \right],$$
which is pretty much unchanged from DDPG. However, in TD3, the policy is updated less frequently than the Q-functions are. This helps damp the volatility that normally arises in DDPG because of how a policy update changes the target.
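A minimal sketch of this delayed update, assuming the loss pieces above and an actor-critic module ac with pi and q1 submodules (names are assumptions, not the exact Spinning Up variables):

```python
import torch

def delayed_policy_update(ac, ac_targ, pi_optimizer, obs, update_index,
                          policy_delay=2, polyak=0.995):
    """Sketch: update the policy and target networks only once per
    policy_delay Q-function updates."""
    if update_index % policy_delay != 0:
        return  # this step only updated the Q-functions

    # Policy loss: maximize Q_phi1(s, mu_theta(s)), i.e. minimize its negative.
    # (A full implementation would typically freeze the Q-network parameters
    # here so this backward pass does not leave gradients on them.)
    loss_pi = -ac.q1(obs, ac.pi(obs)).mean()
    pi_optimizer.zero_grad()
    loss_pi.backward()
    pi_optimizer.step()

    # Polyak-average the target networks toward the main networks.
    with torch.no_grad():
        for p, p_targ in zip(ac.parameters(), ac_targ.parameters()):
            p_targ.mul_(polyak)
            p_targ.add_((1 - polyak) * p)
```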
Exploration vs. Exploitation¶
TD3 trains a deterministic policy in an off-policy way. Because the policy is deterministic, if the agent were to explore on-policy, in the beginning it would probably not try a wide enough variety of actions to find useful learning signals. To make TD3 policies explore better, we add noise to their actions at training time, typically uncorrelated mean-zero Gaussian noise. To facilitate getting higher-quality training data, you may reduce the scale of the noise over the course of training. (We do not do this in our implementation, and keep noise scale fixed throughout.)
At test time, to see how well the policy exploits what it has learned, we do not add noise to the actions.
You Should Know
Our TD3 implementation uses a trick to improve exploration at the start of training. For a fixed number of steps at the beginning (set with the start_steps keyword argument), the agent takes actions which are sampled from a uniform random distribution over valid actions. After that, it returns to normal TD3 exploration.
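Putting both exploration pieces together, training-time action selection might look roughly like this sketch. The env, ac, and variable names are assumptions; ac.act follows the interface described in the documentation below, and the sketch assumes a symmetric Box action space.

```python
import numpy as np
import torch

def select_action(env, ac, obs, t, start_steps=10000, act_noise=0.1):
    """Sketch: uniform-random exploration for the first start_steps env steps,
    then the deterministic policy plus mean-zero Gaussian noise."""
    if t < start_steps:
        return env.action_space.sample()             # uniform over valid actions
    act_limit = env.action_space.high[0]
    a = ac.act(torch.as_tensor(obs, dtype=torch.float32))
    a += act_noise * np.random.randn(len(a))         # exploration noise (omit at test time)
    return np.clip(a, -act_limit, act_limit)
```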
Documentation¶
You Should Know
In what follows, we give documentation for the PyTorch and Tensorflow implementations of TD3 in Spinning Up. They have nearly identical function calls and docstrings, except for details relating to model construction. However, we include both full docstrings for completeness.
Documentation: PyTorch Version¶
spinup.td3_pytorch(env_fn, actor_critic=core.MLPActorCritic, ac_kwargs={}, seed=0, steps_per_epoch=4000, epochs=100, replay_size=1000000, gamma=0.99, polyak=0.995, pi_lr=0.001, q_lr=0.001, batch_size=100, start_steps=10000, update_after=1000, update_every=50, act_noise=0.1, target_noise=0.2, noise_clip=0.5, policy_delay=2, num_test_episodes=10, max_ep_len=1000, logger_kwargs={}, save_freq=1)

Twin Delayed Deep Deterministic Policy Gradient (TD3)
Parameters: - env_fn – A function which creates a copy of the environment. The environment must satisfy the OpenAI Gym API.
- actor_critic – The constructor method for a PyTorch Module with an act method, a pi module, a q1 module, and a q2 module. The act method and pi module should accept batches of observations as inputs, and q1 and q2 should accept a batch of observations and a batch of actions as inputs. When called, these should return:

  Call | Output Shape | Description
  ---|---|---
  act | (batch, act_dim) | Numpy array of actions for each observation.
  pi | (batch, act_dim) | Tensor containing actions from policy given observations.
  q1 | (batch,) | Tensor containing one current estimate of Q* for the provided observations and actions. (Critical: make sure to flatten this!)
  q2 | (batch,) | Tensor containing the other current estimate of Q* for the provided observations and actions. (Critical: make sure to flatten this!)

- ac_kwargs (dict) – Any kwargs appropriate for the ActorCritic object you provided to TD3.
- seed (int) – Seed for random number generators.
- steps_per_epoch (int) – Number of steps of interaction (state-action pairs) for the agent and the environment in each epoch.
- epochs (int) – Number of epochs to run and train agent.
- replay_size (int) – Maximum length of replay buffer.
- gamma (float) – Discount factor. (Always between 0 and 1.)
- polyak (float) – Interpolation factor in polyak averaging for target networks. Target networks are updated towards main networks according to:

  $$\theta_{\text{targ}} \leftarrow \rho \theta_{\text{targ}} + (1 - \rho) \theta,$$

  where $\rho$ is polyak. (Always between 0 and 1, usually close to 1.)
- pi_lr (float) – Learning rate for policy.
- q_lr (float) – Learning rate for Q-networks.
- batch_size (int) – Minibatch size for SGD.
- start_steps (int) – Number of steps for uniform-random action selection, before running real policy. Helps exploration.
- update_after (int) – Number of env interactions to collect before starting to do gradient descent updates. Ensures replay buffer is full enough for useful updates.
- update_every (int) – Number of env interactions that should elapse between gradient descent updates. Note: Regardless of how long you wait between updates, the ratio of env steps to gradient steps is locked to 1.
- act_noise (float) – Stddev for Gaussian exploration noise added to policy at training time. (At test time, no noise is added.)
- target_noise (float) – Stddev for smoothing noise added to target policy.
- noise_clip (float) – Limit for absolute value of target policy smoothing noise.
- policy_delay (int) – Policy will only be updated once every policy_delay times for each update of the Q-networks.
- num_test_episodes (int) – Number of episodes to test the deterministic policy at the end of each epoch.
- max_ep_len (int) – Maximum length of trajectory / episode / rollout.
- logger_kwargs (dict) – Keyword args for EpochLogger.
- save_freq (int) – How often (in terms of gap between epochs) to save the current policy and value function.
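For reference, a typical call might look like the sketch below. The environment name, network sizes, and output paths are placeholders rather than prescriptions, and the example assumes Spinning Up is installed so that td3_pytorch is importable from the top-level spinup package.

```python
import gym
import torch
from spinup import td3_pytorch

# Placeholder environment and experiment settings; substitute your own.
env_fn = lambda: gym.make('Pendulum-v0')   # any continuous-action Gym env

ac_kwargs = dict(hidden_sizes=[256, 256], activation=torch.nn.ReLU)
logger_kwargs = dict(output_dir='path/to/output_dir', exp_name='td3_test')

td3_pytorch(env_fn=env_fn,
            ac_kwargs=ac_kwargs,
            steps_per_epoch=4000,
            epochs=100,
            logger_kwargs=logger_kwargs)
```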
Saved Model Contents: PyTorch Version¶
The PyTorch saved model can be loaded with ac = torch.load('path/to/model.pt'), yielding an actor-critic object (ac) that has the properties described in the docstring for td3_pytorch.
You can get actions from this model with
actions = ac.act(torch.as_tensor(obs, dtype=torch.float32))
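For example, a minimal rollout loop with the loaded model might look like this sketch (the environment name and model path are placeholders):

```python
import gym
import torch

env = gym.make('Pendulum-v0')              # placeholder continuous-action env
ac = torch.load('path/to/model.pt')        # placeholder path to the saved model

obs = env.reset()
for _ in range(1000):
    action = ac.act(torch.as_tensor(obs, dtype=torch.float32))
    obs, reward, done, _ = env.step(action)
    if done:
        obs = env.reset()
```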
Documentation: Tensorflow Version¶
spinup.td3_tf1(env_fn, actor_critic=<function mlp_actor_critic>, ac_kwargs={}, seed=0, steps_per_epoch=4000, epochs=100, replay_size=1000000, gamma=0.99, polyak=0.995, pi_lr=0.001, q_lr=0.001, batch_size=100, start_steps=10000, update_after=1000, update_every=50, act_noise=0.1, target_noise=0.2, noise_clip=0.5, policy_delay=2, num_test_episodes=10, max_ep_len=1000, logger_kwargs={}, save_freq=1)

Twin Delayed Deep Deterministic Policy Gradient (TD3)
Parameters: - env_fn – A function which creates a copy of the environment. The environment must satisfy the OpenAI Gym API.
- actor_critic – A function which takes in placeholder symbols for state, x_ph, and action, a_ph, and returns the main outputs from the agent's Tensorflow computation graph:

  Symbol | Shape | Description
  ---|---|---
  pi | (batch, act_dim) | Deterministically computes actions from policy given states.
  q1 | (batch,) | Gives one estimate of Q* for states in x_ph and actions in a_ph.
  q2 | (batch,) | Gives another estimate of Q* for states in x_ph and actions in a_ph.
  q1_pi | (batch,) | Gives the composition of q1 and pi for states in x_ph: q1(x, pi(x)).

- ac_kwargs (dict) – Any kwargs appropriate for the actor_critic function you provided to TD3.
- seed (int) – Seed for random number generators.
- steps_per_epoch (int) – Number of steps of interaction (state-action pairs) for the agent and the environment in each epoch.
- epochs (int) – Number of epochs to run and train agent.
- replay_size (int) – Maximum length of replay buffer.
- gamma (float) – Discount factor. (Always between 0 and 1.)
- polyak (float) – Interpolation factor in polyak averaging for target networks. Target networks are updated towards main networks according to:

  $$\theta_{\text{targ}} \leftarrow \rho \theta_{\text{targ}} + (1 - \rho) \theta,$$

  where $\rho$ is polyak. (Always between 0 and 1, usually close to 1.)
- pi_lr (float) – Learning rate for policy.
- q_lr (float) – Learning rate for Q-networks.
- batch_size (int) – Minibatch size for SGD.
- start_steps (int) – Number of steps for uniform-random action selection, before running real policy. Helps exploration.
- update_after (int) – Number of env interactions to collect before starting to do gradient descent updates. Ensures replay buffer is full enough for useful updates.
- update_every (int) – Number of env interactions that should elapse between gradient descent updates. Note: Regardless of how long you wait between updates, the ratio of env steps to gradient steps is locked to 1.
- act_noise (float) – Stddev for Gaussian exploration noise added to policy at training time. (At test time, no noise is added.)
- target_noise (float) – Stddev for smoothing noise added to target policy.
- noise_clip (float) – Limit for absolute value of target policy smoothing noise.
- policy_delay (int) – Policy will only be updated once every policy_delay times for each update of the Q-networks.
- num_test_episodes (int) – Number of episodes to test the deterministic policy at the end of each epoch.
- max_ep_len (int) – Maximum length of trajectory / episode / rollout.
- logger_kwargs (dict) – Keyword args for EpochLogger.
- save_freq (int) – How often (in terms of gap between epochs) to save the current policy and value function.
Saved Model Contents: Tensorflow Version¶
The computation graph saved by the logger includes:
Key | Value
---|---
x | Tensorflow placeholder for state input.
a | Tensorflow placeholder for action input.
pi | Deterministically computes an action from the agent, conditioned on states in x.
q1 | Gives one action-value estimate for states in x and actions in a.
q2 | Gives the other action-value estimate for states in x and actions in a.
This saved model can be accessed either by
- running the trained policy with the test_policy.py tool,
- or loading the whole saved graph into a program with restore_tf_graph.
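As a rough sketch of the second option, the saved graph can be restored and queried for actions along these lines (the save path is a placeholder for the simple_save directory written by the logger):

```python
import tensorflow as tf
from spinup.utils.logx import restore_tf_graph

sess = tf.Session()
model = restore_tf_graph(sess, 'path/to/output_dir/simple_save')  # placeholder path

x, pi = model['x'], model['pi']

def get_action(obs):
    # Feed a single (batched) observation and return the deterministic action.
    return sess.run(pi, feed_dict={x: obs.reshape(1, -1)})[0]
```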