MPI Tools¶
Core MPI Utilities¶
spinup.utils.mpi_tools.mpi_fork(n, bind_to_core=False)[source]¶

Re-launches the current script with workers linked by MPI. Also, terminates the original process that launched it.

Taken almost without modification from the Baselines function of the same name.

Parameters:
- n (int) – Number of processes to split into.
- bind_to_core (bool) – Bind each MPI process to a core.
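For instance, a training script might fork itself at the top, before doing any other work. A minimal sketch (the worker count is illustrative; proc_id is another helper from the same module):

from spinup.utils.mpi_tools import mpi_fork, proc_id

# Relaunch this script under mpirun with 4 workers; the process that
# called mpi_fork exits, and each worker continues from this point.
mpi_fork(4)

print('Hello from process %d' % proc_id())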
spinup.utils.mpi_tools.mpi_statistics_scalar(x, with_min_and_max=False)[source]¶

Get mean/std and optional min/max of scalar x across MPI processes.

Parameters:
- x – An array containing samples of the scalar to produce statistics for.
- with_min_and_max (bool) – If true, return min and max of x in addition to mean and std.
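A usage sketch, assuming each process holds its own NumPy array of samples and the script was already forked with mpi_fork; the statistics are computed over the pooled samples from all processes:

import numpy as np
from spinup.utils.mpi_tools import mpi_fork, mpi_statistics_scalar

mpi_fork(2)

# Each process contributes its own batch of samples.
x = np.random.randn(100)
mean, std = mpi_statistics_scalar(x)
mean, std, x_min, x_max = mpi_statistics_scalar(x, with_min_and_max=True)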
MPI + PyTorch Utilities¶
spinup.utils.mpi_pytorch contains a few tools to make it easy to do data-parallel PyTorch optimization across MPI processes. The two main ingredients are syncing parameters and averaging gradients before they are used by the adaptive optimizer. There is also a hacky fix for a problem where the PyTorch instance in each separate process tries to get too many threads, and they start to clobber each other.

The pattern for using these tools looks something like this:

- At the beginning of the training script, call setup_pytorch_for_mpi(). (Avoids the clobbering problem.)
- After you've constructed a PyTorch module, call sync_params(module).
- Then, during gradient descent, call mpi_avg_grads after the backward pass, like so:
optimizer.zero_grad()
loss = compute_loss(module)
loss.backward()
mpi_avg_grads(module) # averages gradient buffers across MPI processes!
optimizer.step()
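Putting those three steps together, a self-contained sketch of a data-parallel training loop might look like this (the module, data, and hyperparameters are placeholders, not part of the library):

import torch
from spinup.utils.mpi_pytorch import setup_pytorch_for_mpi, sync_params, mpi_avg_grads
from spinup.utils.mpi_tools import mpi_fork

mpi_fork(4)                  # relaunch with 4 MPI processes
setup_pytorch_for_mpi()      # avoid the thread-clobbering problem

module = torch.nn.Linear(10, 1)
sync_params(module)          # every process starts from identical weights

optimizer = torch.optim.Adam(module.parameters(), lr=1e-3)
for _ in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)   # placeholder data
    optimizer.zero_grad()
    loss = ((module(x) - y) ** 2).mean()
    loss.backward()
    mpi_avg_grads(module)    # average gradient buffers across processes
    optimizer.step()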
spinup.utils.mpi_pytorch.mpi_avg_grads(module)[source]¶

Average contents of gradient buffers across MPI processes.
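Conceptually, this is an MPI allreduce over each parameter's gradient buffer, writing the average back in place. A rough sketch of the idea (not the library's exact implementation), using mpi4py directly:

from mpi4py import MPI
import numpy as np

def average_grads(module, comm=MPI.COMM_WORLD):
    """Average each parameter's .grad in place across MPI processes."""
    n = comm.Get_size()
    if n == 1:
        return
    for p in module.parameters():
        if p.grad is None:
            continue
        g = p.grad.numpy()        # NumPy view sharing memory with the tensor
        buf = np.zeros_like(g)
        comm.Allreduce(g, buf, op=MPI.SUM)
        g[:] = buf / n            # write the average back into the buffer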
MPI + Tensorflow Utilities¶
spinup.utils.mpi_tf contains a few tools to make it easy to use the AdamOptimizer across many MPI processes. This is a bit hacky; if you're looking for something more sophisticated and general-purpose, consider horovod.
class spinup.utils.mpi_tf.MpiAdamOptimizer(**kwargs)[source]¶

Adam optimizer that averages gradients across MPI processes.

The compute_gradients method is taken from Baselines MpiAdamOptimizer. For documentation on method arguments, see the Tensorflow docs page for the base AdamOptimizer.
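Usage mirrors the stock TF1 AdamOptimizer. A hedged graph-mode sketch, where the network and loss are placeholders and sync_all_params is the companion helper in spinup.utils.mpi_tf for synchronizing initial variables:

import tensorflow as tf
from spinup.utils.mpi_tf import MpiAdamOptimizer, sync_all_params
from spinup.utils.mpi_tools import mpi_fork

mpi_fork(4)

x = tf.placeholder(tf.float32, shape=(None, 10))
y = tf.placeholder(tf.float32, shape=(None, 1))
loss = tf.reduce_mean((tf.layers.dense(x, 1) - y) ** 2)

# Drop-in replacement for tf.train.AdamOptimizer; gradients are
# averaged across MPI processes before the update is applied.
train_op = MpiAdamOptimizer(learning_rate=1e-3).minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(sync_all_params())   # sync initial parameters across processes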