Average a scalar or vector over MPI processes.
Re-launches the current script with workers linked by MPI.
Also, terminates the original process that launched it.
Taken almost without modification from the Baselines function of the same name.
- n (int) – Number of processes to split into.
- bind_to_core (bool) – Bind each MPI process to a core.
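The relaunch-and-terminate behavior described above can be sketched as follows. This is a simplified, hypothetical reconstruction of the Baselines-derived helper, not its exact code; the `dry_run` flag is added here purely so the command can be inspected without executing `mpirun`.

```python
import os
import subprocess
import sys

def mpi_fork_sketch(n, bind_to_core=False, dry_run=False):
    """Sketch of mpi_fork: relaunch the current script under mpirun.

    Hypothetical simplification; the real helper may differ in details.
    """
    if n <= 1:
        return None  # single process: nothing to relaunch
    if os.getenv("IN_MPI") is None:
        env = os.environ.copy()
        # limit thread pools in the workers and mark them as MPI children
        env.update(MKL_NUM_THREADS="1", OMP_NUM_THREADS="1", IN_MPI="1")
        args = ["mpirun", "-np", str(n)]
        if bind_to_core:
            args += ["-bind-to", "core"]
        args += [sys.executable] + sys.argv
        if dry_run:
            return args  # inspect the command instead of executing it
        subprocess.check_call(args, env=env)
        sys.exit()  # the original launcher process stops here
```

Only the MPI workers survive past this call, which is why it belongs at the very top of a training script.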
Get mean/std and optional min/max of scalar x across MPI processes.
- x – An array containing samples of the scalar to produce statistics for.
- with_min_and_max (bool) – If true, return min and max of x in addition to mean and std.
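The statistics above are computed with MPI reductions rather than by gathering all samples in one place. A non-MPI sketch of the arithmetic, where each inner list stands in for one process's local samples (the real function sees only its own chunk and uses sum/min/max reductions):

```python
import numpy as np

def combined_statistics(chunks, with_min_and_max=False):
    """Sketch of the global arithmetic behind mpi_statistics_scalar.

    `chunks` is a stand-in for the per-process sample arrays; in the real
    function each term below is an MPI reduction across processes.
    """
    global_sum = sum(np.sum(c) for c in chunks)   # MPI sum reduction
    global_n = sum(len(c) for c in chunks)        # MPI sum reduction
    mean = global_sum / global_n
    # second pass: globally summed squared deviations from the global mean
    global_sq = sum(np.sum((np.asarray(c) - mean) ** 2) for c in chunks)
    std = np.sqrt(global_sq / global_n)
    if with_min_and_max:
        return (mean, std,
                min(np.min(c) for c in chunks),   # MPI min reduction
                max(np.max(c) for c in chunks))   # MPI max reduction
    return mean, std
```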
Count active MPI processes.
Get rank of calling process.
spinup.utils.mpi_pytorch contains a few tools to make it easy to do data-parallel PyTorch optimization across MPI processes. The two main ingredients are syncing parameters and averaging gradients before they are used by the adaptive optimizer. Also there’s a hacky fix for a problem where the PyTorch instance in each separate process tries to get too many threads, and they start to clobber each other.
The pattern for using these tools looks something like this:
- At the beginning of the training script, call
setup_pytorch_for_mpi(). (Avoids clobbering problem.)
- After you’ve constructed a PyTorch module, call
sync_params(module). (Syncs parameters across processes.)
- Then, during gradient descent, call
mpi_avg_grads after the backward pass, like so:
    optimizer.zero_grad()
    loss = compute_loss(module)
    loss.backward()
    mpi_avg_grads(module)    # averages gradient buffers across MPI processes!
    optimizer.step()
Average contents of gradient buffers across MPI processes.
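The effect of this averaging can be sketched without MPI by treating each inner list as one process's gradient buffers. In the real helper every process ends up with the same averaged buffers via an allreduce, written in place before optimizer.step():

```python
import numpy as np

def average_grad_buffers(grad_buffers_per_process):
    """Non-MPI sketch of what mpi_avg_grads does to each parameter's .grad.

    Each inner list stands for one process's gradient buffers; the real
    helper uses an MPI allreduce instead of seeing all copies at once.
    """
    n = len(grad_buffers_per_process)
    # elementwise mean of corresponding buffers across "processes"
    averaged = [sum(bufs) / n for bufs in zip(*grad_buffers_per_process)]
    # every process overwrites its local buffers with the average, in place
    for bufs in grad_buffers_per_process:
        for local, avg in zip(bufs, averaged):
            local[...] = avg
    return averaged
```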
Avoid slowdowns caused by each separate process’s PyTorch using more than its fair share of CPU resources.
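The "fair share" is simple arithmetic: split the machine's CPUs evenly across processes. A sketch, with the function name hypothetical; the real helper passes a value like this to torch.set_num_threads() so the workers stop competing for threads:

```python
import os

def fair_thread_count(num_procs, total_cpus=None):
    """Sketch of the thread cap behind setup_pytorch_for_mpi (name hypothetical)."""
    total_cpus = os.cpu_count() if total_cpus is None else total_cpus
    # each MPI process gets an even slice of the CPUs, but at least one thread
    return max(int(total_cpus / num_procs), 1)
```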
Sync all parameters of module across all MPI processes.
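Syncing amounts to broadcasting rank 0's parameters and overwriting every other process's copies in place. A non-MPI sketch, where each inner list stands for one process's parameter arrays (the real helper uses an MPI broadcast and sees only its local copy):

```python
import numpy as np

def sync_params_sketch(params_per_process):
    """Non-MPI sketch of sync_params: broadcast rank 0's parameters.

    The first inner list plays the role of the root process; the rest
    overwrite their arrays in place with the root's values.
    """
    root = params_per_process[0]
    for proc_params in params_per_process[1:]:
        for local, src in zip(proc_params, root):
            local[...] = src  # in the real helper: MPI broadcast from root
```

Calling this once after module construction guarantees all workers start gradient descent from identical weights.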
spinup.utils.mpi_tf contains a few tools to make it easy to use the AdamOptimizer across many MPI processes. This is a bit hacky—if you’re looking for something more sophisticated and general-purpose, consider horovod.
Adam optimizer that averages gradients across MPI processes.
apply_gradients(grads_and_vars, global_step=None, name=None)
Same as normal apply_gradients, except sync params after update.
compute_gradients(loss, var_list, **kwargs)
Same as normal compute_gradients, except average grads over processes.
Sync all tf variables across MPI processes.