API reference for cpprb
cpprb Package
cpprb: Fast Flexible Replay Buffer Library
cpprb provides replay buffer classes for reinforcement learning. Details are described on the project home page.
Examples
Replay Buffer classes can be imported from cpprb package.
>>> from cpprb import ReplayBuffer
These buffer classes are created by specifying buffer_size and env_dict, which describes the values to be stored.
>>> buffer_size = int(1e6)
>>> env_dict = {"obs": {}, "act": {}, "rew": {}, "next_obs": {}, "done": {}}
>>> rb = ReplayBuffer(buffer_size, env_dict)
When adding transitions, all values must be passed as keyword arguments.
>>> rb.add(obs=1, act=1, rew=0.5, next_obs=2, done=0)
You can also add multiple transitions simultaneously.
>>> rb.add(obs=[1, 2], act=[1, 2], rew=[0.5, 0.3], next_obs=[2, 3], done=[0, 1])
At the end of each episode, users must call the on_episode_end() method.
>>> rb.on_episode_end()
Transitions can be sampled according to the buffer's sampling algorithm (e.g. uniformly at random).
>>> sample = rb.sample(32)
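The add/sample workflow above can be sketched with a minimal pure-Python ring buffer. This is an illustration of the semantics only, not cpprb's implementation (cpprb stores transitions in pre-allocated C++ arrays for speed); the class and attribute names here are made up for the sketch.

```python
import random

class SimpleRingBuffer:
    """Illustrative fixed-size replay buffer (hypothetical, not cpprb's)."""

    def __init__(self, size):
        self.size = int(size)
        self.storage = []   # transitions stored as dicts
        self.next_idx = 0   # write position; wraps around when full

    def add(self, **transition):
        # Keyword arguments mirror rb.add(obs=..., act=..., ...)
        if len(self.storage) < self.size:
            self.storage.append(transition)
        else:
            # Buffer full: overwrite the oldest transition
            self.storage[self.next_idx] = transition
        self.next_idx = (self.next_idx + 1) % self.size

    def sample(self, batch_size):
        # Uniform random sampling with replacement
        return [random.choice(self.storage) for _ in range(batch_size)]

rb = SimpleRingBuffer(1000)
rb.add(obs=1, act=1, rew=0.5, next_obs=2, done=0)
rb.add(obs=2, act=0, rew=0.3, next_obs=3, done=1)
batch = rb.sample(4)
```

Unlike this sketch, cpprb returns the sampled batch as a dict of NumPy arrays keyed by the env_dict names, which can be fed directly to a training step.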
Functions
create_buffer: Create specified version of replay buffer.
train: Train RL policy (model).
Classes
ReplayBuffer: Replay Buffer class to store transitions and to sample them randomly.
PrioritizedReplayBuffer: Prioritized Replay Buffer class to store transitions with priorities.
MPReplayBuffer: Multi-process support Replay Buffer class to store transitions and to sample them randomly.
MPPrioritizedReplayBuffer: Multi-process support Prioritized Replay Buffer class to store transitions with priorities.
ReverseReplayBuffer: Replay Buffer class for Reverse Experience Replay (RER).
LaBERmean, LaBERlazy, LaBERmax: Helper classes for Large Batch Experience Replay (LaBER).
HindsightReplayBuffer: Replay Buffer class for Hindsight Experience Replay (HER).
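A prioritized buffer samples transitions with probability proportional to their priorities rather than uniformly. cpprb implements this efficiently with segment trees, but the proportional sampling rule itself can be sketched in plain Python; the function below is a hypothetical illustration, not part of cpprb's API.

```python
import random

def prioritized_sample(priorities, batch_size, alpha=0.6):
    """Sample indices with probability proportional to priority ** alpha
    (the proportional variant of prioritized experience replay)."""
    weights = [p ** alpha for p in priorities]
    # random.choices performs weighted sampling with replacement
    return random.choices(range(len(priorities)), weights=weights, k=batch_size)

# Transitions with larger priority (e.g. larger TD error) are drawn more often
idx = prioritized_sample([1.0, 0.1, 0.1, 2.0], batch_size=32)
```

In cpprb, priorities are typically updated after each training step from the newly computed TD errors via the buffer's priority-update method.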
Class Inheritance Diagram
