Conservative Q-Learning (CQL)

CQL is an extension of Q-learning that addresses the typical overestimation of values induced by the distributional shift between the dataset and the learned policy in offline RL algorithms. A conservative Q-function is learned, such that the expected value of a policy under this Q-function lower-bounds its true value.
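
Concretely, for discrete actions CQL augments the standard TD loss with a penalty that pushes all Q-values down (via a logsumexp over actions) while pushing the Q-values of dataset actions back up. Below is a minimal illustrative sketch of this regularizer in PyTorch; it is not AgileRL's internal implementation, and names such as cql_loss_sketch and the alpha weighting coefficient are hypothetical:

import torch
import torch.nn.functional as F

def cql_loss_sketch(q_net, target_net, obs, actions, rewards, next_obs, dones,
                    gamma=0.99, alpha=1.0):
    # actions: LongTensor [batch, 1]; rewards, dones: FloatTensor [batch]
    q_values = q_net(obs)                                 # [batch, n_actions]
    q_taken = q_values.gather(1, actions.long()).squeeze(1)  # [batch]

    # Standard TD target from the target network
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
        td_target = rewards + gamma * next_q * (1.0 - dones)

    td_loss = F.mse_loss(q_taken, td_target)

    # Conservative penalty: logsumexp over all actions minus the Q-value
    # of the action actually observed in the dataset
    cql_penalty = (torch.logsumexp(q_values, dim=1) - q_taken).mean()

    return td_loss + alpha * cql_penalty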

Compatible Action Spaces

| Discrete | Continuous (Box) | MultiDiscrete | MultiBinary |
|----------|------------------|---------------|-------------|
| ✔️       | ❌               | ❌            | ❌          |

So far, we have implemented CQN (CQL applied to DQN), which cannot be used on continuous action spaces. We will soon be adding CQL extensions of other algorithms for offline RL. The action-space requirement can be checked up front, as shown below.
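
For instance, you can verify compatibility before creating an agent (a minimal check, not part of the AgileRL API):

import gymnasium as gym

env = gym.make("CartPole-v1")
# CQN requires a discrete action space
assert isinstance(env.action_space, gym.spaces.Discrete)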

Example

import gymnasium as gym
import h5py

from agilerl.components.replay_buffer import ReplayBuffer
from agilerl.components.data import Transition
from agilerl.algorithms.cqn import CQN

# Create environment and Experience Replay Buffer, and load dataset
env = gym.make('CartPole-v1')
observation_space = env.observation_space
action_space = env.action_space

memory = ReplayBuffer(max_size=10000)
dataset = h5py.File('data/cartpole/cartpole_random_v1.1.0.h5', 'r')  # Load dataset

# Save transitions to replay buffer
dataset_length = dataset['rewards'].shape[0]
for i in range(dataset_length-1):
    obs = dataset['observations'][i]
    next_obs = dataset['observations'][i+1]

    action = dataset['actions'][i]
    reward = dataset['rewards'][i]
    done = bool(dataset['terminals'][i])
    transition = Transition(
        obs=obs,
        action=action,
        reward=reward,
        next_obs=next_obs,
        done=done,
    )
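    # Add a batch dimension and convert to a TensorDict before storing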
    transition = transition.unsqueeze(0)
    transition.batch_size = [1]
    transition = transition.to_tensordict()
    memory.add(transition)

# Create CQN agent
agent = CQN(observation_space=observation_space, action_space=action_space)
# Train the offline agent for a fixed number of learning steps
for _ in range(10000):
    experiences = memory.sample(agent.batch_size)   # Sample replay buffer
    agent.learn(experiences)    # Learn according to agent's RL algorithm
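
After training, the agent can be evaluated online with the test() method documented below; for example:

# Evaluate the trained agent over 3 episodes and report the mean score
mean_score = agent.test(env, max_steps=500, loop=3)
print(f"Mean test score: {mean_score}")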

Neural Network Configuration

To configure the architecture of the network’s encoder / head, pass a kwargs dict to the CQN net_config field. Full arguments can be found in the documentation of EvolvableMLP, EvolvableCNN, and EvolvableMultiInput.

For discrete / vector observations:

NET_CONFIG = {
    "encoder_config": {'hidden_size': [32, 32]},  # Encoder hidden size
    "head_config": {'hidden_size': [32]}          # Network head hidden size
}

For image observations:

NET_CONFIG = {
    "encoder_config": {
        'channel_size': [32, 32],  # CNN channel size
        'kernel_size': [8, 4],     # CNN kernel size
        'stride_size': [4, 2],     # CNN stride size
    },
    "head_config": {'hidden_size': [32]}  # Network head hidden size
}

For dictionary / tuple observations containing any combination of image, discrete, and vector observations:

CNN_CONFIG = {
    "channel_size": [32, 32],  # CNN channel size
    "kernel_size": [8, 4],     # CNN kernel size
    "stride_size": [4, 2],     # CNN stride size
}

NET_CONFIG = {
    "encoder_config": {
        "latent_dim": 32,
        # Config for nested EvolvableCNN objects
        "cnn_config": CNN_CONFIG,
        # Config for nested EvolvableMLP objects
        "mlp_config": {
            "hidden_size": [32, 32]
        },
        "vector_space_mlp": True  # Process vector observations with an MLP
    },
    "head_config": {'hidden_size': [32]}  # Network head hidden size
}

# Create CQN agent
agent = CQN(
    observation_space=observation_space,
    action_space=action_space,
    net_config=NET_CONFIG
)

Evolutionary Hyperparameter Optimization

AgileRL allows for efficient hyperparameter optimization during training to provide state-of-the-art results in a fraction of the time. For more information on how this is done, please refer to the Evolutionary Hyperparameter Optimization documentation.
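
For illustration, a population of CQN agents for evolutionary HPO can be created with the population() classmethod documented below; a minimal sketch, reusing the observation_space and action_space from the example above:

# Create a population of 4 CQN agents; each carries its own hyperparameters,
# which can be mutated and tournament-selected during training
pop = CQN.population(
    size=4,
    observation_space=observation_space,
    action_space=action_space,
)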

Saving and Loading Agents

To save an agent, use the save_checkpoint method:

from agilerl.algorithms.cqn import CQN

# Create CQN agent
agent = CQN(observation_space, action_space)

checkpoint_path = "path/to/checkpoint"
agent.save_checkpoint(checkpoint_path)

To load a saved agent, use the load method:

from agilerl.algorithms.cqn import CQN

checkpoint_path = "path/to/checkpoint"
agent = CQN.load(checkpoint_path)

Parameters

class agilerl.algorithms.cqn.CQN(*args: Any, **kwargs: Any)

The CQN algorithm class. CQL paper: https://arxiv.org/abs/2006.04779.

Parameters:
  • observation_space (spaces.Space) – The observation space of the environment.

  • action_space (spaces.Space) – The action space of the environment.

  • index (int, optional) – Index to keep track of object instance during tournament selection and mutation, defaults to 0

  • hp_config (HyperparameterConfig, optional) – RL hyperparameter mutation configuration, defaults to None, whereby algorithm mutations are disabled.

  • net_config (dict, optional) – Network configuration, defaults to None

  • batch_size (int, optional) – Size of batched sample from replay buffer for learning, defaults to 64

  • lr (float, optional) – Learning rate for optimizer, defaults to 1e-4

  • learn_step (int, optional) – Learning frequency, defaults to 5

  • gamma (float, optional) – Discount factor, defaults to 0.99

  • tau (float, optional) – For soft update of target network parameters, defaults to 1e-3

  • double (bool, optional) – Use double Q-learning, defaults to False

  • normalize_images (bool, optional) – Normalize image observations, defaults to True

  • mut (str, optional) – Most recent mutation to agent, defaults to None

  • actor_network (nn.Module, optional) – Custom actor network, defaults to None

  • device (str, optional) – Device for accelerated computing, ‘cpu’ or ‘cuda’, defaults to ‘cpu’

  • accelerator (accelerate.Accelerator(), optional) – Accelerator for distributed computing, defaults to None

  • wrap (bool, optional) – Wrap models for distributed training upon creation, defaults to True

clean_up() None

Clean up the algorithm by deleting the networks and optimizers.

Returns:

None

Return type:

None

clone(index: int | None = None, wrap: bool = True) Self

Create a clone of the algorithm.

Parameters:
  • index (int | None, optional) – The index of the clone, defaults to None

  • wrap (bool, optional) – If True, wrap the models in the clone with the accelerator, defaults to True

Returns:

A clone of the algorithm

Return type:

EvolvableAlgorithm

static copy_attributes(agent: SelfEvolvableAlgorithm, clone: SelfEvolvableAlgorithm) SelfEvolvableAlgorithm

Copy the non-evolvable attributes of the algorithm to a clone.

Parameters:
  • agent (SelfEvolvableAlgorithm) – The agent to copy attributes from.

  • clone (SelfEvolvableAlgorithm) – The clone of the algorithm.

Returns:

The clone of the algorithm.

Return type:

SelfEvolvableAlgorithm

evolvable_attributes(networks_only: bool = False) dict[str, EvolvableModuleProtocol | ModuleDictProtocol | Optimizer | dict[str, Optimizer] | OptimizerWrapperProtocol]

Return the attributes related to the evolvable networks in the algorithm. Includes attributes that are either EvolvableModule or ModuleDict objects, as well as the optimizers associated with the networks.

Parameters:

networks_only (bool, optional) – If True, only include evolvable networks, defaults to False

Returns:

A dictionary of network attributes.

Return type:

dict[str, Any]

get_action(obs: ndarray | dict[str, ndarray] | tuple[ndarray, ...] | Tensor | TensorDict | tuple[Tensor, ...] | dict[str, Tensor] | Number | list[ReasoningPrompts], epsilon: float = 0, action_mask: ndarray | None = None, *args: Any, **kwargs: Any) ndarray

Return the next action to take in the environment. Epsilon is the probability of taking a random action, used for exploration. For greedy behaviour, set epsilon to 0.

Parameters:
  • obs (numpy.ndarray[float]) – State observation, or multiple observations in a batch

  • epsilon (float, optional) – Probability of taking a random action for exploration, defaults to 0

  • action_mask (numpy.ndarray, optional) – Mask of legal actions (1=legal, 0=illegal), defaults to None

Returns:

Action to take in the environment

Return type:

numpy.ndarray[int]
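
For illustration, a minimal usage sketch, assuming the env and agent from the example above:

# Greedy action selection: with epsilon=0 the agent always takes the
# highest-valued action; epsilon > 0 mixes in random exploration
obs, info = env.reset()
action = agent.get_action(obs, epsilon=0)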

static get_action_dim(action_space: Box | Discrete | MultiDiscrete | Dict | Tuple | MultiBinary | list[Box | Discrete | MultiDiscrete | Dict | Tuple | MultiBinary]) tuple[int, ...]

Return the dimension of the action space as it pertains to the underlying networks (i.e. the output size of the networks).

Parameters:

action_space (spaces.Space or list[spaces.Space]) – The action space of the environment.

Returns:

The dimension of the action space.

Return type:

tuple[int, ...]

get_lr_names() list[str]

Return the learning rates of the algorithm.

get_policy() EvolvableModuleProtocol

Return the policy network of the algorithm.

static get_state_dim(observation_space: Box | Discrete | MultiDiscrete | Dict | Tuple | MultiBinary | list[Box | Discrete | MultiDiscrete | Dict | Tuple | MultiBinary]) tuple[int, ...]

Return the dimension of the state space as it pertains to the underlying networks (i.e. the input size of the networks).

Parameters:

observation_space (spaces.Space or list[spaces.Space]) – The observation space of the environment.

Returns:

The dimension of the state space.

Return type:

tuple[int, …].

property index: int

Return the index of the algorithm.

static inspect_attributes(agent: SelfEvolvableAlgorithm, input_args_only: bool = False) dict[str, Any]

Inspect and retrieve the attributes of the current object, excluding attributes related to the underlying evolvable networks (i.e. EvolvableModule, torch.optim.Optimizer) and with an option to include only the attributes that are input arguments to the constructor.

Parameters:

input_args_only (bool) – If True, only include attributes that are input arguments to the constructor. Defaults to False.

Returns:

A dictionary of attribute names and their values.

Return type:

dict[str, Any]

learn(experiences: tuple[Tensor, ...]) float

Update agent network parameters to learn from experiences.

Parameters:

experiences (tuple[Tensor, ...]) – Tuple of batched states, actions, rewards, next_states, and dones, in that order.

Returns:

Loss from learning

Return type:

float

classmethod load(path: str, device: str | device = 'cpu', accelerator: Accelerator | None = None) Self

Load an algorithm from a checkpoint.

Parameters:
  • path (string) – Location to load checkpoint from.

  • device (str, optional) – Device to load the algorithm on, defaults to ‘cpu’

  • accelerator (Accelerator | None, optional) – Accelerator object for distributed computing, defaults to None

Returns:

An instance of the algorithm

Return type:

RLAlgorithm

load_checkpoint(path: str) None

Load saved agent properties and network weights from checkpoint.

Parameters:

path (string) – Location to load checkpoint from

property mut: Any

Return the mutation object of the algorithm.

mutation_hook() None

Execute the hooks registered with the algorithm.

classmethod population(size: int, observation_space: Box | Discrete | MultiDiscrete | Dict | Tuple | MultiBinary | list[Box | Discrete | MultiDiscrete | Dict | Tuple | MultiBinary], action_space: Box | Discrete | MultiDiscrete | Dict | Tuple | MultiBinary | list[Box | Discrete | MultiDiscrete | Dict | Tuple | MultiBinary], wrapper_cls: type[SelfAgentWrapper] | None = None, wrapper_kwargs: dict[str, Any] | None = None, **kwargs) list[Self | SelfAgentWrapper]

Create a population of algorithms.

Parameters:

size (int) – The size of the population.

Returns:

A list of algorithms.

Return type:

list[Self | SelfAgentWrapper]

preprocess_observation(observation: ndarray | dict[str, ndarray] | tuple[ndarray, ...] | Tensor | TensorDict | tuple[Tensor, ...] | dict[str, Tensor] | Number | list[ReasoningPrompts]) Tensor | TensorDict | tuple[Tensor, ...] | dict[str, Tensor]

Preprocesses observations for forward pass through neural network.

Parameters:

observation (ObservationType) – Observation of the environment

Returns:

Preprocessed observations

Return type:

torch.Tensor[float] or dict[str, torch.Tensor[float]] or tuple[torch.Tensor[float], …]

recompile() None

Recompiles the evolvable modules in the algorithm with the specified torch compiler.

register_mutation_hook(hook: Callable) None

Register a hook to be executed after a mutation is performed on the algorithm.

Parameters:

hook (Callable) – The hook to be executed after mutation.

register_network_group(group: NetworkGroup) None

Register a network group for the algorithm.

Parameters:

group (NetworkGroup) – The network group to register.

reinit_optimizers(optimizer: OptimizerConfig | None = None) None

Reinitialize the optimizers of an algorithm. If no optimizer is passed, all optimizers are reinitialized.

Parameters:

optimizer (OptimizerConfig | None, optional) – The optimizer to reinitialize, defaults to None, in which case all optimizers are reinitialized.

save_checkpoint(path: str) None

Save a checkpoint of agent properties and network weights to path.

Parameters:

path (string) – Location to save checkpoint at

set_training_mode(training: bool) None

Set the training mode of the algorithm.

Parameters:

training (bool) – If True, set the algorithm to training mode.

soft_update() None

Soft updates target network.
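
This follows the standard Polyak averaging rule controlled by the tau constructor argument. A minimal sketch of the rule (not the library's internal code):

import torch

def polyak_update(online_net, target_net, tau=1e-3):
    # target_params <- tau * online_params + (1 - tau) * target_params
    with torch.no_grad():
        for target_p, online_p in zip(target_net.parameters(), online_net.parameters()):
            target_p.mul_(1.0 - tau).add_(tau * online_p)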

test(env: str | Env | VectorEnv | AsyncVectorEnv, swap_channels: bool = False, max_steps: int | None = None, loop: int = 3) float

Return mean test score of agent in environment with epsilon-greedy policy.

Parameters:
  • env (Gym-style environment) – The environment to be tested in

  • swap_channels (bool, optional) – Swap image channels dimension from last to first [H, W, C] -> [C, H, W], defaults to False

  • max_steps (int, optional) – Maximum number of testing steps, defaults to None.

  • loop (int, optional) – Number of testing loops/episodes to complete. The returned score is the mean, defaults to 3

Returns:

Mean test score of the agent in the environment

Return type:

float

to_device(*experiences: Tensor | TensorDict | tuple[Tensor, ...] | dict[str, Tensor]) tuple[Tensor | TensorDict | tuple[Tensor, ...] | dict[str, Tensor], ...]

Move experiences to the device.

Parameters:

experiences (tuple[torch.Tensor[float], ...]) – Experiences to move to device

Returns:

Experiences on the device

Return type:

tuple[torch.Tensor[float], …]

unwrap_models() None

Unwraps the models in the algorithm from the accelerator.

wrap_models() None

Wrap the models in the algorithm with the accelerator.