[rllib] Modularize policy graph and trainer construction #4788

@ericl

Description

Describe the problem

A couple of improvements could be made to make it easier to customize policy graphs and trainers without needing to modify the RLlib source code directly. This would be in line with the example here (but would also include a builder for the policy graph itself): https://gist.github.com/ericl/0d3502f204c7612a429bfd3c3aba0307

For example:

PPOPolicyGraph = build_tf_policy_graph(
    model, loss_inputs, loss, ...?)
PPOTrainer = build_trainer(
    "PPO",
    default_config=DEFAULT_CONFIG,
    policy_graph=PPOPolicyGraph,
    make_optimizer=make_optimizer,
    validate_config=validate_config,
    after_optimizer_step=update_kl,
    before_train_step=warn_about_obs_filter,
    after_train_result=warn_about_bad_reward_scales)
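To make the proposal concrete, here is a minimal, hypothetical sketch of what a `build_trainer()` factory could look like. The hook names follow the example above, but the `BuiltTrainer` class body, its `train()` loop, and the callback signatures are all stand-ins for illustration, not RLlib's actual implementation:

```python
# Hypothetical sketch of a build_trainer() factory. The hook names
# mirror the proposal above; everything inside BuiltTrainer is a
# simplified stand-in, not RLlib's real Trainer class.

def build_trainer(name,
                  default_config,
                  policy_graph,
                  make_optimizer=None,
                  validate_config=None,
                  after_optimizer_step=None,
                  before_train_step=None,
                  after_train_result=None):
    """Return a Trainer-like class assembled from the given callbacks."""

    class BuiltTrainer:
        _name = name
        _default_config = default_config
        _policy_graph = policy_graph

        def __init__(self, config=None):
            # Merge user overrides into the algorithm's defaults.
            self.config = dict(default_config, **(config or {}))
            if validate_config:
                validate_config(self.config)
            self.optimizer = make_optimizer(self) if make_optimizer else None

        def train(self):
            if before_train_step:
                before_train_step(self)
            # ... run one optimizer round here (omitted in this sketch) ...
            if after_optimizer_step:
                after_optimizer_step(self, fetches={})
            result = {"trainer": self._name}
            if after_train_result:
                after_train_result(self, result)
            return result

    BuiltTrainer.__name__ = name + "Trainer"
    return BuiltTrainer
```

The appeal of this shape is that an algorithm like PPO reduces to a handful of plain functions plus a policy graph, with no subclassing of trainer internals required.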

We could also expose more of the loss input tensors to the Model class itself, so that custom losses can be defined without modifying the policy graph (though more complex losses may still require changes).
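That second idea could look something like the following toy sketch. The `Model` base class, the `custom_loss()` hook name, and the contents of `loss_inputs` are all assumptions made for illustration; plain Python floats stand in for TF tensors so the example stays self-contained:

```python
# Hypothetical sketch: the policy graph hands its loss and loss-input
# tensors to the Model, which may return a modified loss. Model,
# custom_loss(), and the loss_inputs keys are assumed names; floats
# stand in for TF tensors.

class Model:
    def custom_loss(self, policy_loss, loss_inputs):
        # Default: keep the policy graph's own loss unchanged.
        return policy_loss


class ModelWithImitationLoss(Model):
    """Adds a toy L2 imitation term using tensors the graph exposes."""

    def custom_loss(self, policy_loss, loss_inputs):
        # loss_inputs would hold placeholders such as "obs" and
        # "actions"; here we compute a fake penalty on the actions.
        imitation = sum(a * a for a in loss_inputs["actions"])
        return policy_loss + 0.1 * imitation
```

A user could then swap in `ModelWithImitationLoss` via the model config and get a modified objective without ever touching the policy graph's loss-building code.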
