misc: update kl penalty names #979
richardodliu wants to merge 3 commits into verl-project:main from richardodliu:main
Conversation
rename kl penalty
Could you point to the reference for the conventional naming?

http://joschu.net/blog/kl-approx.html
Nice, could you add a reference for the naming in the code? Thanks!

Already done.
eric-haibin-lin left a comment:
Thank you for the PR! We'd like to keep backward compatibility. Could you also add checks for the old names as well? We can update the examples and docs with the better names to guide new users.
Is this version OK?
```python
def kl_penalty(logprob: torch.FloatTensor, ref_logprob: torch.FloatTensor, kl_penalty) -> torch.FloatTensor:
    """Compute KL divergence given logprob and ref_logprob.
    Copied from https://github.com/huggingface/trl/blob/main/trl/trainer/ppo_trainer.py#L1104
    reference from https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/models/utils.py#L7
```
this function originated from https://github.com/huggingface/trl/blob/v0.11.0/trl/trainer/ppo_trainer.py#L1150-L1164
Yes, but these three methods come from joschu's blog. TRL named them incorrectly; I am trying to correct that.
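For context, the three per-token KL estimators named k1, k2, and k3 in http://joschu.net/blog/kl-approx.html can be sketched in plain Python. This is an illustrative scalar version, not verl's actual API: the function name and interface here are assumptions, and verl operates on torch tensors rather than floats.

```python
import math

def kl_estimate(logprob: float, ref_logprob: float, kind: str) -> float:
    """Per-token KL(pi || pi_ref) estimators from joschu's blog post.

    With r = pi_ref(x) / pi(x), log r = ref_logprob - logprob:
      k1 = -log r            (unbiased, high variance)
      k2 = (log r)^2 / 2     (biased, low variance)
      k3 = (r - 1) - log r   (unbiased, low variance, always >= 0)
    """
    logr = ref_logprob - logprob
    if kind == "k1":
        return -logr
    if kind == "k2":
        return 0.5 * logr * logr
    if kind == "k3":
        # expm1(logr) computes r - 1 without catastrophic cancellation
        return math.expm1(logr) - logr
    raise ValueError(f"unknown estimator: {kind}")
```

Note that k1 can be negative on a single sample, while k2 and k3 are pointwise non-negative, which is one reason the blog recommends k3 as a penalty term.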
```diff
@@ -458,18 +458,18 @@ def kl_penalty(logprob: torch.FloatTensor, ref_logprob: torch.FloatTensor, kl_pe
     Returns:
```
Please update the list of values in docs/examples/config.rst, and remove the old names from the doc since we do not want others to use them.
Also, please search the codebase for existing values used in scripts, for example:
- examples/reinforce_plus_plus_trainer/run_qwen2-7b_math_rf.sh
- examples/reinforce_plus_plus_trainer/run_qwen2-7b_math_rf_baseline.sh
```diff
     """
-    if kl_penalty == "kl":
+    if kl_penalty == "k1":
```
we can simply do `if kl_penalty in ('kl', 'k1'):`
and then remove the verbose changes in verl/trainer/main_ppo.py
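The reviewer's alias approach can be sketched as a small normalization helper. Only the `'kl'`/`'k1'` pairing is confirmed by this thread; the other aliases below (`'mse'` → `'k2'`, `'low_var_kl'` → `'k3'`) are illustrative assumptions about the legacy names, not verl's verified option list.

```python
def normalize_kl_penalty_name(kl_penalty: str) -> str:
    """Map a legacy kl_penalty name to its k1/k2/k3 equivalent.

    Accepting both spellings keeps old configs and scripts working
    while docs and examples migrate to the new names.
    """
    # 'kl'/'k1' pairing is from the PR discussion; the rest are
    # hypothetical legacy names used here only for illustration.
    aliases = {"kl": "k1", "mse": "k2", "low_var_kl": "k3"}
    return aliases.get(kl_penalty, kl_penalty)
```

With this helper, the dispatch code only ever compares against the canonical names, so the verbose per-branch changes elsewhere become unnecessary.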
Thanks for the suggestion. Moving the changes to #1781.
### Checklist Before Starting

- [x] Search for similar PR(s). This PR includes contribution and suggestions from [richardodliu](https://github.com/richardodliu) in #979

### What does this PR do?

Update the documentation page and include key configs for PPO and other recipes.

Pending docs:
- GRPO
- DrGRPO
- DAPO, etc.

TODO: let config.rst directly show the content of ppo_trainer.yaml and other related yaml files. In the yaml file, colocate the comment and explanation with the option. This way the yaml is always consistent with the documentation page. For critical features or algorithms, we list the core configs in a self-contained page like PPO.md.

### High-Level Design

None

### Specific Changes

- use k1, k2, k3 for the kl calculation, still backward compatible
- changed ppo.rst to baseline.md
- added ppo.md to explain core options for ppo

### Test

> For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### Additional Info.

- **Issue Number**: Fixes issue # or discussion # if any.
- **Training**: [Note which backend this PR will affect: FSDP, Megatron, both, or none]
- **Inference**: [Note which backend this PR will affect: vLLM, SGLang, both, or none]

### Checklist Before Submitting

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl?tab=readme-ov-file#contribution-guide).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl?tab=readme-ov-file#code-linting-and-formatting).
- [x] Add `[BREAKING]` to the PR title if it breaks any API.
- [x] Update the documentation about your changes in the [docs](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add CI test(s) if necessary.
Rename the KL penalty options to be consistent with the conventional k1/k2/k3 naming.