
DOC Fix typos, wording in plot_gradient_boosting_regression.py #16894

Merged
thomasjpfan merged 2 commits into scikit-learn:master from lucyleeow:DOC_gradboostreg
Apr 11, 2020

Conversation

@lucyleeow
Member

Reference Issues/PRs

None

What does this implement/fix? Explain your changes.

  • Makes some parts less wordy
  • Removes "We set random state to be consistent with the result." because the code below it does not use `random_state`
  • Explains what the `loss` parameter does, as is done for all the other parameters
  • Changes `clf` to `reg` in the code, since this is a regression model
  • Fixes grammar

Any other comments?

# However there are many other options (check
# :class:`~sklearn.ensemble.GradientBoostingRegressor` to see what are
# other possibilities)
# loss : loss function to optimize. Least squares function used in this case

Suggested change
# loss : loss function to optimize. Least squares function used in this case
# loss : loss function to optimize. The least squares function is used in this case


clf = ensemble.GradientBoostingRegressor(**params)
clf.fit(X_train, y_train)
reg = ensemble.GradientBoostingRegressor(**params)

This change is much appreciated.

# predictive feature, "bp", is also the same for the 2 methods. The remaining
# For this example, the impurity-based and permutation methods identify the
# same 2 strongly predictive features but not in the same order. The third most
# predictive feature, ``bp``, is also the same for the 2 methods. The remaining

I am +0 on this change, since "bp" is not code.

Suggested change
# predictive feature, ``bp``, is also the same for the 2 methods. The remaining
# predictive feature, "bp", is also the same for the 2 methods. The remaining
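The comparison that passage describes (impurity-based importances versus permutation importances, which may rank the top features in different orders) can be sketched as follows. Synthetic data stands in for the example's diabetes dataset, and `n_repeats` and other values are illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# Synthetic data with 2 informative features out of 4, in place of the
# diabetes dataset (where "bp" is one of the named features).
X, y = make_regression(n_samples=200, n_features=4, n_informative=2,
                       random_state=0)
reg = GradientBoostingRegressor(random_state=0).fit(X, y)

# Impurity-based importances are computed from the training data at fit time.
impurity = reg.feature_importances_
# Permutation importances are computed by shuffling each feature in turn.
perm = permutation_importance(reg, X, y, n_repeats=5, random_state=0)

print("impurity ranking:   ", impurity.argsort()[::-1])
print("permutation ranking:", perm.importances_mean.argsort()[::-1])
```

The two rankings usually agree on the strongly predictive features but not necessarily on their order, which is the point the review comment is about.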

# regression model. You can play with those parameters to see how the
# results change:
# for testing. We will also set the regression model parameters. You can play
# with these parameters to see how the results change:

Suggested change
# with these parameters to see how the results change:
# with these parameters to see how the results change.

@NicolasHug left a comment


Thanks @lucyleeow, LGTM when the comments are addressed.

#
# Finally, we will visualize the results. To do that we will first compute the
# test set deviance and then plot it.
# test set deviance and then plot it against boosting iteration.

Suggested change
# test set deviance and then plot it against boosting iteration.
# test set deviance and then plot it against boosting iterations.
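The computation that comment refers to, test-set deviance at each boosting iteration, can be sketched with `staged_predict` (a hedged sketch with synthetic data and illustrative parameters; the plotting itself is omitted):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the example's diabetes dataset.
X, y = make_regression(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_estimators = 50
reg = GradientBoostingRegressor(n_estimators=n_estimators, random_state=0)
reg.fit(X_train, y_train)

# One value per boosting iteration; for the least squares loss, the
# deviance is the mean squared error on the test set.
test_score = np.zeros(n_estimators)
for i, y_pred in enumerate(reg.staged_predict(X_test)):
    test_score[i] = mean_squared_error(y_test, y_pred)

print(f"deviance after 1 iteration: {test_score[0]:.1f}, "
      f"after {n_estimators}: {test_score[-1]:.1f}")
```

Plotting `test_score` against `np.arange(n_estimators)` gives the deviance-versus-boosting-iterations curve the comment is about.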

@lucyleeow
Member Author

Thanks @thomasjpfan and @NicolasHug. Suggestions added.

@thomasjpfan thomasjpfan merged commit 8122e77 into scikit-learn:master Apr 11, 2020
@lucyleeow lucyleeow deleted the DOC_gradboostreg branch April 15, 2020 15:14
gio8tisu pushed a commit to gio8tisu/scikit-learn that referenced this pull request May 15, 2020


3 participants