add doctests for bart like seq2seq models #15987

Merged
patil-suraj merged 15 commits into huggingface:master from patil-suraj:doctest-bart
Mar 9, 2022
Conversation

@patil-suraj (Contributor)

What does this PR do?

Enable doctests for bart like seq2seq models.

HuggingFaceDocBuilderDev commented Mar 8, 2022

The documentation is not available anymore as the PR was closed or merged.

@sgugger (Collaborator) left a comment

Thanks for fixing!

@sgugger (Collaborator) left a comment

Ah, GitHub was not showing me everything. ELLIPSIS is activated by default, so there is no need to add it in comments. We do have a directive that ignores the outputs and should be used for those torch Generators.

You seem to have one badly closed docstring or something like that.

@patrickvonplaten (Contributor) left a comment

Very nice!

@patil-suraj patil-suraj merged commit a69e185 into huggingface:master Mar 9, 2022
@patil-suraj patil-suraj deleted the doctest-bart branch March 9, 2022 19:30

>>> end_scores = outputs.end_logits
>>> list(end_scores.shape)
{expected_output}

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
{expected_loss}

I went over this PR too quickly :-/ Why didn't we check for the actual answer of the model here?

The model is a QA model -> it's not very descriptive to just test for the output shape
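A sketch of what checking the actual answer could look like: take the argmax of the start and end logits and join the tokens in that span. The tokens and logit values below are made-up stand-ins for real model outputs, using plain Python lists in place of torch tensors:

```python
# Hypothetical tokenized input and QA head outputs (illustrative values only).
tokens = ["Hello", ",", "my", "dog", "is", "cute"]
start_logits = [0.1, 0.2, 0.1, 3.0, 0.5, 0.2]
end_logits = [0.1, 0.1, 0.2, 0.4, 0.3, 4.0]

# Argmax over each logit vector gives the predicted span boundaries.
start = max(range(len(start_logits)), key=start_logits.__getitem__)
end = max(range(len(end_logits)), key=end_logits.__getitem__)
answer = " ".join(tokens[start : end + 1])
print(answer)  # dog is cute
```

Asserting on a decoded string like this makes the doctest meaningful: it fails if the checkpoint regresses, not just if the tensor shape changes.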

>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
>>> list(logits.shape)

same here -> we should test that the logits give a sensible answer
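For a sequence-classification head, checking for a sensible answer could mean taking the argmax over the logits and mapping it through the label names. A minimal stand-in sketch (the `id2label` mapping and logit values are illustrative, not from any real checkpoint):

```python
# Hypothetical two-class sentiment head output (illustrative values only).
id2label = {0: "NEGATIVE", 1: "POSITIVE"}
logits = [-1.2, 3.4]

# The predicted class is the index of the largest logit.
predicted_id = max(range(len(logits)), key=logits.__getitem__)
print(id2label[predicted_id])  # POSITIVE
```

A doctest asserting on the decoded label string exercises the full prediction path, not just the output shape.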

>>> labels = torch.tensor([[1, 1]], dtype=torch.float) # need dtype=float for BCEWithLogitsLoss
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
same here
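In the multi-label case (the `BCEWithLogitsLoss` path in the snippet above), a sensible-answer check could apply a sigmoid to each logit and threshold at 0.5, then compare against the labels. A pure-Python sketch with made-up logits:

```python
import math

def sigmoid(x):
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical per-label logits and their gold labels (illustrative values only).
logits = [2.0, -1.5]
labels = [1, 0]

# Each label is predicted independently: probability > 0.5 means "present".
predictions = [1 if sigmoid(logit) > 0.5 else 0 for logit in logits]
print(predictions == labels)  # True
```

Thresholding per label, rather than taking a single argmax, is what distinguishes the multi-label setup from ordinary single-label classification.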
