
Core: Implement QUnit.todo #1006

Closed
trentmwillis wants to merge 2 commits into qunitjs:master from trentmwillis:todo

Conversation

@trentmwillis
Member

As discussed in #787.

Will need updates to the test runner in order to not fail on CI, but figured I would get feedback on the implementation/API first.

One potential issue to note: In the aggregate cases (e.g., moduleDone / done) it is not possible to tell if a todo test has "failed". This case is identified at the individual test level as details.total === details.passedUnexpectedly but we have no knowledge of that in the aggregate scenarios. Not sure if this needs to be addressed as it is easily worked around by tracking if you have had any failures at the test-level (as is done here).
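The workaround mentioned above can be sketched in a few lines. This is a hypothetical consumer, not part of the PR itself; it assumes each `testDone` `details` object carries the `todo`, `total`, and `passedUnexpectedly` fields proposed here:

```javascript
// Track "failed" todo tests ourselves, since aggregate callbacks
// (moduleDone / done) cannot distinguish them. A todo test "fails"
// when every one of its assertions passed unexpectedly.
var failedTodos = [];

function onTestDone(details) {
  if (details.todo && details.total === details.passedUnexpectedly) {
    failedTodos.push(details.name);
  }
}

// Feed in two synthetic results: "wip A" passed all assertions
// (so it fails as a todo), "wip B" still has a failing assertion.
onTestDone({ name: "wip A", todo: true, total: 2, passedUnexpectedly: 2 });
onTestDone({ name: "wip B", todo: true, total: 2, passedUnexpectedly: 1 });
console.log(failedTodos); // only "wip A" is a failed todo
```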

@leobalter
Member

In the aggregate cases (e.g., moduleDone / done) it is not possible to tell if a todo test has "failed". This case is identified at the individual test level as details.total === details.passedUnexpectedly

This passedUnexpectedly looks like a regular fail condition. The todo API concept is a bit inverted from regular tests, but I believe we should keep what we have for passed and failed.

I would try forcing regular pass and fail results before adding any details to the callbacks, even if they look weird when using the todo method.

@trentmwillis
Member Author

This passedUnexpectedly looks like a regular fail condition.

That's because it is. See this comment for my reasoning. It doesn't impact backwards compatibility at all though, as the passed and failed values are still the same as before.

I would try forcing regular pass and fail results before adding any details to the callbacks, even if they look weird when using the todo method.

This approach doesn't really work though. Consider the following:

todo('some todo', function(assert) {
  assert.ok(true);
  assert.ok(false);
});

This should pass, but if we try to force the existing pass/fail model it isn't possible. This is because our current setup assumes all the assertions for a test must have the same result (passing) in order to be "good", whereas a todo test can pass with a mix of passing and failing assertions.
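The inverted per-test rule described above can be captured in a tiny helper. This is a sketch of the concept, not QUnit's internal code; `testPassed` is a hypothetical name:

```javascript
// A regular test passes only if every assertion passed; a todo test
// passes as long as at least one assertion still failed.
function testPassed(assertionResults, isTodo) {
  var allPassed = assertionResults.every(Boolean);
  return isTodo ? !allPassed : allPassed;
}

// The example above: one passing and one failing assertion.
console.log(testPassed([true, false], true));  // passes as a todo
console.log(testPassed([true, false], false)); // fails as a regular test
```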

@leobalter
Member

leobalter commented Jul 11, 2016

I'm resistant to adding more details to the log callback.

A passing condition for a .todo is not the same concept as in a .test. The example you gave is a passing condition, and it's weird.

What we can do is not count .todo runs as passed tests, but add them to failed only when they fail their expected behavior (with every assertion passing).

They might count as skipped tests or we should create a new todo count for them.

I believe some reporters rely on the number of failed tests to tell something went wrong.

I believe the following would be fine:

details: {
  ...
  passed: 42, // Referring to passing .test only
  failed: 2, // any failures, including global errors
  skipped: 4, // skipped tests, does not count as passing today
  todo: 10 // .todo tests, maybe only those that haven't failed, good for reporters
}
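A reporter consuming these counts could summarize a run without any todo-specific branching. This is a sketch against the proposed (not shipped) details shape above; `summarize` is a hypothetical helper:

```javascript
// Build a one-line summary from the proposed aggregate counts:
// passed covers .test only, failed covers any failure, and
// skipped/todo are tracked separately.
function summarize(details) {
  var run = details.passed + details.failed + details.todo;
  var ok = details.failed === 0;
  return (ok ? "OK" : "FAILED") + ": " +
    run + " run, " + details.skipped + " skipped, " +
    details.todo + " todo";
}

console.log(summarize({ passed: 42, failed: 2, skipped: 4, todo: 10 }));
// → "FAILED: 54 run, 4 skipped, 10 todo"
```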

var expectedFailures = details.failed - details.failedUnexpectedly;

if ( expectedFailures ) {
html = html.concat( [
Member

Nit: html.push( …, … ) is slightly better.

@leobalter
Member

This is one of my favorite features to land next. I believe the sticking point here is passedUnexpectedly and failedUnexpectedly. The other comments should be easy to cover.

@jquery/qunit All your feedback here is really important to support @trentmwillis and me. Regardless of which way we decide to move this along, we need to land a strong API, as we won't be able to remove it easily later.

@jzaefferer
Member

Which part of the API needs to be discussed?

@leobalter
Member

Following the discussion above: whether we want to roll with passedUnexpectedly and failedUnexpectedly.

I am not convinced it is good; we could solve this with the existing details properties.

If the team believes these new properties are good, I'll accept them. Trent is doing great work and I feel bad for holding this up for so long; that's why I called for more feedback.

@jzaefferer
Member

QUnit.todo("wip", function(assert) {
  assert.ok(false)
})
> testEnd { passed: 0, failed: 0, passedUnexpectedly: 0, failedUnexpectedly: 1 }

Is this what's currently implemented?

What's the alternative? This?

> testEnd { passed: 1, failed: 0 }

@leobalter
Member

There's a log test for it:

QUnit.todo( module2Test5.name, function( assert ) {
    assert.ok( false, "expected failure" );
    assert.ok( true, "unexpected pass" );
} );

...
    assert.deepEqual( testDoneContext, {
        ...
        failed: 1,
        passed: 1,
        total: 2,
        failedUnexpectedly: 0,
        passedUnexpectedly: 1,
        ...
    }, "testDone context" );
...

@leobalter
Member

With @trentmwillis's approach, a todo test will report the same failed and passed assertion counts as a regular test.

Reviewing this I'm probably changing my previous request with the following statements:

  • yes, it makes sense to preserve passed/failed. I've got it.
  • the *Unexpectedly properties are still weird. Maybe we can expose only a single property with the test result?

This would be helpful to improve the logic in any reporter, including the built-in HTML reporter, replacing this:

bad = details.failedUnexpectedly !== 0 || (details.todo && details.passedUnexpectedly === details.total);

Bringing this logic to the test result, we would assert that same log as:

assert.deepEqual( testDoneContext, {
        ...
        failed: 1,
        passed: 1,
        total: 2,
-        failedUnexpectedly: 0,
-        passedUnexpectedly: 1,
+        result: true,
        ...
    }, "testDone context" );

That would require less logic from reporters and keep the failed/passed in the same way.
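The simplification leobalter describes can be put side by side. The counter-based expression is quoted from the comment above; the function names and the sample details object are hypothetical, against the proposed (not shipped) shapes:

```javascript
// With the *Unexpectedly counters, a reporter needs the compound check:
function isBadWithCounters(details) {
  return details.failedUnexpectedly !== 0 ||
    (details.todo && details.passedUnexpectedly === details.total);
}

// With a single `result` boolean, the same check collapses to:
function isBadWithResult(details) {
  return !details.result;
}

// The todo test from the log example above: one expected failure,
// one unexpected pass, overall still a passing todo.
var sample = {
  todo: true, total: 2,
  failedUnexpectedly: 0, passedUnexpectedly: 1,
  result: true
};
console.log(isBadWithCounters(sample), isBadWithResult(sample));
// both checks agree the test is not "bad"
```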

@jzaefferer
Member

Adding a boolean property to indicate if the test passed or not seems fine to me, though it doesn't map well to the Test structure in js-reporters: https://github.com/js-reporters/js-reporters#event-data (scroll down to "Test"). I think the closest equivalent is the status property, which can be "passed", "failed" and "skipped".

In this PR, there's also a todo boolean property, right? I'm not sure if that can be mapped to one of the existing js-reporters Test properties, or if it should be added. @flore77 thoughts?

@platinumazure
Contributor

platinumazure commented Aug 16, 2016

@jzaefferer I'm not familiar with the maturity level of the js-reporters spec, so forgive me if this is a dumb question. Could we augment the status enum to also include "pending"/"todo" (and perhaps, "ready" or "implemented" for those cases when all assertions pass)?

My personal preference would be for the assertions to be represented as passing/failing as normal (i.e., don't invert their meaning, don't create passedUnexpectedly/failedUnexpectedly assertion arrays). Instead, the semantics would be that the overall test status would be something besides passing, to indicate the pending/todo nature of the test.

I know I'm late to the party, though, so if there is any prior discussion I should be aware of, please feel free to link me to that and I will catch up. Thanks!

@flore77
Contributor

flore77 commented Aug 16, 2016

here's also a todo boolean property

@jzaefferer I think this can also be mapped onto the status property. From what I have seen, it denotes whether a test is a todo test. Am I right, @trentmwillis, or did I misunderstand? As @platinumazure also suggested, I think we can introduce one or more new statuses.

include "pending"/"todo"

@platinumazure I would prefer todo or something else, because pending is used in other testing frameworks (Mocha) for skipped tests.

@platinumazure
Contributor

@flore77 👍 I'm not attached to any particular name for this new status; I just want a new status, so we don't have to muck around with the assertion count properties and can continue to have "passed" mean "passed" and "failed" mean "failed".

@jzaefferer
Member

Wouldn't you then need both todo-passed and todo-failed? Unlike skipped, a todo test can still pass/fail.

@flore77
Contributor

flore77 commented Aug 17, 2016

@jzaefferer Yes. Alternatively, since it is a special type of test, the reporter could check the assertions and errors properties to establish whether it is a pass or a fail, but that would require extra logic in the reporter and I don't see any advantage in it.

So I think it's better to be able to tell a pass from a fail by looking at the status alone; something like todo-passed and todo-failed sounds good.
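Deriving such a status from a finished test could look like the sketch below. The "todo-passed"/"todo-failed" names are the proposal from this thread, not the js-reporters spec, and the shape of the test object (`skipped`, `todo`, `assertions`) is an assumption for illustration:

```javascript
// Map a finished test onto a single status string, so reporters
// never have to inspect assertion counts themselves.
function deriveStatus(test) {
  if (test.skipped) { return "skipped"; }
  var anyFailed = test.assertions.some(function (a) { return !a.passed; });
  if (test.todo) {
    // A todo test passes as long as at least one assertion failed.
    return anyFailed ? "todo-passed" : "todo-failed";
  }
  return anyFailed ? "failed" : "passed";
}

console.log(deriveStatus({ todo: true, assertions: [{ passed: false }] }));
// → "todo-passed"
console.log(deriveStatus({ todo: false, assertions: [{ passed: true }] }));
// → "passed"
```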

@leobalter
Member

@trentmwillis I'll be in CA this week for the TC39 meeting, but I'll have some free time this Friday and I would like to spend it working to get this done. I'm looking forward to seeing this land; it's great work and an anticipated feature.

@gibson042
Member

I just looked over this again, and found nothing that hadn't already been commented on. Per #1006 (comment) , though, we should explicitly decide whether the new details properties should quantify unexpected passes & failures (e.g., a failing non-todo test reports failedUnexpectedly: N [N > 0]), or expected ones (e.g., failedAsExpected: 0 for the same case). I don't strongly favor one over the other, but like @leobalter said, we won't be able to change it later.

@trentmwillis
Member Author

Sorry for letting this sit for so long. Work got busy and then I didn't feel like reading through all the comments haha.

Anyway, now that I am caught up and have thought about the issue some more, I think most of the confusion comes from the fact that I was trying to get the assertion counts to communicate what was happening, when in reality we should be concerned with the test counts. In other words, we care whether a test has failed, not whether an individual assertion has failed.

With that in mind, I think an approach similar to the one @leobalter outlined here makes the most sense. I'll work on updating this branch with that approach and will also submit a PR to the js-reporters repo to discuss how I think it can integrate there.


7 participants