Conversation
I would try forcing regular pass and fail results before adding any details to the callbacks, even if they are weird.
That's because it is. See this comment for my reasoning. It doesn't impact backwards compatibility at all though, as the passed and failed values are still the same as before.
This approach doesn't really work though. Consider the following:

```js
todo('some todo', function(assert) {
  assert.ok(true);
  assert.ok(false);
});
```

This should pass, but if we try to force the existing pass/fail model it isn't possible. This is because our current setup assumes all the assertions for a test must have the same result (passing) in order to be "good", but for todo tests we assume that a test can pass with assertions having different results (a mix of passing and failing).
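The mixed-result semantics described above can be sketched outside of QUnit. This is a hypothetical standalone helper (`runTodo` is not part of QUnit's API): a todo test counts as passing while at least one of its assertions still fails.

```javascript
// Hypothetical sketch of todo semantics, independent of QUnit internals:
// a todo test "passes" as long as at least one assertion still fails.
function runTodo( callback ) {
	var results = [];
	var assert = {
		ok: function( value ) {
			results.push( Boolean( value ) );
		}
	};
	callback( assert );

	// Invert the usual rule: any failing assertion keeps the todo "green".
	return results.some( function( passed ) {
		return !passed;
	} );
}
```

Under this rule, the example above (one passing and one failing assertion) reports as passing, while a todo whose assertions all pass would be flagged, signaling it can be promoted to a regular test.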
I'm resistant to adding more details to the log callback. A passing condition for a .todo is not the same concept as in a .test. The example you gave is a passing condition, and it's weird. What we can do is to not add them. They might count as skipped tests, or we should create a new todo count for them. I believe some reporters rely on the number of failed tests to tell something went wrong. I believe the following would be fine:

```js
details: {
  ...
  passed: 42, // Referring to passing .test only
  failed: 2, // any failures, including global errors
  skipped: 4, // skipped tests, does not count as passing today
  todo: 10 // .todo tests, maybe only those that haven't failed, good for reporters
}
```
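As a sketch of how the proposed shape could be aggregated (a hypothetical helper, not the PR's implementation), assuming each finished test carries exactly one of the four statuses:

```javascript
// Hypothetical aggregation into the details shape proposed above.
function summarize( tests ) {
	var details = { passed: 0, failed: 0, skipped: 0, todo: 0 };
	tests.forEach( function( test ) {
		// test.status is assumed to be "passed", "failed", "skipped", or "todo"
		details[ test.status ] += 1;
	} );
	return details;
}
```

With this shape, a reporter can keep treating a nonzero `failed` as "something went wrong" without special-casing todo tests.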
```js
var expectedFailures = details.failed - details.failedUnexpectedly;

if ( expectedFailures ) {
	html = html.concat( [
```
Nit: `html.push( …, … )` is slightly better.
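For illustration, the difference between the two calls (sample values are made up): `concat` allocates a new array that has to be assigned back, while `push` appends to the existing array in place.

```javascript
var html = [ "<p>" ];

// concat builds and returns a new array; html itself is unchanged here.
var viaConcat = html.concat( [ "pass", "</p>" ] );

// push appends the same items directly, with no intermediate array.
html.push( "pass", "</p>" );
```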
This is one of my favorite features to land next. I believe the holding point here is on `passedUnexpectedly` and `failedUnexpectedly`. The other comments should be easily covered. @jquery/qunit All your feedback here is really important to support @trentmwillis and me. Regardless of the way we decide to move this on, we need to land a strong API, as we won't be able to remove it easily later.
Which part of the API needs to be discussed?
Following the discussion above on whether we want to roll with `passedUnexpectedly`/`failedUnexpectedly`: I am not convinced they are good; we could solve this with the existing details properties. If the team believes these new properties are good, I'm going to accept them. Trent is doing great work and I feel bad for holding this up for so long; that's why I called for more feedback.

Is this what's currently implemented? What's the alternative? This?
There's a log test for it:

```js
QUnit.todo( module2Test5.name, function( assert ) {
	assert.ok( false, "expected failure" );
	assert.ok( true, "unexpected pass" );
} );

...

assert.deepEqual( testDoneContext, {
	...
	failed: 1,
	passed: 1,
	total: 2,
	failedUnexpectedly: 0,
	passedUnexpectedly: 1,
	...
}, "testDone context" );

...
```
With @trentmwillis's approach, a todo test will report like a regular test in the amount of failed and passed assertions. Reviewing this, I'm probably changing my previous request with the following statements:

This would be helpful to improve the logic on any reporter, including the built-in HTML reporter, replacing this:

```js
bad = details.failedUnexpectedly !== 0 || (details.todo && details.passedUnexpectedly === details.total);
```

Bringing this logic to the test result, we would assert that same log as:

```diff
  assert.deepEqual( testDoneContext, {
  	...
  	failed: 1,
  	passed: 1,
  	total: 2,
- 	failedUnexpectedly: 0,
- 	passedUnexpectedly: 1,
+ 	result: true,
  	...
  }, "testDone context" );
```

That would require less logic from reporters and keep failed/passed working the same way as today.
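To make the comparison concrete, here are the two reporter checks side by side (the function names are illustrative; the `bad` expression is the one quoted in the comment above):

```javascript
// Current check: a test is bad if it failed unexpectedly, or if it is a
// todo whose assertions all passed unexpectedly.
function isBad( details ) {
	return details.failedUnexpectedly !== 0 ||
		( details.todo && details.passedUnexpectedly === details.total );
}

// With a precomputed boolean `result` on the details, a reporter would
// only need:
function isBadSimple( details ) {
	return details.result === false;
}
```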
Adding a boolean property to indicate if the test passed or not seems fine to me, though it doesn't map well to the js-reporters spec. In this PR, there's also a
@jzaefferer I'm not familiar with the maturity level of the js-reporters spec, so forgive me if this is a dumb question. Could we augment the status enum to also include "pending"/"todo" (and perhaps, "ready" or "implemented" for those cases when all assertions pass)? My personal preference would be for the assertions to be represented as passing/failing as normal (i.e., don't invert their meaning, don't create passedUnexpectedly/failedUnexpectedly assertion arrays). Instead, the semantics would be that the overall test status would be something besides passing, to indicate the pending/todo nature of the test. I know I'm late to the party, though, so if there is any prior discussion I should be aware of, please feel free to link me to that and I will catch up. Thanks! |
@jzaefferer I think this can also be mapped on to the
@platinumazure I would prefer
@flore77 👍 I'm not attached to any particular name for this new status-- I just want a new status, so we don't have to muck around with the assertion count properties and can continue to have "passed" mean "passed", and "failed" mean "failed". |
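A sketch of the status-based approach being discussed (hypothetical; `testStatus` and the enum values are illustrative, not the js-reporters spec): the overall test status carries the todo semantics, while assertion counts keep their plain meaning.

```javascript
// Hypothetical: derive an overall test status, leaving per-assertion
// passed/failed counts untouched.
function testStatus( isTodo, anyAssertionFailed ) {
	if ( isTodo ) {
		// One possible refinement from the comments above: report "ready"
		// when a todo's assertions all pass, signaling it can be promoted
		// to a regular test.
		return anyAssertionFailed ? "todo" : "ready";
	}
	return anyAssertionFailed ? "failed" : "passed";
}
```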
Wouldn't you then need both
@jzaefferer yes, or since it is a special type of test, the reporter can check the

So I think it's better to know, just by looking at the status, whether it is a fail or a pass, so something like
@trentmwillis I'll be in CA this week for the TC39 meeting, but I'll have some free time this Friday and I would like to spend it on getting this done. I'm looking forward to seeing this land; it's great work and an anticipated feature.
I just looked over this again, and found nothing that hadn't already been commented on. Per #1006 (comment), though, we should explicitly decide whether the new details properties should quantify unexpected passes & failures (e.g., whether a failing non-todo test reports
Sorry for letting this sit for so long. Work got busy and then I didn't feel like reading through all the comments, haha. Anyway, I am now caught up and have thought about the issue some more. I think most of the confusion comes from the fact that I was trying to get the assertion counts to communicate what was happening, but in reality, we should be concerned with the test counts. In other words, we care if a test has failed and not if an individual assertion has failed. With that in mind, I think an approach similar to what @leobalter outlined here makes the most sense. I'll work on updating this branch with that approach and also will submit a PR to the js-reporters repo to discuss how I think it can integrate there.
As discussed in #787.
Will need updates to the test runner in order to not fail on CI, but figured I would get feedback on the implementation/API first.
One potential issue to note: in the aggregate cases (e.g., `moduleDone`/`done`) it is not possible to tell if a `todo` test has "failed". This case is identified at the individual test level as `details.total === details.passedUnexpectedly`, but we have no knowledge of that in the aggregate scenarios. Not sure if this needs to be addressed, as it is easily worked around by tracking if you have had any failures at the test level (as is done here).
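The workaround described above could look like this (a hypothetical tracker, not part of the PR): record todo failures at the `testDone` level, where `details.total === details.passedUnexpectedly` is still visible, and consult that flag when the aggregate callbacks fire.

```javascript
// Hypothetical test-level tracker for todo failures, since the aggregate
// moduleDone/done details cannot distinguish them.
function createTodoTracker() {
	var anyTodoFailed = false;
	return {
		// Call this from a testDone callback.
		testDone: function( details ) {
			if ( details.todo && details.total === details.passedUnexpectedly ) {
				anyTodoFailed = true;
			}
		},
		// Consult this from moduleDone/done.
		hasTodoFailures: function() {
			return anyTodoFailed;
		}
	};
}
```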