9.0.0 removed qunit.log callback #207

Closed
vtintillier opened this issue Jun 10, 2024 · 6 comments

@vtintillier
Contributor

vtintillier commented Jun 10, 2024

Hello @Krinkle

With the last release, we have 23c08c9 that removes the qunit.log event.

We use it to output each assertion, to get a command-line output similar to the browser output. It helps us better understand what happened when a test fails.

Another side effect is that for a long test with lots of assertions, we had frequent messages in the console, and thus the timeout check at

if (self.__grunt_contrib_qunit_timeout__) {
  setTimeout(function checkTimeout() {
    if ((performance.now() - lastMessage) > self.__grunt_contrib_qunit_timeout__) {
      sendMessage('fail.timeout');
    } else {
      // Keep checking
      setTimeout(checkTimeout, 1000);
    }
  }, 1000);
}
would not trip, thanks to the frequent qunit.log event messages. But now we need to set the timeout option to a duration bigger than our longest individual test, otherwise we get the timeout error:

>> Chrome timed out, possibly due to:
>> - QUnit is not loaded correctly.
>> - A missing QUnit start() call.
>> - Or, a misconfiguration of this task.
>> There was an error with headless chrome
Fatal error: http://localhost:7050/qunit.html

Did I miss an event I can use to replace qunit.log? If not, can we keep that in the bridge? Or is there another way to fix our issue?

Thanks

@vtintillier
Contributor Author

(Also, the summaryOnly option docs still reference qunit.log:

When true, this will suppress the default logging for individually failed tests. Customized logging can be performed by listening to and responding to `qunit.log` events.)

@Krinkle
Contributor

Krinkle commented Jun 10, 2024

@vtintillier I would not expect a test to start failing if it takes the same amount of time but makes fewer assertions for some reason. The control mechanism for this inside QUnit is QUnit.config.testTimeout (global) and assert.timeout (per-test). As such, I believe it is preferred, and easier to explain and understand, that the grunt-contrib-qunit timeout should be set higher than your highest QUnit timeout.

I understand that if you relied on this before, this is an extra change, but I think it is ultimately better in the long run. I suspect the main reason for a low bridge timeout is a desire for fast feedback. For example, if most of your tests are fast and don't use assert.timeout() to allow for a higher timeout, then if one of your fast tests gets stuck, you want fast feedback from grunt-contrib-qunit.

To achieve that effect, make sure QUnit.config.testTimeout is set to a reasonably low default (in your HTML or bootstrap script). For example, you can set it to the same value your bridge timeout was set to before. QUnit will then detect those failures quickly and communicate them to your command-line or CI job output. This also has the benefit of making your tests behave the same both in and outside grunt-contrib-qunit, so that a test doesn't pass when run manually in the browser but fail in grunt-contrib-qunit.
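
As a minimal sketch of such a default, assuming a 5-second budget (use whatever your old bridge timeout was):

// In the test HTML page or a bootstrap script loaded before the test files.
// Individual slow tests can still raise their own limit via assert.timeout().
QUnit.config.testTimeout = 5000; // milliseconds; illustrative value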

You can then raise the bridge timeout to something sufficiently high so as to only be concerned about its main purpose, which is e.g. a crashed or non-starting browser.
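
In the Gruntfile that could look roughly like this; the target name and values are illustrative, and `timeout` is the grunt-contrib-qunit bridge timeout in milliseconds:

// Gruntfile.js (sketch; values are illustrative)
module.exports = function (grunt) {
  grunt.initConfig({
    qunit: {
      all: {
        options: {
          urls: ['http://localhost:7050/qunit.html'],
          // The bridge timeout now only needs to catch a crashed or
          // non-starting browser, so it can be comfortably high.
          timeout: 60000
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-qunit');
};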

I hope the above two reasons show why it is considered a good thing that the bridge timeout is not used to detect individually slow tests.

There are two issues that I would like to fix however:

  1. Update docs to not mention qunit.log. Thanks for noticing this.
  2. Unless someone has customised the timeout in their Gruntfile, we can make grunt-contrib-qunit smarter: after the first begin/testStart message, check the value of QUnit.config.testTimeout and use that instead. A lower bridge timeout would only cause failures that a developer is most likely to fix by raising their grunt timeout to match, which is extra work we can simply avoid. (A rough sketch of the idea follows below.)
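
Roughly, the idea could look like the following inside the test page, reusing the bridge global from the snippet quoted above. This is only a sketch, not the actual implementation, and it ignores the "unless customised in the Gruntfile" part:

// Sketch only: once QUnit starts, let the in-page testTimeout drive the
// bridge's inactivity limit (plus some slack), so a slow-but-legal test
// does not trip the bridge's timeout check.
QUnit.on('runStart', function () {
  var testTimeout = QUnit.config.testTimeout;
  if (typeof testTimeout === 'number' &&
      testTimeout + 2000 > self.__grunt_contrib_qunit_timeout__) {
    self.__grunt_contrib_qunit_timeout__ = testTimeout + 2000;
  }
});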

Krinkle self-assigned this Jun 10, 2024
Krinkle added the bug label Jun 10, 2024
@vtintillier
Contributor Author

The control mechanism for this inside QUnit is QUnit.config.testTimeout (global) and assert.timeout (per-test). As such, I believe it is preferred, and easier to explain and understand, that the grunt-contrib-qunit timeout should be set higher than your highest QUnit timeout.

For now, our only timeout is in our Gruntfile.js. I will look at adding timeout configuration in QUnit itself too. My understanding is that you will make changes so that, with proper timeouts in QUnit, we would not need to increase the grunt-contrib-qunit timeout to a very high value.
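
For reference, the per-test variant is assert.timeout(); a sketch with a made-up slow test (runLongImport() is hypothetical):

// Sketch: a known-slow test raises its own limit, while
// QUnit.config.testTimeout stays low for everything else.
QUnit.test('imports a large dataset', function (assert) {
  assert.timeout(30000); // milliseconds; illustrative value
  var done = assert.async();
  runLongImport().then(function (result) {
    assert.true(result.ok, 'import finished');
    done();
  });
});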

With the last release, we have 23c08c9 that removes the qunit.log event.

We use it to output each assertion, to get a command-line output similar to the browser output. It helps us better understand what happened when a test fails.

So is it possible for us to keep our "one log per assertion" output?

@Krinkle
Contributor

Krinkle commented Jun 10, 2024

So is it possible for us to keep our "one log per assertion" output?
We use it to output each assertion, to get a command-line output similar to the browser output. It helps us better understand what happened when a test fails.

I have not previously heard of this as a desired outcome. But, I don't mind at all adding it back in a minor release soon. That'd be no problem.

In general, I recommend using the testEnd event when implementing reporters. These events are usually frequent enough to get a sense of where the test run is. Each test name is expected to be unique and defined in only one place in the suite files. The failing assertion will also have a stack trace with a file and line number to find the line directly. Although in the case of a timeout in a single very long test, you would indeed not know which passing assertions came before, unless you opened the HTML interface in a browser.
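
As an illustration, a minimal per-test reporter built on QUnit's testEnd event might look like this; the field names assume QUnit's js-reporters-style event data (QUnit.on is available since QUnit 2.2):

// Sketch: log one line per finished test from the test page's bootstrap.
QUnit.on('testEnd', function (testEnd) {
  console.log('[' + testEnd.status + '] ' + testEnd.fullName.join(' > ') +
    ' (' + Math.round(testEnd.runtime) + 'ms)');
  testEnd.errors.forEach(function (error) {
    // Failed assertions carry a message and a stack trace.
    console.log('  ' + (error.message || 'failed assertion'));
    if (error.stack) {
      console.log('  ' + error.stack);
    }
  });
});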

I do want to mention that QUnit has native support for data providers now. I have often seen that long tests are written because some level of automation or iteration is happening inside the test, which arguably makes for separate test cases. If that's the case for you, consider using QUnit.test.each(), which might address this problem for you. It also saves you from manually composing assertion messages that incorporate each "case", and it offers per-case timeouts, per-case "re-run" in the browser interface, and more.
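
A data-provider test with QUnit.test.each() could look roughly like this; the test name, dataset, and parseDuration() helper are made up for illustration:

// Sketch: each entry in the dataset becomes its own test case, with its
// own timeout and its own "re-run" link in the HTML reporter.
QUnit.test.each('parses a duration string', {
  seconds: ['30s', 30000],
  minutes: ['2m', 120000],
  hours:   ['1h', 3600000]
}, function (assert, data) {
  var input = data[0];
  var expected = data[1];
  assert.strictEqual(parseDuration(input), expected); // parseDuration() is hypothetical
});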

Having said that, if you like how you have it, I'm happy to add it back. I removed it on the assumption that viable alternatives exist. When that isn't true, I have no reason to keep it removed. Thanks for reporting this!

Can you share how you use this event, so that I can see how it fits in with the other output, and understand how it looks to developers using it? I might also add a test based on it.

Krinkle added a commit that referenced this issue Jun 11, 2024
* Now as one parameter, matching the QUnit signature, instead of
  spread as positional parameters.
  https://qunitjs.com/api/callbacks/QUnit.log/

* Now with caveats documented in-code, and no longer promoted/recommended
  from the `summaryOnly` option docs.

Ref #207
Krinkle closed this as completed Jun 11, 2024
@vtintillier
Contributor Author

Thanks a lot for adding it back 👍

@vtintillier
Contributor Author

I do want to mention that QUnit has native support for data providers now. I have often seen that long tests are written because some level of automation or iteration is happening inside the test, which arguably makes for separate test cases. If that's the case for you, consider using QUnit.test.each(), which might address this problem for you. It also saves you from manually composing assertion messages that incorporate each "case", and it offers per-case timeouts, per-case "re-run" in the browser interface, and more.

Thank you for pointing me to this. We were implementing this on our own for some tests, to make them look like JUnit parameterized tests, but this looks much cleaner 👍
And indeed, maybe the long tests could use that as well...
