2023 Dependency updates (resolves #32). #33
Conversation
The rollup and functional component tests hang both in CI and in local testing. I will have to look into it tomorrow.

I experimented locally and eventually discovered that the major update to

@amb26, if you could cast an eye on the promise-related changes, I would appreciate the help.
src/js/testem-component.js (Outdated)

    });
    return rimrafPromise;
    var fluidPromise = fluid.promise();
    var rimrafPromise = rimraf(path, fluid.copy(rimrafOptions));
Any particular reason we can't just return the upstream promise?
I was a bit worried that I might have used an Infusion convention somewhere (i.e. calling reject), but the tests pass with this streamlined.
    error ? promise.reject(error) : promise.resolve();
    });
    var rimRafPromise = rimraf(resolvedPathToRemove);
    rimRafPromise.then(promise.resolve, promise.reject);
Could we return the upstream promise?
This also works streamlined.
@amb26, thanks for the quick review. I believe I've addressed your comments.
…oid node@12 warnings.
On running the tests in Windows I get the following failure:
A quick search suggests it may be caused by this rimraf update?
Thanks for finding that issue. We already depend on
It seems like I use it directly; this probably predated my work on
As we (re)create the reports, coverage, and instrumented directories as part of the tests, we don't particularly need the wildcards. I have removed them, which should allow you to run the tests again.
Thanks for that fix, I can now start the test run OK on Windows. One helpful tweak would be to adjust the testemOptions in harness.js to exclude IE, as
It may well not be worth tracking down these remaining issues since they may be some idiosyncratic feature of my setup, but I get random and intermittent errors of this form:
I've done two runs: one time I got 80/82 and another time 80/83, which makes me suspect that no tests are actually failing, but that some overall assertion for the testem invocation is failing. I'm guessing the code 1 represents the process exit code, but I'm not sure what it signifies.
Happy to exclude IE; do me a favour and include the output of

I also plan to reboot into Windows and try the tests there, but I don't have IE installed.
Here it is:
Cheers, I think I've tamed that hydra and prevented IE from launching. I am still hitting the bug I reported where Firefox doesn't close after a run; I assume you're running in headless mode?
My (Windows 10) machine had problems with Firefox when running in non-headless mode because of this issue I reported against testem, but once I added the environment variable to force "headless" mode, I was able to get three consistent runs with no failures. Give the new settings a try and we can discuss whether what's here is enough for now. One option I considered is adding support for a browser environment variable, as we did with fluid-webdriver.
Or perhaps just a passthrough that ensures that the
Not sure I quite follow this.
Also, I now have 81/81 tests passing; I'm puzzled, since I didn't see an extra test being added ...
There should be (and are) 81 tests run in a normal run. It is a bit concerning that a run could pass with fewer; I will have to give that some thought (perhaps for a future hardening pull, as IMO it's more related to a need for more work on the Windows side than to the changes here).
My "headless" mode predates testem's support for headless browsers, and launches Chrome and Firefox with command-line options that launch them in headless mode. It does try to launch both. I've been thinking of making a (separate) ticket to remove my headless support in favour of testem's built-in support for headless browsers. I'd make sure to interpret the existing
That suggests to me that you're not running in headless mode. I think I have seen the same thing on Windows 10. It may be down to the
In my testing, IE would launch without running tests or closing, and would block the test run until you closed the window. The last change I made avoids this, which seems like the most likely explanation at least to me.
My comment about
It seems like the Firefox options don't have any effect on the "popunder" behaviour you describe; the windows are never minimised in my Windows 10 instances. Given the general brokenness of Firefox on Windows, I'm inclined to try to clean up both things. As part of that, IMO we might as well also default to headless in this day and age, and avoid a whole range of twitches and focus issues (I'm fairly sure it would also improve the speed). Anyway, if tests are running consistently, I'd suggest merging this one and I'll begin writing up the follow-up work.
Please excuse me; I left two (now deleted) comments intended for another pull. Hopefully I beat the notifications.
See #32 for details.