
2023 Dependency updates (resolves #32). #33

Merged · 11 commits · Mar 14, 2023
Conversation

@duhrer (Collaborator) commented Jan 17, 2023

See #32 for details.


duhrer commented Jan 17, 2023

The rollup and functional component tests hang both in CI and in local testing. I will have to look into it tomorrow.


duhrer commented Jan 18, 2023

I experimented locally and eventually discovered that the major update to rimraf resulted in hanging tests (presumably during the cleanup phase after the first test). I will look at their docs and update if possible. From my memory of other projects, I suspect it's a matter of moving from callbacks to promises.


duhrer commented Jan 18, 2023

@amb26, if you could cast an eye on the promise-related changes, I would appreciate the help.

    });
    return rimrafPromise;
    var fluidPromise = fluid.promise();
    var rimrafPromise = rimraf(path, fluid.copy(rimrafOptions));
amb26 (Member):

Any particular reason we can't just return the upstream promise?

duhrer (Author):

I was a bit worried that I might have used an Infusion convention somewhere (i.e. calling reject), but the tests pass with this streamlined.

    error ? promise.reject(error) : promise.resolve();
    });
    var rimRafPromise = rimraf(resolvedPathToRemove);
    rimRafPromise.then(promise.resolve, promise.reject);
amb26 (Member):

Could we return the upstream promise?

duhrer (Author):

This also works streamlined.
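The two patterns under discussion can be sketched as follows. makeDeferred is a hypothetical self-contained stand-in for fluid.promise() (which returns an object carrying resolve/reject methods); fakeRimraf stands in for the real rimraf call.

```javascript
// Pattern 1: adapt the upstream promise into a wrapper promise, forwarding
// settlement via .then(resolve, reject). Pattern 2: just return the
// upstream promise directly (the "streamlined" version the review asked for).

// Hypothetical stand-in for fluid.promise().
function makeDeferred() {
    const deferred = {};
    deferred.promise = new Promise((resolve, reject) => {
        deferred.resolve = resolve;
        deferred.reject = reject;
    });
    return deferred;
}

// Pattern 1: wrap the upstream promise.
function removeWrapped(doRemove, target) {
    const deferred = makeDeferred();
    doRemove(target).then(deferred.resolve, deferred.reject);
    return deferred.promise;
}

// Pattern 2: return the upstream promise unchanged.
function removeDirect(doRemove, target) {
    return doRemove(target);
}

// Fake async remover standing in for rimraf.
const fakeRimraf = (target) => Promise.resolve("removed " + target);

Promise.all([
    removeWrapped(fakeRimraf, "coverage"),
    removeDirect(fakeRimraf, "reports")
]).then(([a, b]) => console.log(a, "/", b));
```

Both behave identically from the caller's side; the direct return simply avoids an extra allocation and an extra link in the rejection chain.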


duhrer commented Jan 19, 2023

@amb26, thanks for the quick review. I believe I've addressed your comments.


amb26 commented Feb 21, 2023

On running the tests in Windows I get the following failure:

e:\Source\gits\fluid-testem>npm run test

> fluid-testem@2.1.15 pretest e:\Source\gits\fluid-testem
> rimraf coverage/* reports/* instrumented/*

Error: Illegal characters in path.
    at pathArg (e:\Source\gits\fluid-testem\node_modules\rimraf\dist\cjs\src\path-arg.js:45:33)
    at e:\Source\gits\fluid-testem\node_modules\rimraf\dist\cjs\src\index.js:34:66
    at Array.map (<anonymous>)
    at e:\Source\gits\fluid-testem\node_modules\rimraf\dist\cjs\src\index.js:34:28
    at main (e:\Source\gits\fluid-testem\node_modules\rimraf\dist\cjs\src\bin.js:134:11)
    at Object.<anonymous> (e:\Source\gits\fluid-testem\node_modules\rimraf\dist\cjs\src\bin.js:143:5)
    at Module._compile (node:internal/modules/cjs/loader:1101:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12) {
  path: 'e:\\Source\\gits\\fluid-testem\\coverage\\*',
  code: 'EINVAL'
}

A quick search suggests it may be caused by this rimraf update?

isaacs/rimraf#251


duhrer commented Feb 21, 2023

Thanks for finding that issue. We already depend on glob so that we can customise the behaviour of istanbul-lib-instrument; we should be able to reuse that to work around the lack of globbing in later versions of rimraf.


duhrer commented Feb 21, 2023

It seems I use glob directly; this probably predated my work on fluid-glob. I will give that some thought.


duhrer commented Feb 21, 2023

As we (re)create the reports, coverage, and instrumented directories as part of the tests, we don't particularly need the wildcards. I have removed them, which should allow you to run the tests again.


amb26 commented Feb 21, 2023

Thanks for that fix, I can now start the test run OK on Windows.

One helpful tweak would be to adjust the testemOptions in harness.js to exclude IE as

    testemOptions: {
        skip: "Safari,PhantomJS,IE"
    },

It may well not be worth tracking down these remaining issues since they may be some idiosyncratic feature of my setup, but I get random and intermittent errors of this form:

13:58:51.051:  jq: FAIL: Module "Test the instrumentation functions." Test name "Testing 'safe rollup' with Testem." - Message: There should be no errors.
13:58:51.053:  jq: Expected: null
13:58:51.054:  jq: Actual: {
    "killed": false,
    "code": 1,
    "signal": null,
    "cmd": "node node_modules/testem/testem.js ci --file tests/rollup-fixtures/testem.js"
}
13:58:51.057:  jq: Source:     at Object.assertEquals (e:\Source\gits\fluid-testem\node_modules\infusion\tests\test-core\jqUnit\js\jqUnit.js:135:19)
14:08:08.445:  jq: Test concluded - Module "Testing coverage detection and reporting..." Test name "Running a suite of tests that results in incomplete coverage...": 5/6 passed - FAIL
14:10:20.073:  TESTEM ERROR:{
    "killed": false,
    "code": 1,
    "signal": null,
    "cmd": "node ../../node_modules/testem/testem.js ci --file ../testem-fixtures/coverage-fixtures/testem-no-coverage.js"
}
14:10:20.074:  jq: FAIL: Module "Testing coverage detection and reporting..." Test name "Running a suite of tests without test coverage..." - Message: There should be no errors running testem...
14:10:20.075:  jq: Source:     at pok (e:\Source\gits\fluid-testem\node_modules\infusion\tests\test-core\jqUnit\js\jqUnit.js:110:15)
    at Object.fail (e:\Source\gits\fluid-testem\node_modules\infusion\tests\test-core\jqUnit\js\jqUnit.js:127:13)
    at e:\Source\gits\fluid-testem\tests\js\testem-component-functional-tests.js:35:24
    at ChildProcess.exithandler (node:child_process:404:5)
    at ChildProcess.emit (node:events:394:28)
    at maybeClose (node:internal/child_process:1067:16)
    at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5)
14:10:20.077:  Removing dir 'C:\Users\Bosmon\AppData\Local\Temp\reports-3vgfam94-51'...
14:10:20.079:  Removed reports and coverage from this test run...
14:10:20.086:  jq: Test concluded - Module "Testing coverage detection and reporting..." Test name "Running a suite of tests without test coverage...": 4/5 passed - FAIL

I've done two runs, and one time got 80/82 and another time 80/83, which makes me suspect that no tests are actually failing; there is just some overall assertion for the testem invocation which is failing. I'm guessing the code 1 represents the process exit code, but I'm not sure what it signifies.
No problems if you are happy to merge this without investigating since these problems are unlikely to affect others.


duhrer commented Feb 21, 2023

Happy to exclude IE. Do me a favour and include the output of node node_modules/.bin/testem launchers so I can make sure I get the spelling (and any variations) right.

I also plan to reboot into Windows and try the tests there, but I don't have IE installed.


amb26 commented Feb 21, 2023

Happy to exclude IE. Do me a favour and include the output of node node_modules/.bin/testem launchers so I can make sure I get the spelling (and any variations) right.

Here it is:

Have 7 launchers available; auto-launch info displayed on the right.

Launcher          Type          CI  Dev
----------------  ------------  --  ---
Firefox           browser       x
Headless Firefox  browser       x
Chrome            browser       x
Headless Chrome   browser       x
Edge              browser       x
Headless Edge     browser       x
IE                browser       x


duhrer commented Feb 21, 2023

Cheers, I think I've tamed that hydra and prevented IE from launching. I am still hitting the bug I reported where Firefox doesn't close after a run; I assume you're running in headless mode?


duhrer commented Feb 21, 2023

My (Windows 10) machine had problems with Firefox when running in non-headless mode because of this issue I reported against testem, but once I added the environment variable to force "headless" mode, I was able to get three consistent runs with no failures.

Give the new settings a try and we can discuss whether what's here is enough for now. One option I considered is adding support for a browser environment variable, as we did with fluid-webdriver.


duhrer commented Feb 21, 2023

Or perhaps just a passthrough that ensures that the --launch command-line option is also supported in the tests.


amb26 commented Feb 22, 2023

Or perhaps just a passthrough that ensures that the --launch command-line option is also supported in the tests.

Not sure I quite follow this.
Does the HEADLESS env variable described in the README also work for Firefox?
I looked at your last commit and didn't see anything that seemed to affect Firefox but I noticed when doing this run that it started in a strange "popunder" mode - as well as having to close it manually, I now also have to bring it into visibility manually.
All the same, for some reason I got a successful test run - any ideas what you did to influence this?


amb26 commented Feb 22, 2023

Also I now have 81/81 tests passing, I'm puzzled since I didn't see an extra test being added ...


duhrer commented Feb 23, 2023

Also I now have 81/81 tests passing, I'm puzzled since I didn't see an extra test being added ...

There should be (and are) 81 tests run in a normal run. It is a bit concerning that a run could pass with fewer; I will have to give that some thought (perhaps for a future hardening pull, as IMO it's more related to a need for more work on the Windows side than to the changes here).

Does the HEADLESS env variable described in the README also work for Firefox?

My "headless" mode predates testem's support for headless browsers, and launches Chrome and Firefox with the command-line options that put them in headless mode. It does try to launch both.

I've been thinking of making a (separate) ticket to remove my headless support in favour of testem's built-in support for headless browsers. I'd make sure to interpret the existing HEADLESS environment variable as a request to launch all the headless browsers (and only those). That way we could better support a range of author intent; the current HEADLESS mode only works with the browsers that supported it a few years ago, and not the headless versions of newer Chrome variants. My idea is to make a minor version release, with a warning and migration instructions for people who make use of headlessBrowserArgs.
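A hypothetical sketch of that proposed behaviour: treat HEADLESS as a request to launch all (and only) the headless launchers. The function and the launcher names below are illustrative only, taken from the launcher list posted earlier in this thread.

```javascript
// Hypothetical: derive the launcher list from the HEADLESS env variable.
// When HEADLESS is set, pick only the "Headless ..." launchers; otherwise
// pick only the non-headless ones.
function pickLaunchers(available, env) {
    const headlessRequested = Boolean(env.HEADLESS);
    return available.filter((name) =>
        headlessRequested === name.startsWith("Headless"));
}

// Launcher names as reported by `testem launchers` earlier in this thread.
const available = [
    "Firefox", "Headless Firefox", "Chrome", "Headless Chrome",
    "Edge", "Headless Edge", "IE"
];

console.log(pickLaunchers(available, { HEADLESS: "true" }).join(","));
console.log(pickLaunchers(available, {}).join(","));
```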

I looked at your last commit and didn't see anything that seemed to affect Firefox but I noticed when doing this run that it started in a strange "popunder" mode - as well as having to close it manually, I now also have to bring it into visibility manually.

That suggests to me that you're not running in headless mode. I think I have seen the same thing on Windows 10. It may be down to the --no-remote option we use for Firefox in non-headless mode, I'll have to test that.

All the same, for some reason I got a successful test run - any ideas what you did to influence this?

In my testing, IE would launch without running tests or closing, and would block the test run until you closed the window. The last change I made avoids this, which seems like the most likely explanation at least to me.

Not sure I quite follow this.

My comment about --launch is related to the command-line option that testem itself provides to override the configuration file and only test with a particular browser (there are other options to skip one or more browsers). It would be useful for things like quickly running tests against a single browser; on a setup like mine, there are five Chrome variants that show up, and I often just want to test one of them when quickly verifying changes. I would make a separate pull to add this support.
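The passthrough could look something like the following hypothetical sketch: read a --launch option from the harness's own command line and copy it into the testemOptions block, overriding the configured launchers. The function name and option-merging shape here are assumptions, not the project's actual API.

```javascript
// Hypothetical: merge a --launch command-line option into testemOptions,
// supporting both "--launch Chrome" and "--launch=Chrome" spellings.
function applyLaunchOption(argv, testemOptions) {
    const options = Object.assign({}, testemOptions);
    for (let i = 0; i < argv.length; i++) {
        if (argv[i] === "--launch" && argv[i + 1]) {
            options.launch = argv[i + 1];
        } else if (argv[i].startsWith("--launch=")) {
            options.launch = argv[i].slice("--launch=".length);
        }
    }
    return options;
}

const merged = applyLaunchOption(
    ["node", "harness.js", "--launch=Headless Chrome"],
    { skip: "Safari,PhantomJS,IE" }
);
console.log(JSON.stringify(merged));
```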


duhrer commented Feb 23, 2023

It seems the Firefox options don't have any effect on the "popunder" behaviour you describe; the windows are never minimised in my Windows 10 instance.

Given the general brokenness of Firefox on Windows, I'm inclined to try to clean up both things. As part of that, IMO we might as well also default to headless in this day and age, and avoid a whole range of twitches and focus issues (I'm fairly sure it would also improve the speed).

Anyway, if tests are running consistently, I'd suggest merging this one and I'll begin writing up the follow-up work.


duhrer commented Mar 2, 2023

Apologies: I left two (now deleted) comments intended for another pull; hopefully I beat the notifications.
