fix: No unnecessary snapshotting #19311
Conversation
Thanks for taking the time to open a PR!
context('implicit assertions', () => {
  // https://github.com/cypress-io/cypress/issues/18549
  it('only snapshots once when failing to find DOM elements', (done) => {
    sinon.spy(cy, 'createSnapshot')
You can work off of the event that Cypress fires when it retries in order to count the number of retries that happened:
- on the first retry, spy on cy.createSnapshot
- then make sure that after it fails, there was only a single call to cy.createSnapshot
- and that the retry count was >1

That would tell you it took a snapshot on the final "error" state and not during the interstitial retries, and that it did go through the retry loop multiple times.

It's like... cy.on('retry', ...) or cy.on('command:retry', ...) - I can't remember, but there are other tests that use this event to count things.

That'll give you a consistent and deterministic way to ensure that it both retried AND only created 1 snapshot, and that test should obviously fail if we reverted the fix.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Reworked this into a more generic utility listening to Cypress events, and added an event for snapshotting since we didn't already have one.

I also split the test into two pieces: one which precisely pinpoints the problem (a failing DOM assertion takes three snapshots outside the context of retries, two of them because of ensureElExistence), and one which demonstrates it more dramatically (these ensureElExistence calls repeat with every retry, resulting in hundreds of snapshots).
packages/driver/cypress/integration/commands/assertions_spec.js
Could be a bit off here on the flow, so if my comments seem way off, sorry in advance!
Is this true when retries are set to 0 in the configuration? Does your fix handle this scenario?
// snapshot the DOM since we are bailing so the user can see the state we're in
// when we fail
- if (log) {
-   log.snapshot()
- }
  const assertions = options.assertions

  if (assertions) {
    finishAssertions(assertions)
  }
+ else if (log) {
+   log.snapshot()
+ }

+ // by default the retries handler will take the snapshot for us
+ const finishAssertions = (assertions, shouldTakeSnapshot = false) => {
- const finishAssertions = (assertions) => {
-   return _.each(assertions, (log) => {
+   if (shouldTakeSnapshot) {
      log.snapshot()

Then updating this case to pass in "yes, should take snapshot".
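Pulled together, the refactored helper in the diff reads roughly like this (a sketch, not the driver's exact code; a plain forEach stands in for lodash's _.each, and the fake log objects are illustrative):

```javascript
// Snapshotting is now opt-in: by default the retries handler takes the
// snapshot, and only the "bail" path passes shouldTakeSnapshot = true.
const finishAssertions = (assertions, shouldTakeSnapshot = false) => {
  assertions.forEach((log) => {
    if (shouldTakeSnapshot) {
      log.snapshot()
    }
  })
}

let snapshots = 0
const logs = [
  { snapshot: () => snapshots++ },
  { snapshot: () => snapshots++ },
]

finishAssertions(logs) // default path: no snapshots taken
console.log(snapshots) // 0

finishAssertions(logs, true) // the bail path opts in
console.log(snapshots) // 2
```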
@@ -1,6 +1,39 @@
const { assertLogLength } = require('../../support/utils')
const { $, _ } = Cypress

const captureCommands = () => {
Do we not manage / track the number of snapshots taken during a test? I assumed this data was reported to the dashboard?
These DOM snapshots are the ones visible when you hover over test steps, showing how the application looked at that point in time. They're very ephemeral - we only take them in open mode, and they live in the browser running the tests; they're not even sent to the Cypress server. We discard the snapshots regularly (if the number of tests run is greater than numTestsKeptInMemory, whenever we rerun tests, etc).

We didn't even have an internal event for counting them until I added one in this PR (Cypress.action('cy:snapshot', name)) so this utility could listen for it.
Yeah, the key here is that I'm not doing something new - I'm covering another case, doing the same thing we were already doing in the other location.
or it can return on line 490:
When there are upcoming commands in the queue - when we're verifying assertions like .should('have.class', 'foo') - we override the assert function in a way that passes in a verifying flag.

So to try and put it in English a bit better: in general, when an assertion fails, we want to take a snapshot, so we can show the users what went wrong. But when we're in the middle of verifying upcoming assertions, a failure isn't final - the command may still retry - so we don't want to snapshot yet. That's why that flag matters here.
I'll update the PR description here - I wrote it based on an earlier misunderstanding of the bug. It's what the first test I added, with {timeout: 0}, demonstrates. Retries are not the cause of the bug, they just make it way more obvious. Setting {timeout: 0} in that first test ensures that no retries occur - but if you run that test against develop (or comment out the code additions in this branch), you'll see that three snapshots were occurring even though there were 0 retries.

Yeah, my initial attempt to fix this was in the retries logic - but that's why the first test is so important. It demonstrates the problem outside the context of retries. If we're triple snapshotting without retries, then making sure we only do three snapshots is still not the correct solution - there should only be one! I really should have updated the description before sending this over.
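The gating described above can be sketched as follows (an assumed shape, not the driver's real function; onAssertionFailed and the flag name are hypothetical, illustrating only the decision logic):

```javascript
// When an assertion fails while we're still verifying upcoming assertions,
// skip the snapshot, because the command may retry; only the final,
// non-verifying failure snapshots the DOM.
function onAssertionFailed (log, isVerifyingUpcomingAssertions) {
  if (isVerifyingUpcomingAssertions) {
    return 'retry' // interstitial failure: no snapshot, let retries continue
  }

  log.snapshot() // final failure: show the user the DOM state at the error

  return 'fail'
}

let snapshots = 0
const log = { snapshot: () => snapshots++ }

onAssertionFailed(log, true) // retrying: no snapshot
onAssertionFailed(log, true) // retrying: no snapshot
onAssertionFailed(log, false) // final failure: exactly one snapshot

console.log(snapshots) // 1
```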
Succinct solution to a deep problem. Thanks for the detailed description in the comment; without that I'd have had no idea what this was doing.
ed7a776
// if we are simply verifying the upcoming
// assertions then do not immediately end or snapshot
// else do
// `verifying` represents whether we're deciding whether or not to resolve
Maybe out of scope, but thoughts on renaming verifying to something a little more self-explanatory? An idea might be isVerifyingUpcomingAssertions (the terminology you emphasized in your log description).
I think I'd like to save renaming things for a later date. My real goal would be to make a flowchart of the logic and consider a refactor of this area of code - Brian spoke to some regrets that we didn't just re-implement parts of Chai, rather than weaving in and out of 3rd party code and keeping track of state internally to make it behave as we want.
…rms of assertions
- Call a method on testCommands which returns an immutable command state to assert on, in order to prevent those assertions from mutating the command state.
- Try out chai-match-pattern to create more flexibility in the assertions of the command state.
Both done in latest commit. Turns out we already have
Mind expanding the User Facing Changelog a bit to clarify what you mean by "churn"?
Tweaked the wording slightly, but not sure how to make it clearer: taking fewer snapshots means less CPU usage. Creating a copy of the DOM in memory is expensive; doing it less frequently is cheaper. The difference is very noticeable - running the test revs up the fans on my laptop on develop, and does so much less on this PR branch.
@BlueWinds Nice job on the changelog update. 👍🏻 I think it is clearer to an end-user what this change means for them.
Dismissing Brian's review since he paired with Blue on this.
User facing changelog

Bugfix: Prevent unnecessary snapshotting when running default assertions, which would increase CPU use in cypress open mode and lead to out-of-memory crashes in certain browsers.

Additional details

Ensure that we don't snapshot while verifying implicit assertions, the same way that we don't snapshot when verifying upcoming explicit assertions.
How has the user experience changed?
Improved performance, reducing both CPU and memory usage while waiting for assertions to pass. In certain versions of Chromium / Chrome / Electron, this also fixes a serious memory leak for failing tests.
PR Tasks

- cypress-documentation?
- type definitions?
- cypress.schema.json?