
[Bug]: Regression in detectOpenHandles for DNSCHANNEL #14788

Closed
stephenh opened this issue Dec 27, 2023 · 4 comments · Fixed by #14789
Comments

@stephenh

Version

v30.0.0-alpha.2

Steps to reproduce

I'm using knex in a project that gets false positives for open handles, but it can be reproduced with this test:

import { promises } from "dns";

const { Resolver } = promises;
const resolver = new Resolver();
const servers = resolver.getServers();
console.log(servers);

describe("foo", () => {
  it("works", () => {});
});

Expected behavior

I expected jest --detectOpenHandles to not report anything, because the above code doesn't hang Jest; it actually exits cleanly.

Actual behavior

Jest does exit cleanly, but shows an error about an open handle:

 PASS  src/foo.test.ts (31 MB heap size)
  foo
    ✓ works (1 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        0.094 s, estimated 1 s
Ran all test suites matching foo.test.ts.

Jest has detected the following 1 open handle potentially keeping Jest from exiting:

  ●  DNSCHANNEL

      3 | const { Resolver } = promises;
      4 | const resolver = new Resolver();
    > 5 | const servers = resolver.getServers();
        |              ^
      6 | console.log(servers);
      7 |
      8 | describe("foo", () => {

Additional context

It looks like the DNSCHANNEL false positive was originally fixed here:

https://github.com/jestjs/jest/pull/11470/files

But then a refactor of detectOpenHandles (to dedupe stack traces) regressed the behavior, apparently by removing that check as a speculative cleanup separate from the deduping itself:

https://github.com/jestjs/jest/pull/13417/files

My reproduction is really just a copy/paste of #11470's reproduction from #9982.

From the conversation in https://github.com/jestjs/jest/pull/13417/files#r990874803, it seems like @liuxingbaoyu may not have understood why these types were being ignored, and more than just DNSCHANNEL may need to be added back to the "if type, then early return" check.
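
For reference, a minimal sketch (not Jest's actual collectHandles code) of what that kind of early return looks like in an async_hooks init callback; the IGNORED_TYPES set here is hypothetical and the real list of ignored handle types may differ:

import { createHook } from "async_hooks";

// Hypothetical ignore list; the real set in collectHandles may differ.
const IGNORED_TYPES = new Set(["PROMISE", "TIMERWRAP", "DNSCHANNEL"]);

const activeHandles = new Map<number, { type: string; stack: string }>();

createHook({
  init(asyncId, type) {
    // The early return under discussion: never track these handle types,
    // since they don't keep the process from exiting on their own.
    if (IGNORED_TYPES.has(type)) return;
    activeHandles.set(asyncId, { type, stack: new Error().stack ?? "" });
  },
  destroy(asyncId) {
    activeHandles.delete(asyncId);
  },
}).enable();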

Also cc @Mr0grog who fixed the original DNSCHANNEL false positive.

Environment

System:
    OS: Linux 6.2 Ubuntu 23.04 23.04 (Lunar Lobster)
    CPU: (24) x64 12th Gen Intel(R) Core(TM) i9-12900K
  Binaries:
    Node: 20.6.1 - ~/.asdf/installs/nodejs/20.6.1/bin/node
    Yarn: 3.6.3 - ~/.asdf/installs/nodejs/20.6.1/bin/yarn
    npm: 10.2.1 - ~/other/joist/node_modules/.bin/npm
    bun: 1.0.4 - ~/.bun/bin/bun
stephenh added a commit to joist-orm/joist-orm that referenced this issue Dec 27, 2023
@liuxingbaoyu
Contributor

liuxingbaoyu commented Dec 28, 2023

Actually, the difference lies here: https://github.com/jestjs/jest/pull/13417/files#r991756357

Some people use detectOpenHandles to detect resources that have not been cleaned up, while most people use it to detect what is preventing the process from exiting.
My initial PR was for the former.
Now that I realize gc() doesn't collect them 100% of the time, and that this causes behavior changes, I will open a PR later to restore this.

By the way, if you remove the reference to servers, it will not be reported. (But this doesn't seem to be 100% reliable, although I learned from the nodejs repository that v8.writeHeapSnapshot() is more aggressive than gc().)

import { promises } from "dns";

const { Resolver } = promises;
const resolver = new Resolver();
// Typed as nullable so the reference can be dropped explicitly below.
let servers: string[] | null = resolver.getServers();
console.log(servers);
servers = null;

describe("foo", () => {
  it("works", () => {});
});
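
Not from the thread, but a minimal sketch of one way to observe whether dropping the reference actually lets the resolver be collected, assuming Node is started with --expose-gc; the WeakRef/gc usage here is purely illustrative, not how Jest checks this:

import { promises } from "dns";
import { writeHeapSnapshot } from "v8";

// Run with --expose-gc so globalThis.gc is available.
function createResolver() {
  const resolver = new promises.Resolver();
  console.log(resolver.getServers());
  // Only a weak reference escapes this scope, so the resolver stays collectable.
  return new WeakRef(resolver);
}

async function main() {
  const ref = createResolver();

  (globalThis as { gc?: () => void }).gc?.(); // best effort; not guaranteed to reclaim it
  await new Promise((resolve) => setTimeout(resolve, 30));
  console.log("collected after gc():", ref.deref() === undefined);

  // Per the observation above, taking a heap snapshot forces a more thorough
  // collection as a side effect (it does write a .heapsnapshot file to disk).
  writeHeapSnapshot();
  console.log("collected after writeHeapSnapshot():", ref.deref() === undefined);
}

void main();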

@Mr0grog
Contributor

Mr0grog commented Dec 28, 2023

But then a refactor of detectOpenHandles (to dedupe stack traces) regressed the behavior (by removing the check as a potentially speculative clean up separate from deduping the stack traces?): #13417

Oh, yeah, this seems surprising to me, too. That said, I like what that change did to surface multiple open handles from the same stack trace! While working on #11470 I made a note suggesting something similar, but never had time to get around to it: https://github.com/jestjs/jest/pull/11470/files#r641328430

Some people use detectOpenHandles to detect resources that have not been cleaned up, while most people use it to detect what is preventing the process from exiting.
My initial PR was for the former.

Yeah… I guess I think of its purpose as being specifically about the latter and not about the former, which is why I made the changes in #11470. The --detectOpenHandles docs seem to suggest the same thing:

Attempt to collect and print open handles preventing Jest from exiting cleanly. Use this in cases where you need to use --forceExit in order for Jest to exit to potentially track down the reason.

One of the comments on #13417 suggests the same thing: https://github.com/jestjs/jest/pull/13417/files#r991756357

Now since I realize that gc() doesn't collect them 100% of the time, and this causes behavior changes, I will open a PR later to restore this… I learned from the nodejs repository that v8.writeHeapSnapshot() is more aggressive than gc()

@liuxingbaoyu This is probably not only about GC. In #11429 I added an asyncSleep() call because some resources don’t get cleaned up (not just garbage collected, but not actually released by Node.js internals) until after callbacks from user code run. Since the user code might trigger other things that take some time, it might take a while before some handles that are released as far as JS code is concerned are actually released as far as Node.js internals are concerned.

You shortened that sleep from 100 ms to two sleeps (first 0 ms, then 30 ms). I saw a lot of cases where 0 was not enough (see this change where I realized that: 0fab1e4), but I don’t remember what timings I was typically seeing in my experiments. 30 might be fine, or it might be a bit too short.

(It’s also possible I am remembering something wrong here; this was a long time ago!)
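
As an illustration of the pattern being discussed, a minimal sketch assuming a placeholder asyncSleep helper and a stand-in collectSnapshot function (neither is Jest's exact code): yield to the event loop so Node.js internals can finish releasing handles, and only report what is still alive afterwards; the 0 ms and 30 ms values mirror the timings mentioned above.

// Placeholder helper; Jest has its own asyncSleep implementation.
const asyncSleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms).unref());

// collectSnapshot is a stand-in for however active handles are gathered.
async function openHandlesAfterSettling(
  collectSnapshot: () => Set<unknown>,
): Promise<Set<unknown>> {
  // Give queued callbacks a chance to run first (0 ms), since some handles
  // are only released by Node.js internals after user callbacks complete.
  await asyncSleep(0);

  if (collectSnapshot().size > 0) {
    // Something is still alive: wait a little longer (30 ms in the current
    // code, 100 ms originally), then nudge the GC before the final check.
    await asyncSleep(30);
    (globalThis as { gc?: () => void }).gc?.();
  }

  return collectSnapshot();
}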

@liuxingbaoyu
Contributor

Thank you for your extremely detailed explanation!

This is probably not only about GC. In #11429 I added an asyncSleep() call because some resources don’t get cleaned up (not just garbage collected, but not actually released by Node.js internals) until after callbacks from user code run. Since the user code might trigger other things that take some time, it might take a while before some handles that are released as far as JS code is concerned are actually released as far as Node.js internals are concerned.

You shortened that sleep from 100 ms to two sleeps (first 0 ms, then 30 ms). I saw a lot of cases where 0 was not enough (see this change where I realized that: 0fab1e4), but I don’t remember what timings I was typically seeing in my experiments. 30 might be fine, or it might be a bit too short.

Yes! I forgot about it; some Node.js internal objects need to go through a few event loop turns before they are destroyed.
30 ms is just a value I estimated, because now if there are still resources that have not been destroyed after 30 ms, it will run a gc.
:)


github-actions bot commented Feb 1, 2024

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 1, 2024