
exclude ignore_vcs dirs from ignore_dirs to avoid breaking cookies #17

Closed

jimpurbrick opened this issue Dec 20, 2013 · 9 comments

@jimpurbrick

56fcdba introduced a significant performance regression on my system (OS X 10.8.5 (12F45)) causing a since query to take over a minute:

jimp-mbp1:watchman jimp$ time echo '["query", "/Users/jimp/devtools/buck", {"since": "n:buckd", "fields":["name","exists","new"]}]' | watchman --server-encoding=json --no-pretty -j > /dev/null

real 1m0.006s
user 0m0.003s
sys 0m0.002s

The same query using e0762ed on the same machine takes a fraction of a second:

jimp-mbp1:watchman jimp$ time echo '["query", "/Users/jimp/devtools/buck", {"since": "n:buckd", "fields":["name","exists","new"]}]' | watchman --server-encoding=json --no-pretty -j > /dev/null

real 0m0.018s
user 0m0.004s
sys 0m0.003s

The regression seems to occur whether or not I have a .watchmanconfig file in the root of the watched tree.

@wez
Contributor

wez commented Dec 20, 2013

The since query is slow? Did you change any files before running the since query, or is it slow as soon as you start it up? What if you query it a few minutes after it started up?

@jimpurbrick
Author

With 56fcdba or later the command is slow every time I run it: as soon as I start up and every subsequent time. I haven't tried other queries as this is the only query I need to run, but watchman -v is fast before and after 56fcdba.

@wez
Contributor

wez commented Dec 20, 2013

Is there anything churning through and changing files in your buck working copy while you're running the since query?

This is what I see with a non-built buck tree:

$ time echo '["query", "/Users/wez/fb/buck", {"since": "n:w00t", "fields":["name","exists","new"]}]' | ~/fb/watchman-pure/watchman --server-encoding=json --no-pretty -j > /dev/null
echo   0.00s user 0.00s system 34% cpu 0.002 total
~/fb/watchman-pure/watchman --server-encoding=json --no-pretty -j > /dev/null  0.00s user 0.00s system 0% cpu 0.656 total

$ time echo '["query", "/Users/wez/fb/buck", {"since": "n:w00t", "fields":["name","exists","new"]}]' | ~/fb/watchman-pure/watchman --server-encoding=json --no-pretty -j > /dev/null
echo   0.00s user 0.00s system 28% cpu 0.002 total
~/fb/watchman-pure/watchman --server-encoding=json --no-pretty -j > /dev/null  0.00s user 0.00s system 28% cpu 0.019 total

The code that changed should only impact how quickly we can translate fsevents items into change nodes.
The only way that I can think of that this might somehow influence the query speed is if there is a continual stream of changes coming in and if there is lock contention here:

watchman/root.c, line 1821 in 56fcdba:

    pthread_mutex_lock(&root->fse_mtx);

I'm thinking of putting in:

if (!head) {
  return;
}

ahead of that lock acquisition to avoid locking when we don't need to, but I don't know that this will fundamentally change what you're seeing. Also: this wasn't changed in 56fcdba.
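
For context, a minimal sketch of how that guard would sit relative to the lock; the types and function below are invented for illustration (only `head`, the early return, and `root->fse_mtx` come from root.c):

```c
#include <pthread.h>

/* Stand-in types for illustration; watchman's real structs differ. */
struct fse_item {
  struct fse_item *next;
};
struct watchman_root {
  pthread_mutex_t fse_mtx;
  struct fse_item *fse_head;
};

/* Hypothetical consumer: with the early-out guard, an idle pass never
 * touches root->fse_mtx, so another thread that needs the lock cannot
 * be held up by a no-op drain. The unlocked read of fse_head is assumed
 * benign here: an item that arrives just after the check is simply
 * picked up on the next pass. */
static void drain_fsevents(struct watchman_root *root) {
  struct fse_item *head = root->fse_head;
  if (!head) {
    return; /* nothing queued: skip taking the lock at all */
  }
  pthread_mutex_lock(&root->fse_mtx);
  /* ... splice the pending items off the queue and process them ... */
  pthread_mutex_unlock(&root->fse_mtx);
}
```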

@jimpurbrick
Author

Looks like this does have something to do with .watchmanconfig:

jimp-mbp1:buck jimp$ time echo '["query", "/Users/jimp/devtools/buck", {"since": "n:buckd", "fields":["name","exists","new"]}]' | watchman --server-encoding=json --no-pretty -j > /dev/null

real 0m0.019s
user 0m0.004s
sys 0m0.003s
jimp-mbp1:buck jimp$ mv .watchmanconfig.renamed .watchmanconfig
jimp-mbp1:buck jimp$ watchman shutdown-server
1387569637: tid=2033447296 failed to identify PDU: fill_buffer: No such file or directory
1387569638: tid=2033447296 failed to identify PDU: fill_buffer: Operation timed out
1387569638: tid=2033447296 unable to talk to your watchman!
jimp-mbp1:buck jimp$ time echo '["query", "/Users/jimp/devtools/buck", {"since": "n:buckd", "fields":["name","exists","new"]}]' | watchman --server-encoding=json --no-pretty -j > /dev/null

real 1m1.008s
user 0m0.004s
sys 0m0.003s

(this is with HEAD)

@jimpurbrick
Author

This is the contents of my .watchmanconfig:
{
  "ignore_dirs": [
    ".git",
    ".buckd",
    "build",
    "buck-out",
    "src/com/facebook/buck/cli/quickstart",
    "test/com/facebook/buck/cli/testdata",
    "test/com/facebook/buck/java/testdata",
    "test/com/facebook/buck/junit/testdata",
    "test/com/facebook/buck/parser/testdata",
    "test/com/facebook/buck/shell/testdata",
    "third-party/java/dalvik-jelly-bean"
  ]
}

@wez
Contributor

wez commented Dec 20, 2013

You shouldn't ignore .git, as doing so can break cookie detection; maybe that's the root of the problem?
I'll dig into this a bit later, but maybe just removing that one line fixes it for you?
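
For reference, the config above with just the `.git` entry dropped (everything else unchanged):

```json
{
  "ignore_dirs": [
    ".buckd",
    "build",
    "buck-out",
    "src/com/facebook/buck/cli/quickstart",
    "test/com/facebook/buck/cli/testdata",
    "test/com/facebook/buck/java/testdata",
    "test/com/facebook/buck/junit/testdata",
    "test/com/facebook/buck/parser/testdata",
    "test/com/facebook/buck/shell/testdata",
    "third-party/java/dalvik-jelly-bean"
  ]
}
```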

@jimpurbrick
Author

Looks like that was it, thanks!

@wez
Contributor

wez commented Dec 21, 2013

Sounds like watchman should sanity-check that you're not ignoring VCS dirs, or more specifically, that you're not ignoring the selected cookie dir.

@wez
Contributor

wez commented Aug 7, 2014

I think the behavior here should be: if a dir is listed in both `ignore_vcs` and `ignore_dirs`, `ignore_dirs` takes precedence and the dir is removed from the effective value of `ignore_vcs` for the root.

This will allow someone to explicitly ignore a dir and not break the cookie mechanism.
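
A rough sketch of that precedence rule (a hypothetical helper, not watchman's actual implementation; watchman stores these lists differently):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool in_list(const char **list, size_t n, const char *name) {
  for (size_t i = 0; i < n; i++) {
    if (strcmp(list[i], name) == 0) {
      return true;
    }
  }
  return false;
}

/* Keep only the ignore_vcs entries that are NOT also in ignore_dirs;
 * the survivors are the dirs still eligible to host the cookie. */
static size_t effective_ignore_vcs(
    const char **ignore_vcs, size_t n_vcs,
    const char **ignore_dirs, size_t n_dirs,
    const char **out) {
  size_t kept = 0;
  for (size_t i = 0; i < n_vcs; i++) {
    if (!in_list(ignore_dirs, n_dirs, ignore_vcs[i])) {
      out[kept++] = ignore_vcs[i];
    }
  }
  return kept;
}
```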

wez added a commit that referenced this issue Aug 8, 2014
Summary:
facebook/buck#170 (comment) refers
to issues watching .svn dirs and requests a way to ignore those.

facebook/buck#170 (comment)
demonstrates that someone got bitten by queries stopping working when
they were experimenting with ignores.

While I don't think ignoring .git or .hg makes a big difference in the
grand scheme (the default ignore_vcs works very well on our large repos,
and noticing VCS activity is an important part of filesystem settle
detection for triggers), it sounds like doing so for .svn is more of an
issue.

What this diff does:

if a dir is listed in both `ignore_dirs` and `ignore_vcs`, we'll take
the `ignore_dirs` placement as definitive, and elide that dir from the
effective value of `ignore_vcs`.

This means that it will no longer be a candidate location for the cookie
dir and that queries will continue to work just fine.

Addresses #17

Test Plan: `make integration`

Reviewers: fsilva, jimp, sid0

Reviewed By: sid0

Differential Revision: https://phabricator.fb.com/D1484322
wez closed this as completed Aug 8, 2014
wez added a commit that referenced this issue Apr 26, 2015
Summary:
looking at #96 (comment)

we see a trace like:

```
thread #3: tid = 0x16a58c, 0x000000010438cdc0 watchman`w_ht_del
frame #0: 0x000000010438cdc0 watchman`w_ht_del
frame #1: 0x000000010439aa40 watchman`age_out_dir + 144
frame #2: 0x000000010439aa28 watchman`age_out_dir + 120
frame #3: 0x0000000104398738 watchman`age_out_file + 248
frame #4: 0x000000010439a9ee watchman`age_out_dir + 62
frame #5: 0x0000000104398738 watchman`age_out_file + 248
frame #6: 0x000000010439a9ee watchman`age_out_dir + 62
frame #7: 0x0000000104398738 watchman`age_out_file + 248
frame #8: 0x000000010439a9ee watchman`age_out_dir + 62
frame #9: 0x0000000104398738 watchman`age_out_file + 248
frame #10: 0x000000010439a9ee watchman`age_out_dir + 62
frame #11: 0x0000000104398738 watchman`age_out_file + 248
frame #12: 0x000000010439a9ee watchman`age_out_dir + 62
frame #13: 0x0000000104398738 watchman`age_out_file + 248
frame #14: 0x000000010439a9ee watchman`age_out_dir + 62
frame #15: 0x0000000104398738 watchman`age_out_file + 248
frame #16: 0x000000010439a9ee watchman`age_out_dir + 62
frame #17: 0x0000000104398738 watchman`age_out_file + 248
```

In cases where we have a large number of files that are eligible to age
out across a reasonable number of dirs, we can walk the same portions of
the tree we maintain in memory multiple times.

This diff records the names of the dir nodes for which we would previously
have recursively called age_out_dir(); we use a hash to de-duplicate the names.

After we have walked all eligible file nodes we then iterate the list
of saved dirs and delete them.

This reduces the complexity of age out processing to a single scan of the file
node list followed by a single scan of the list of dirs that we accumulated in
the file node scan.
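
Schematically (a toy sketch of the two-phase idea; the real change uses watchman's w_ht hash table and operates on in-memory dir nodes rather than strings):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_AGED_DIRS 1024

/* Phase 1: a single scan over the aged-out file nodes, recording each
 * parent dir name at most once (a linear de-dup stands in for the hash). */
static size_t collect_aged_dirs(const char **parent_dirs, size_t n_files,
                                const char **dirs_out) {
  size_t n_dirs = 0;
  for (size_t i = 0; i < n_files; i++) {
    bool seen = false;
    for (size_t j = 0; j < n_dirs; j++) {
      if (strcmp(dirs_out[j], parent_dirs[i]) == 0) {
        seen = true;
        break;
      }
    }
    if (!seen && n_dirs < MAX_AGED_DIRS) {
      dirs_out[n_dirs++] = parent_dirs[i];
    }
  }
  return n_dirs;
}

/* Phase 2: delete each recorded dir exactly once, instead of recursing
 * age_out_dir()/age_out_file() through the same subtrees repeatedly. */
static void delete_aged_dirs(const char **dirs, size_t n_dirs,
                             void (*delete_dir)(const char *)) {
  for (size_t i = 0; i < n_dirs; i++) {
    delete_dir(dirs[i]);
  }
}
```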

Test Plan:
`arc unit tests/integration/age.php` exercises this specifically.  also: `make integration`

a more rigorous test on our www repo:

```
hg up -C master~40000 ; sleep 10 ; hg up -C master ; sleep 10 ; time watchman debug-ageout . 1
hg up -C master~40000 ; sleep 10 ; hg up -C master ; sleep 10 ; time watchman debug-ageout . 1
```

(it's important to run it twice in succession because we had a long-standing
bug that we hadn't noticed until now!)

The time portion of this outputs:

```
watchman debug-ageout . 1  0.00s user 0.00s system 2% cpu 0.157 total
```

which is a good bit better than 20+ minutes.

Refs: #96

Reviewers: sid0

Reviewed By: sid0

Differential Revision: https://reviews.facebook.net/D37683
facebook-github-bot pushed a commit that referenced this issue Oct 7, 2022
Summary:
We have seen a deadlock running `terminationHandler` -> `hasSubscribers` in 2 threads.
It's unclear which other thread is holding the lock.

To make things easier to debug next time, let's change terminationHandler (and
also main.cpp) to bypass the logging lock and write to stderr directly.
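
The pattern, roughly (a hedged sketch in C; watchman's handler is C++, and the exact message formatting is omitted):

```c
#include <string.h>
#include <unistd.h>

/* On the termination path, avoid any logger that may take a lock:
 * write(2) the message straight to stderr. write() is async-signal-safe
 * and takes no user-space locks, so it cannot deadlock against a logging
 * mutex held by another thread. */
static void write_stderr_raw(const char *msg) {
  /* A short write or error is ignored: there is no safe recovery here. */
  (void)write(STDERR_FILENO, msg, strlen(msg));
}
```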

Related threads (all threads in P536343453):

  Thread 11 (LWP 3275661):
  #0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  #1  0x0000000001cc995b in folly::detail::(anonymous namespace)::nativeFutexWaitImpl (addr=<optimized out>, expected=<optimized out>, absSystemTime=<optimized out>, absSteadyTime=<optimized out>, waitMask=<optimized out>) at fbcode/folly/detail/Futex.cpp:126
  #2  folly::detail::futexWaitImpl (futex=0x89, futex@entry=0x7f1c3ac2ef90, expected=994748889, absSystemTime=absSystemTime@entry=0x0, absSteadyTime=<optimized out>, absSteadyTime@entry=0x0, waitMask=waitMask@entry=1) at fbcode/folly/detail/Futex.cpp:254
  #3  0x0000000001d34bce in folly::detail::futexWait<std::atomic<unsigned int> > (futex=0x7f1c3ac2ef90, expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/detail/__futex__/headers/folly/detail/Futex-inl.h:96
  #4  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever::doWait (this=<optimized out>, futex=..., expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:718
  #5  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::futexWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1184
  #6  0x0000000001cd42b2 in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::yieldWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1151
  #7  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::waitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1109
  #8  0x0000000001e7e14c in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1664
  #9  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1356
  #10 folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lock_shared (this=0x7f1c3ac2ef90) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:495
  #11 std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >::shared_lock (this=<optimized out>, __m=...) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/shared_mutex:727
  #12 0x0000000002d765fd in folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::doLock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>, std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0>, 0> (mutex=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1493
  #13 folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::LockedPtr (this=0x7f1c149f8928, parent=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1272
  #14 folly::SynchronizedBase<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, (folly::detail::SynchronizedMutexLevel)2>::rlock (this=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:229
  #15 watchman::Publisher::hasSubscribers (this=<optimized out>) at fbcode/watchman/PubSub.cpp:117
  #16 0x0000000002eca798 in watchman::Log::log<char const (&) [39], char const*, char const (&) [3]> (this=<optimized out>, level=level@entry=watchman::ABORT, args=..., args=..., args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:42
  #17 0x0000000002ec9ba7 in watchman::log<char const (&) [39], char const*, char const (&) [3]> (level=watchman::ABORT, args=..., args=..., args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:121
  #18 (anonymous namespace)::terminationHandler () at fbcode/watchman/SignalHandler.cpp:159
  #19 0x00007f1c3b0c7b3a in __cxxabiv1::__terminate (handler=<optimized out>) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
  #20 0x00007f1c3b0c7ba5 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:58
  #21 0x0000000001c38c8b in __clang_call_terminate ()
  #22 0x0000000003284c9e in folly::detail::terminate_with_<std::runtime_error, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&> (args=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/lang/__exception__/headers/folly/lang/Exception.h:93
  #23 0x0000000003281bae in folly::terminate_with<std::runtime_error, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&> (args=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/lang/__exception__/headers/folly/lang/Exception.h:123
  #24 folly::SingletonVault::fireShutdownTimer (this=<optimized out>) at fbcode/folly/Singleton.cpp:499
  #25 0x0000000003281ad9 in folly::(anonymous namespace)::fireShutdownSignalHelper (sigval=...) at fbcode/folly/Singleton.cpp:454
  #26 0x00007f1c3b42b939 in timer_sigev_thread (arg=<optimized out>) at ../sysdeps/unix/sysv/linux/timer_routines.c:55
  #27 0x00007f1c3b41fc0f in start_thread (arg=<optimized out>) at pthread_create.c:434
  #28 0x00007f1c3b4b21dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

  ...

  Thread 1 (LWP 3201992):
  #0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  #1  0x0000000001cc995b in folly::detail::(anonymous namespace)::nativeFutexWaitImpl (addr=<optimized out>, expected=<optimized out>, absSystemTime=<optimized out>, absSteadyTime=<optimized out>, waitMask=<optimized out>) at fbcode/folly/detail/Futex.cpp:126
  #2  folly::detail::futexWaitImpl (futex=0x89, futex@entry=0x7f1c3ac2ef90, expected=994748889, absSystemTime=absSystemTime@entry=0x0, absSteadyTime=<optimized out>, absSteadyTime@entry=0x0, waitMask=waitMask@entry=1) at fbcode/folly/detail/Futex.cpp:254
  #3  0x0000000001d34bce in folly::detail::futexWait<std::atomic<unsigned int> > (futex=0x7f1c3ac2ef90, expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/detail/__futex__/headers/folly/detail/Futex-inl.h:96
  #4  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever::doWait (this=<optimized out>, futex=..., expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:718
  #5  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::futexWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1184
  #6  0x0000000001cd42b2 in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::yieldWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1151
  #7  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::waitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1109
  #8  0x0000000001e7e14c in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1664
  #9  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1356
  #10 folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lock_shared (this=0x7f1c3ac2ef90) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:495
  #11 std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >::shared_lock (this=<optimized out>, __m=...) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/shared_mutex:727
  #12 0x0000000002d765fd in folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::doLock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>, std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0>, 0> (mutex=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1493
  #13 folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::LockedPtr (this=0x7ffd2d5be968, parent=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1272
  #14 folly::SynchronizedBase<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, (folly::detail::SynchronizedMutexLevel)2>::rlock (this=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:229
  #15 watchman::Publisher::hasSubscribers (this=<optimized out>) at fbcode/watchman/PubSub.cpp:117
  #16 0x0000000002ecac20 in watchman::Log::log<char const (&) [59]> (this=<optimized out>, level=level@entry=watchman::ABORT, args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:42
  #17 0x0000000002ec9b24 in watchman::log<char const (&) [59]> (level=watchman::ABORT, args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:121
  #18 (anonymous namespace)::terminationHandler () at fbcode/watchman/SignalHandler.cpp:165
  #19 0x00007f1c3b0c7b3a in __cxxabiv1::__terminate (handler=<optimized out>) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
  #20 0x00007f1c3b0c7ba5 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:58
  #21 0x0000000002d8cde1 in std::thread::~thread (this=0x7f1c3ac2ef90) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:152
  #22 0x00007f1c3b3cc8f8 in __run_exit_handlers (status=1, listp=0x7f1c3b598658 <__exit_funcs>, run_list_atexit=<optimized out>, run_dtors=<optimized out>) at exit.c:113
  #23 0x00007f1c3b3cca0a in __GI_exit (status=<optimized out>) at exit.c:143
  #24 0x00007f1c3b3b165e in __libc_start_call_main (main=0x2d11220 <main(int, char**)>, argc=2, argv=0x7ffd2d5bec78) at ../sysdeps/nptl/libc_start_call_main.h:74
  #25 0x00007f1c3b3b1718 in __libc_start_main_impl (main=0x2d11220 <main(int, char**)>, argc=2, argv=0x7ffd2d5bec78, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffd2d5bec68) at ../csu/libc-start.c:409
  #26 0x0000000002d0e181 in _start () at ../sysdeps/x86_64/start.S:116

Reviewed By: xavierd

Differential Revision: D40166374

fbshipit-source-id: 7017e20234e5e0a9532eb61a63ac49ac0020d443