0.32.0 resolving_with_many_equivalent_backtracking fails on mips #6491

Closed
infinity0 opened this issue Dec 27, 2018 · 6 comments · Fixed by #6596
Labels
A-dependency-resolution Area: dependency resolution and the resolver A-testing-cargo-itself Area: cargo's tests C-bug Category: bug

Comments

@infinity0
Contributor

failures:

---- resolve::resolving_with_many_equivalent_backtracking stdout ----
thread 'resolve::resolving_with_many_equivalent_backtracking' panicked at 'assertion failed: start.elapsed() < Duration::from_secs(60)', tests/testsuite/support/resolver.rs:121:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
   1: std::sys_common::backtrace::print
   2: std::panicking::default_hook::{{closure}}
   3: std::panicking::default_hook
   4: std::panicking::rust_panic_with_hook
   5: std::panicking::begin_panic
             at libstd/panicking.rs:411
   6: testsuite::support::resolver::resolve_with_config_raw
             at tests/testsuite/support/resolver.rs:121
   7: testsuite::support::resolver::resolve_with_config
             at tests/testsuite/support/resolver.rs:64
   8: testsuite::support::resolver::resolve
             at tests/testsuite/support/resolver.rs:26
   9: testsuite::resolve::resolving_with_many_equivalent_backtracking
             at tests/testsuite/resolve.rs:630
  10: testsuite::resolve::resolving_with_many_equivalent_backtracking::{{closure}}
             at tests/testsuite/resolve.rs:586
  11: core::ops::function::FnOnce::call_once
             at libcore/ops/function.rs:238


failures:
    resolve::resolving_with_many_equivalent_backtracking

test result: FAILED. 1448 passed; 1 failed; 1 ignored; 0 measured; 0 filtered out
@infinity0 infinity0 added the C-bug Category: bug label Dec 27, 2018
@Eh2406 Eh2406 added A-dependency-resolution Area: dependency resolution and the resolver A-testing-cargo-itself Area: cargo's tests labels Dec 27, 2018
@Eh2406
Contributor

Eh2406 commented Dec 27, 2018

Is there some way for me to experiment on that system? Does it happen reliably or only sometimes? Is it possible that the hardware in question is just very slow?

The assertion that fired is a smoke test that just checks whether the wall time is less than 60 sec. On our CI setup that test takes at most 30 sec., but it is possible that everything is working correctly, just slowly.

Sorry this test is giving you trouble.
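
(For reference, the guard being described is just a wall-clock check wrapped around the resolver call; a minimal sketch of the pattern, simplified from tests/testsuite/support/resolver.rs, is shown below.)

    use std::time::{Duration, Instant};

    fn resolve_with_timing_guard() {
        let start = Instant::now();
        // ... run the resolver under test here ...
        // Smoke test: fail loudly if resolution ran unreasonably long,
        // rather than letting CI appear to hang.
        assert!(start.elapsed() < Duration::from_secs(60));
    }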

@infinity0
Contributor Author

Ah, thanks very much for the context. Our mips machines (and I think mips machines in general) are indeed pretty slow: 2-3 hours on average to build cargo, vs. 15 minutes on amd64. I will experiment with simply increasing the timeout; hopefully these failures will go away.

@Eh2406
Contributor

Eh2406 commented Dec 27, 2018

Without certain optimizations, resolving_with_many_equivalent_backtracking would take days in release mode on a fast computer. The wall-time assertion is mostly so that we get a clean test failure like the one here, instead of the kind of non-response you found in #6490.

I would be up for adding an environment variable to control the timeout, if that fixes your problems.

@infinity0
Contributor Author

Bumping the duration from 60 to 240 seems to have worked, but we get another failure; I see the timeout there is 30, so I'll bump it up to 120. Perhaps cargo could adopt these numbers? Alternatively, a nicer way to make this "more obvious" for future people would be to add a multiplier in each case, so that it can be raised for architectures that are mostly represented by slow machines, like mips (as sketched below).
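
(As an illustration only, not code from this thread: a per-architecture multiplier on the existing timeouts could look like the sketch below; the 4x factor and the constant name are arbitrary.)

    use std::time::Duration;

    // Illustrative sketch: raise test timeouts on architectures whose build
    // machines are typically slow. The 4x factor is arbitrary.
    #[cfg(target_arch = "mips")]
    const SLOW_ARCH_MULTIPLIER: u64 = 4;
    #[cfg(not(target_arch = "mips"))]
    const SLOW_ARCH_MULTIPLIER: u64 = 1;

    // The assertion in resolver.rs would then use this instead of a literal 60:
    fn resolver_timeout() -> Duration {
        Duration::from_secs(60 * SLOW_ARCH_MULTIPLIER)
    }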

@infinity0
Contributor Author

For clarity, the full patch is:

--- a/tests/testsuite/support/resolver.rs
+++ b/tests/testsuite/support/resolver.rs
@@ -118,7 +118,7 @@
 
     // The largest test in our suite takes less then 30 sec.
     // So lets fail the test if we have ben running for two long.
-    assert!(start.elapsed() < Duration::from_secs(60));
+    assert!(start.elapsed() < Duration::from_secs(240));
     resolve
 }
 
--- a/tests/testsuite/concurrent.rs
+++ b/tests/testsuite/concurrent.rs
@@ -511,7 +511,7 @@
     }
 
     for _ in 0..n_concurrent_builds {
-        let result = rx.recv_timeout(Duration::from_secs(30)).expect("Deadlock!");
+        let result = rx.recv_timeout(Duration::from_secs(120)).expect("Deadlock!");
         execs().run_output(&result);
     }
 }

@Eh2406
Contributor

Eh2406 commented Jan 24, 2019

I like the "multiplier" idea. PR #6596. I also grepped for Duration to try to find out whether this is the complete list, but Cargo has a lot of tests, so I may not have found them all.

bors added a commit that referenced this issue Jan 24, 2019
Some CI setups are much slower than the equipment used by Cargo itself

This adds a "CARGO_TEST_SLOW_CPU_MULTIPLIER" environment variable that increases all the timeouts used in the tests, and disables proptest shrinking in non-tty cases.

Closes: #6491
CC: #6490, @infinity0
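
(The exact implementation in #6596 is not shown in this thread; the sketch below shows one way a CARGO_TEST_SLOW_CPU_MULTIPLIER variable could scale the timeouts. The helper name slow_cpu_multiplier is illustrative, not necessarily what the PR uses.)

    use std::time::Duration;

    // Illustrative sketch: scale a base number of seconds by the value of
    // CARGO_TEST_SLOW_CPU_MULTIPLIER, defaulting to 1 when the variable is
    // unset or unparsable.
    fn slow_cpu_multiplier(base_secs: u64) -> Duration {
        let multiplier: u64 = std::env::var("CARGO_TEST_SLOW_CPU_MULTIPLIER")
            .ok()
            .and_then(|v| v.parse().ok())
            .unwrap_or(1);
        Duration::from_secs(base_secs * multiplier)
    }

    // The assertions above would then read, e.g.:
    // assert!(start.elapsed() < slow_cpu_multiplier(60));
    // let result = rx.recv_timeout(slow_cpu_multiplier(30)).expect("Deadlock!");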