podman machine: flaky start #16789

Closed
edsantiago opened this issue Dec 8, 2022 · 12 comments
Labels: flakes (Flakes from Continuous Integration), locked - please file new issue/PR

Comments

@edsantiago (Member)

New flake in podman machine start. Sorry, I don't know enough about podman-machine to know what to report. Basically, e2e tests are failing:

         Waiting for VM to exit...
[+0213s] Machine "112b4a8a48f5" stopped successfully
         output: Waiting for VM to exit... Machine "112b4a8a48f5" stopped successfully
         /var/tmp/go/src/github.com/containers/podman/bin/podman-remote machine start 112b4a8a48f5 --no-info
         Starting machine "112b4a8a48f5"
         Waiting for VM ...
[+0229s] Mounting volume... /tmp/podman_test1144977279:/tmp/podman_test1144977279
         Error: exit status 255

...followed by a ginkgo "expected 125 to be 0".

podman machine start [It] start simple machine
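
For reference, a minimal reproduction sketch (not the project's actual ginkgo e2e test; it only mirrors the stop/start sequence from the log above, and assumes a podman binary on PATH plus an already-initialized machine whose name is passed as an argument):

```go
// repro.go: loop "podman machine stop" + "podman machine start" to try to
// hit the flake. This is a sketch, not the CI test itself.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("podman", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: repro <machine-name>")
		os.Exit(2)
	}
	name := os.Args[1]

	// The flake above shows "start" failing with "Error: exit status 255"
	// immediately after a clean "stop", so just alternate the two.
	for i := 1; i <= 20; i++ {
		if err := run("machine", "stop", name); err != nil {
			fmt.Printf("iteration %d: stop failed: %v\n", i, err)
			os.Exit(1)
		}
		if err := run("machine", "start", name, "--no-info"); err != nil {
			fmt.Printf("iteration %d: start failed: %v\n", i, err)
			os.Exit(1)
		}
	}
	fmt.Println("no failure reproduced in 20 iterations")
}
```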

edsantiago added the flakes (Flakes from Continuous Integration) label on Dec 8, 2022
@vrothberg (Member)

@ashley-cui @baude FYI

@ashley-cui (Member)

My initial reaction is that it's probably SSH-related.

@edsantiago (Member, Author)

New symptom?

  list machine: check if running while starting
Migrating machine ""
         Error: checking VM active: unexpected end of JSON input
         NAME           VM TYPE     CREATED        LAST UP        CPUS        MEMORY      DISK SIZE
         7ad7ab2c8b50*  qemu        7 seconds ago  7 seconds ago  1           2.147GB     107.4GB

This was on f37 aarch64 rootless. The same symptom seems to have happened in November, with the same setup.
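
For reference, Go's encoding/json returns exactly this error when handed empty or truncated input, so one plausible (unconfirmed) explanation is that `podman machine list` read the machine's JSON state file while it was still being written. A minimal sketch, using a hypothetical machineConfig struct rather than podman's real type:

```go
// jsonflake.go: show how reading an empty or partially written JSON file
// produces "unexpected end of JSON input". The struct below is hypothetical.
package main

import (
	"encoding/json"
	"fmt"
)

type machineConfig struct {
	Name    string `json:"Name"`
	Running bool   `json:"Running"`
}

func main() {
	var cfg machineConfig

	// An empty (not-yet-flushed) file yields exactly the error from the log.
	err := json.Unmarshal([]byte(""), &cfg)
	fmt.Println(err) // unexpected end of JSON input

	// A partially written file yields the same error.
	err = json.Unmarshal([]byte(`{"Name":"7ad7ab2c8b50","Runni`), &cfg)
	fmt.Println(err) // unexpected end of JSON input
}
```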

And no, this issue isn't stale; it's just that the quay.io and "happened during" flakes are much more prevalent.

@github-actions (bot) commented Mar 3, 2023

A friendly reminder that this issue had no activity for 30 days.

@vrothberg (Member)

Did it flake recently?

@edsantiago (Member, Author)

Most recently in March:

  • fedora-37-aarch64 : machine podman fedora-37-aarch64 rootless host boltdb

@edsantiago (Member, Author)

Speaking of the King of Rome... it triggered just now.

@vrothberg (Member)

@edsantiago, a lot of work went into making start+stop more robust. Have you seen this flake again recently?

@edsantiago (Member, Author)

Last seen in June.

@vrothberg (Member) commented Oct 19, 2023

> Last seen in June.

That aligns with some of my fixes from June through August addressing these issues.

github-actions bot added the locked - please file new issue/PR label on Jan 18, 2024
github-actions bot locked as resolved and limited conversation to collaborators on Jan 18, 2024