
Sometimes starting an uncleanly stopped instance results in /var/lib/docker mounted on rootfs #3348

Closed
balopat opened this issue Nov 16, 2018 · 2 comments
Labels
  • co/kubeadm — Issues relating to kubeadm
  • ev/hung-start
  • help wanted — Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/bug — Categorizes issue or PR as related to a bug.
  • lifecycle/stale — Denotes an issue or PR has remained open with no activity and has become stale.
  • priority/awaiting-more-evidence — Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@balopat
Contributor

balopat commented Nov 16, 2018

BUG REPORT

Environment:

  • Minikube version (use minikube version): v0.29.0
  • OS (e.g. from /etc/os-release): macOS
  • VM Driver: virtualbox
  • ISO version: v0.29.0

What happened:
The minikube VM was shut down uncleanly when my machine restarted.
After that, minikube start hangs at "Starting cluster components...".
I checked where /var/lib/docker was mounted, and it was on rootfs:

$ cd /var/lib/docker
$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
rootfs             0     0     0    - /

What you expected to happen:

Normally /var/lib/docker is mounted on /dev/sda1:

$ cd /var/lib/docker
$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        17G  1.3G   14G   9% /var/lib/docker

How to reproduce it (as minimally and precisely as possible):
Not sure. It happens intermittently, but the repro is probably something like this:

  1. minikube start
  2. unclean shutdown (kill vm)
  3. minikube start
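To make the check above repeatable, here is a small sketch of a shell helper that inspects which filesystem a directory is mounted on and flags the rootfs case. It is a hypothetical diagnostic, not part of minikube; you would run it inside the VM (e.g. via minikube ssh), and the function name and warning text are my own.

```shell
#!/bin/sh
# Hypothetical helper: report which device a directory is mounted on,
# and warn when it is rootfs (i.e. the persistent disk never got mounted).
check_docker_mount() {
  mountpoint="${1:-/var/lib/docker}"
  # df prints a header line, then the mount info; take the source from row 2.
  source=$(df "$mountpoint" | awk 'NR==2 {print $1}')
  if [ "$source" = "rootfs" ]; then
    echo "WARNING: $mountpoint is on rootfs"
    return 1
  fi
  echo "$mountpoint is on $source"
}
```

Inside a healthy minikube VM this should report /dev/sda1 for /var/lib/docker; after an unclean shutdown that hit this bug, it would warn about rootfs.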

Anything else we need to know:

Based on my research, this is also what happens when you run minikube start again while minikube is already running (#3284). This behavior prevents Docker from creating new containers due to lack of disk space.

@balopat balopat added kind/bug Categorizes issue or PR as related to a bug. co/kubeadm Issues relating to kubeadm ev/hung-start labels Nov 16, 2018
@tstromberg tstromberg added priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jan 23, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and are eventually closed.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2019
@tstromberg
Contributor

I'm closing this issue as it hasn't seen activity in a while, and it's unclear whether it still exists. If this issue persists in the most recent release of minikube, please feel free to re-open it.

Thank you for opening the issue!
