
Clear k3s docker containers after stop/uninstall #1469

Closed

Lohann opened this issue Feb 29, 2020 · 13 comments

@Lohann

Lohann commented Feb 29, 2020

Version:

  • k3s: k3s version v1.17.2+k3s1 (cdab19b)
  • Docker: Docker version 19.03.6, build 369ce74a3c
  • Operating System: CentOS Linux 7 (Core) - Linux 3.10.0-1062.12.1.el7.x86_64

Describe the bug
K3s doesn't stop the docker containers after running k3s-killall.sh and doesn't remove them after k3s-uninstall.sh.

To Reproduce

  1. Start k3s: k3s server --docker or curl -sfL https://get.k3s.io | sh -s - --docker
  2. List the k3s docker containers: docker ps -a --filter "name=k8s_"
  3. Stop the k3s service: k3s-killall.sh or systemctl stop k3s.service
  4. The containers are still running: docker ps -a --filter "name=k8s_"
  5. Uninstall: k3s-uninstall.sh
  6. The containers are still there: docker ps -a --filter "name=k8s_"
  7. If you install and start k3s again, new containers are created and the old ones keep running (the same steps are consolidated as a script below).
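
For reference, a minimal sketch of the reproduction as a single shell session (assumes a systemd-based install from get.k3s.io on a host where Docker is already running):

    # install and start k3s against the host Docker daemon
    curl -sfL https://get.k3s.io | sh -s - --docker
    # kubelet-managed containers are prefixed with k8s_
    docker ps -a --filter "name=k8s_"
    # stop the service (or run k3s-killall.sh)
    sudo systemctl stop k3s.service
    docker ps -a --filter "name=k8s_"    # containers are still running
    # uninstall k3s entirely
    sudo /usr/local/bin/k3s-uninstall.sh
    docker ps -a --filter "name=k8s_"    # containers are still there, now orphaned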

Expected behavior

  • K3s must stop its containers after running systemctl stop k3s.service or k3s-killall.sh
  • K3s must resume its containers after running systemctl start k3s.service
  • K3s must remove its containers after running k3s-uninstall.sh

If this is the expected behavior, maybe a flag could be added to the scripts (a rough sketch follows the list):

  • k3s-killall.sh --stop-containers
  • k3s-uninstall.sh --remove-containers
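
Purely as an illustration of the proposal (neither flag exists today, and the real scripts may differ), the two scripts could gate an opt-in cleanup like this (assumes an xargs that supports -r, as GNU and busybox xargs do):

    # hypothetical addition to k3s-killall.sh
    if [ "${1:-}" = "--stop-containers" ]; then
        docker ps -q --filter "name=k8s_" | xargs -r docker stop
    fi

    # hypothetical addition to k3s-uninstall.sh
    if [ "${1:-}" = "--remove-containers" ]; then
        docker ps -aq --filter "name=k8s_" | xargs -r docker rm -f
    fi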

Workaround
I use the following command to delete the k3s containers:
docker stop $(docker ps -a -q --filter "name=k8s_") | xargs docker rm
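
For what it's worth, docker stop prints the IDs of the containers it stops, which is what feeds xargs docker rm above. A variant that also tolerates the case of no matching containers (assuming an xargs that supports -r) would be:

    docker ps -aq --filter "name=k8s_" | xargs -r docker rm -f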

@Lohann Lohann changed the title from "Clean k3s docker containers after stop/uninstall" to "Clear k3s docker containers after stop/uninstall" on Feb 29, 2020
@benfairless

I am not affiliated with the project, but I expect the reason containers do not get stopped when k3s stops is to prevent kubelet crashes from killing containers. AFAIK this is aligned with upstream behaviour.

Putting in a flag on a stop command does sound reasonable though.

@Lohann
Author

Lohann commented Mar 1, 2020

@benfairless I agree; my problem is with the k3s built-in containers (traefik, local-path-provisioner, metrics-server, etc.) that are created and accumulate every time I run systemctl stop k3s.service followed by systemctl start k3s.service.

In my opinion, if the containers are already created, there's no need to deploy them again after restarting k3s.
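
To make the accumulation visible, something like this (just a sketch) lists all kubelet-managed container names sorted, so repeated deployments of the same workload end up next to each other:

    docker ps -a --filter "name=k8s_" --format '{{.Names}}' | sort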

@embcla

embcla commented Mar 16, 2020

Just saw this: on a DigitalOcean droplet, I stopped the k3s service and the containers are still there.

@consideRatio
Contributor

I became aware of this because it drained too many resources on my laptop after k3s-uninstall.sh. Thank you so much for your work making this reproducible, comprehensible, and possible to work around, @Lohann!

@brandond
Contributor

brandond commented May 22, 2020

k3s should NOT stop containers when stopping the service. This would break the ability to nondisruptively restart k3s for upgrades. The k3s-killall.sh script is available if you want to kill all the pods after k3s is down; if you want to stop things gracefully you can use kubectl drain to move things off the node before stopping k3s.

@Lohann The pods should not all be duplicated every time you restart k3s. Are you running rootless by any chance? Can you provide docker ps -a output showing the duplicate containers?
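
For the graceful path described above, the sequence might look roughly like this (node name is a placeholder; older kubectl versions use --delete-local-data instead of --delete-emptydir-data):

    # evict workloads from the node before stopping k3s
    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
    sudo systemctl stop k3s.service
    # ... upgrade or maintenance ...
    sudo systemctl start k3s.service
    kubectl uncordon <node-name>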

@danielefranceschi

danielefranceschi commented Oct 6, 2020

/usr/local/bin/k3s-uninstall.sh deletes itself and all the other scripts, and k3s-killall.sh does not stop the docker processes, so the pods can only be killed via the docker CLI.

(k3s stable installed via curl'ed script)

@stale

stale bot commented Jul 31, 2021

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label Jul 31, 2021
@leoluk

leoluk commented Jul 31, 2021

Still a thing

@stale stale bot removed the status/stale label Jul 31, 2021
@gcheron

gcheron commented Nov 29, 2021

k3s should NOT stop containers when stopping the service. This would break the ability to nondisruptively restart k3s for upgrades. The k3s-killall.sh script is available if you want to kill all the pods after k3s is down; if you want to stop things gracefully you can use kubectl drain to move things off the node before stopping k3s.

@Lohann The pods should not all be duplicated every time you restart k3s. Are you running rootless by any chance? Can you provide docker ps -a output showing the duplicate containers?

@brandond my standalone server, on which my k3s cluster is running, has rebooted. Usually the service restarts without error, but this time it did not, and all my docker containers are stopped. What should I do? Should I restart them one by one (docker start ...)?

@embcla

embcla commented Nov 29, 2021

@Lohann my standalone server on which my k3s cluster is running has rebooted, usually the service restarts without error, but here it does not and all my docker containers are stopped. What should I do? Should I restart (docker start ...) them one by one?

This is unrelated to the topic above; I suggest you open your own thread.

@stale

stale bot commented May 28, 2022

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label May 28, 2022
@stale stale bot closed this as completed Jun 11, 2022
@choikangjae

How on earth is this not resolved after all this time?
After I ran /usr/local/bin/k3s-uninstall.sh, there are tons of docker containers left, still up and running.
Should I really stop and delete all of them by myself?

@brandond
Contributor

Should I really stop and delete all of them by myself?

Yes. K3s only cleans up after the things it installs itself - such as the bundled containerd and pods running in the bundled containerd. If you use an external container runtime (either via --container-runtime-endpoint or --docker) you need to clean the pods out of that runtime yourself.

The same is true of other items that can be disabled and replaced with your own selection - for example, if you disable flannel and install a different CNI, that CNI may create files or directories that our uninstall script will not remove.
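
For the Docker case specifically, a manual cleanup after uninstalling might look like this (a sketch; assumes an xargs that supports -r and that your pod containers use the default k8s_ name prefix):

    # force-remove every kubelet-managed container left behind in Docker
    docker ps -aq --filter "name=k8s_" | xargs -r docker rm -f
    # optionally reclaim disk from images no longer used by any container
    # (note: this prunes ALL unused images, not only the ones k3s pulled)
    docker image prune -a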
