TimeoutStartSec in podman generate systemd #11618
@vrothberg PTAL
The units use sdnotify now. Maybe podman should send
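The truncated suggestion above presumably refers to systemd's startup-timeout extension: a Type=notify service may repeatedly send EXTEND_TIMEOUT_USEC over the notification socket while a long operation (such as an image pull) is still in progress. This is my reading of the cut-off sentence, not something the thread confirms. A sketch of the protocol messages involved:

```
# Messages a Type=notify service can write to $NOTIFY_SOCKET:
EXTEND_TIMEOUT_USEC=60000000   # "give me 60 more seconds"; may be repeated
READY=1                        # sent once the container is actually up
```

systemd honors EXTEND_TIMEOUT_USEC as long as each extension request arrives before the previously granted deadline expires.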
@Luap99 I think that would be a great solution! The longer startup period is usually only required when pulling images.

I fiddled around with this a bit more yesterday and found that in my case downloading a 1.2G (uncompressed) homeassistant image took maybe 20 to 30 minutes. The download itself was rather fast, but the processing until the manifest was written to the image destination and the pull command completed took the vast majority of the time. I did some I/O monitoring and tests and found that the industrial SD card used in the Pi 4 reads and writes at roughly 20-40 MB/s. The sysstats showed CPU utilization at around 0% while load15 was 10-15 and iowait was around 80-90%. I'm not entirely sure, but I assume the pull command puts a lot of I/O pressure on the device / SD card, and with the card being rather slow the system almost halts on I/O wait. I have two thoughts on this:
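To put a number on the iowait observations above, a one-liner like this (a sketch for Linux; sysstat's iostat or sar gives richer per-interval data) reports the cumulative iowait share since boot from /proc/stat:

```shell
# Field 6 of the aggregate "cpu" line in /proc/stat is the iowait jiffies
# counter; divide it by the sum of all fields to get the overall share.
awk '/^cpu /{total=0; for(i=2;i<=NF;i++) total+=$i;
     printf "iowait: %.1f%%\n", 100*$6/total}' /proc/stat
```

Note this is an average over the whole uptime; for the spikes described above, sampling twice and diffing the counters (as iostat does) is more telling.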
Whoops, wrong link.
@w4tsn can you elaborate on why you think Docker performs better on your Pi? Personally, I think we could make the TimeoutStartSec configurable for users. Pulling images for 20 to 30 minutes at boot sounds concerning. Using
@vrothberg Hmm, actually it's more a hunch than a measured fact, now that I think about it. I've been working with Raspberry Pis using industrial SD cards for a while now, and around 2-3 years ago we switched from Raspberry Pi OS to Fedora IoT and podman. Since then I've always experienced that the commit phase of retrieving images takes a really long time. Now that I've monitored the iowait during podman pull, I know for a fact that it's pretty bad with those SD cards, but I didn't double-check by installing Raspberry Pi OS on our current Pis and SD cards (they might have changed over the years) and doing docker pull operations. I'll do this to verify that it is indeed a difference between docker and podman.

I've now tested with Raspberry Pi OS on the same hardware (Pi + SD card) and I'm seeing iowait around 50-75% while the system load is at 5, and it takes around 10-15 minutes to download the homeassistant image. That's also quite a lot, and I think we might have switched to more rugged but really slow SD cards in the past. Nevertheless, podman seems to take a bit longer and puts higher iowait on the system.

Apart from that, we don't observe this problem on the CM4 with eMMC, which is significantly faster than the SD cards. While I still think it's useful to be able to control these timeout settings, using proper storage with reasonable read/write performance reduces the problem significantly.

On a side note: using quadlet this should already be configurable
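For reference, the quadlet route mentioned above: in a quadlet .container file, plain systemd sections are passed through to the generated unit, so a start timeout can be set directly. This is a sketch; the file path, unit name, and image are examples, and the exact pass-through behavior should be checked against the podman-systemd.unit documentation:

```ini
# ~/.config/containers/systemd/homeassistant.container (example path)
[Container]
Image=ghcr.io/home-assistant/home-assistant:stable

[Service]
# Ordinary systemd option, copied into the generated service unit:
TimeoutStartSec=900
```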
A friendly reminder that this issue had no activity for 30 days.
@vrothberg @w4tsn What is up with this issue?
I think of adding a
I think a
SGTM
Add a new flag to set the start timeout for a generated systemd unit. To make naming consistent, add a new --stop-timeout flag as well and let the previous --time map to it. Fixes: containers#11618 Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
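Going by the commit message above, usage could look like this (a sketch: the flag names are taken from the commit message, the timeout values are arbitrary, and the exact contents of the generated unit may differ):

```console
$ podman generate systemd --new --start-timeout 300 --stop-timeout 70 mycontainer
# The generated unit would then carry something like:
#   TimeoutStartSec=300
```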
/kind feature
Description
I noticed that podman generate systemd --new does not add a TimeoutStartSec= anymore (v3.3.1), leaving it up to the system defaults configured in e.g. /etc/systemd/system.conf. The default on Fedora Linux seems to be the systemd default of 90s. On low-performance devices, with slow storage (SD cards) or in bad network conditions, the startup of a container unit can take much more than 90s when images first have to be pulled, especially when starting 4 to 6 containers at boot.
On a Raspberry Pi 4 quite overloaded with 6 containers and a boot load5 of 10 (it's usually around 2 to 3 once things have settled), it's impossible to get the containers started because systemd will kill them off after 90s while their images are still being pulled.
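As a workaround on such a device, the 90s default can be raised per unit with a systemd drop-in (a sketch; the unit name container-foo.service is a placeholder for the actual generated unit):

```ini
# /etc/systemd/system/container-foo.service.d/override.conf
[Service]
TimeoutStartSec=15min
```

After creating the drop-in, run systemctl daemon-reload. TimeoutStartSec accepts plain seconds or time spans such as 15min, and the value infinity disables the start timeout entirely.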
What do you think about this? What is the best practice or experience here?
Should the command include a larger TimeoutStartSec as it did in the past? Should it be added via an optional flag, or just manually (with maybe a hint/notice in the docs) if a user knows that they want to operate on e.g. Fedora ARM or IoT on a Raspberry Pi? Or maybe we should aim at setting different defaults on those platforms?
podman version
podman system info