# tracker: logically bound app images #128

## Comments
In the automotive world we often think of containers as two possible things: either they come with the system and are updated atomically with it, or they are separately installed. The way we expect this to work is for the system ones to be installed in a separate image store that is part of the ostree image, and then the "regular" containers will just be stored in /var/lib/containers. The automotive SIG manifests ship a storage.conf that has:
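The storage.conf contents were elided above; a plausible sketch of the kind of configuration being described (paths are illustrative assumptions, not copied from the actual automotive SIG manifests), wiring a read-only store under /usr alongside the default writable one:

```toml
# /usr/share/containers/storage.conf (hypothetical fragment)
[storage]
driver = "overlay"
# Writable store for "regular", separately installed containers
graphroot = "/var/lib/containers/storage"

[storage.options]
# Read-only store shipped as part of the ostree image
additionalimagestores = [
  "/usr/lib/containers/storage",
]
```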
Then we install containers in the image with osbuild like:
This was part of the driver for the need for composefs to be able to contain overlayfs base dirs (overlay nesting), although that is less important if containers/storage also uses composefs.
I love the idea of additional stores for this.
Quadlet supports the […]. Cc: @ygalblum
So IMO this issue is exactly about having […].
I understand that, and I merely pointed out how we currently do it in automotive, not how it would be done with bootc. Instead, what I propose is essentially: Dockerfile:
my-app.container:
And then you have an osbuild manifest that just deploys the above image like any normal image. Of course, instead of open-coding the commands like this, a tool could do the right thing automatically. You might also want the tool to tweak the image name in the quadlet to contain the actual digest so we know that the exact right image version is used every time.
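The Dockerfile and my-app.container contents in this proposal were elided above; a hypothetical sketch of what such a pair of files might look like (image names, the store path, and unit contents are illustrative assumptions, not the author's originals). The Dockerfile copies the app image into an additional image store baked into /usr, using skopeo's containers-storage transport with an explicit storage root:

```Dockerfile
FROM quay.io/example/base-os:latest

# Install the quadlet unit that runs the bound app
COPY my-app.container /usr/share/containers/systemd/

# Copy the app image into the additional image store under /usr
# (hypothetical store path; [driver@root] selects a non-default storage root)
RUN skopeo copy \
      docker://quay.io/example/my-app:1.0 \
      "containers-storage:[overlay@/usr/lib/containers/storage]quay.io/example/my-app:1.0"
```

And a minimal quadlet unit referencing the image:

```ini
# my-app.container (hypothetical)
[Unit]
Description=Example logically bound app

[Container]
Image=quay.io/example/my-app:1.0

[Install]
WantedBy=multi-user.target
```

This assumes the additional store is listed in storage.conf so podman can resolve the image at runtime.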
It's also interesting to reflect on the composefs efficiency in a setup like this. If we use composefs for the final ostree image, we will get perfect content sharing, even if each of the individual additional image stores uses its own composefs objects dir, and even if no effort is made to try to share object files between image store directories, because all the files will eventually be deduplicated as part of the full ostree composefs image. In fact, we will even deduplicate files between image stores that use the traditional overlayfs or vfs container store formats.
In fact, maybe using the vfs backend is the right approach here? It is a highly stable on-disk format, and it's going to be very efficient to start such a container. And we can ignore all the storage inefficiencies, because they are taken care of by the outer composefs image.
Just wanted to note that […].
I wonder if we should tweak the base images to have a standardized /usr location for additional image store images. |
/usr/lib/containers/storage?
@rhatdan Yeah, that sounds good to me. Can we perhaps just always add it to our /usr/share/containers/storage.conf file?
You want that in the default storage.conf in containers/storage? |
If you set up an empty additional store you need to precreate the directories and lock files. This is what we are doing to set up an empty additional store. We should fix this in containers/storage to create these files and directories if they do not exist.
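The setup commands were elided above; a sketch of the kind of pre-creation being described, assuming the overlay driver's on-disk layout (the store path is a hypothetical example, and the directory/lock-file names follow containers/storage conventions rather than being copied from the original comment):

```shell
#!/bin/sh
set -eu

# Hypothetical location for the empty additional image store
store=/tmp/example-additional-store

# containers/storage expects these directories and lock files to exist
# before it will read the directory as an additional (read-only) store.
mkdir -p "$store/overlay" "$store/overlay-images" "$store/overlay-layers"
touch "$store/overlay-images/images.lock"
touch "$store/overlay-layers/layers.lock"
```

As noted below in the thread, newer containers/storage may tolerate an empty (but existing) directory, making this workaround unnecessary.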
@rhatdan Would it maybe be possible instead to have containers/storage fail gracefully when the directory doesn't exist?
Yes, that is the way it should work. If I have time I will look at it. Basically, ignore the storage if it is empty.
Actually, I just tried it out: as long as the additional image store directory exists, the store seems to work. No need for those additional files and directories.
If the additional store directory is empty, podman will write to the empty directory and create the missing content. If the file system is read-only, it fails.
So, I've been thinking about the details around this for a while, in particular about the best storage for these additional image directories. The natural approach would be to use the overlay backend, as we can then use overlay mounts for the actual container, but this has some issues. First of all, historically, ostree doesn't support whiteout files. This has been recently fixed, although even that fix requires adding custom options to ostree. In addition, if ostree is using composefs, there are some issues with encoding both the whiteouts and the overlayfs xattrs in the image. These are solved by the overlay xattr escape support I have added in the most recent kernel, although we don't yet have that backported into the CS9 kernel.

However, I wonder if using overlay directories for the additional image dir is even the right approach. All the files in the additional image dir will be deduplicated by ostree anyway, so maybe it would be better if we used an approach more like the vfs backend, where each layer is completely squashed (and then we rely on the wrapping ostree to de-duplicate these). Such a layer would be faster to set up and use (since it is shallower), and would fix all the issues regarding whiteouts and overlay xattrs. I see two approaches for this:
Opinions?
One other thing to ponder here is related to #518. Basically, if you look at this from a spec/status perspective, we effectively have a clear spec that is readable by external tooling: the "symlink farm". It's not reflected in […]. What we don't directly have is status; while I think we'll end up doing the […], perhaps the status is just a boolean […]. I also wonder if we may need an explicit verb to re-synchronize in the case of a […].
@vrothberg can we dig in a bit into the high level design here of whether this should use additional stores or not? In the current code, it doesn't. I see pros and cons to both approaches.

One way to think about this is that I see a continuum between "floating", "logically bound", and "physically bound". With "physically bound", the images are officially read-only […]. However... IMO, for logically bound images I can see people also wanting to do dynamic updates to them apart from a […].

Take e.g. an OpenShift control plane node with etcd. We want etcd always there by default - but it's also totally sane and valid to rev etcd for a hotfix apart from updating the host. The images being in the "default mutable /var/lib/containers" storage makes the use case of dynamic updates work pretty seamlessly I believe, whereas a separate additional store I think introduces some confusion/friction there.

The choice of an additional store for logically bound images is pretty consequential though, and so I think it makes sense to try to figure it out now. Tangential, but if we do choose to use an additional store, I think we should put it under […].
If you do an update on an image in the primary store, it will use the tag in the primary store. For example, if I had alpine:latest in an additional store and did a podman pull alpine that downloaded a different image into the primary store, then podman images and all tools would use the alpine:latest in the primary store. This could be an issue if later we pulled an image into the bootc image additional store that is newer than the alpine in the primary store.

Bottom line: for now we could just indicate in the .image and .container files to use an additional store to protect the images. But this would force the quadlets to always use the images in the additional store. The big advantage of the additional store is that we have it now, and do not need to wait for some future podman release.
@rhatdan It's a bit unclear to me, are you arguing for or against using an additional store by default for logically bound images? (And does the answer depend on "short term" vs "medium term"?)
I am giving point/counterpoint. I don't think we necessarily want bootc to force an additional store, but we might want to take advantage of one in RHEL AI. A lot of this is thinking out loud. But I think we could just use standard stores and tell users "don't do that"; if they attempt a podman image prune, bootc or starting a quadlet would pull the image again.
As Dan mentioned, if you have image A in an additional store and force-pull a newer one, it will be pulled into the primary store. The primary will always take precedence over additional stores when looking up local images.
I think we can always construct a situation where the user may do something they shouldn't do. We cannot protect against that. I think additional stores are the way to go as they were designed with this use case (read-only images) in mind. For Quadlets in general I see benefits of using additional stores as it's one more protection from the user accidentally removing an image. |
OK. I'm increasingly convinced, however the basic mechanics of wiring this up are going to be somewhat nontrivial. |
🆕 #659 landed with very MVP functionality; however, I think we should have basic docs and tests next. Beyond "absolute MVP" functionality, here are things like:
Can you also change the /usr/lib/bootc-experimental/bound-images.d directory name to just /usr/lib/bootc/bound-images.d?
IOW you want the image to be not experimental and maintained ~forever? |
I want the concept to be managed forever. If RHEL AI uses it, we need it for the next X years. If you change the format of the files, I don't care. But starting out by saying something is experimental in the RHEL world should be a non-starter.
We're already shipping things classified as experimental (xref #690); I think it's an essential way to get feedback without committing to an interface immediately. As far as stability, in theory we could allow usage of an experimental interface; we just need to keep it around as long as the known consumers use it. That all said, OK... the feature as it exists today is sufficiently small that perhaps it can just be stable to start for the next release.
I don't care if you document something as experimental, but putting it into the file system makes it difficult to transition when it is no longer experimental. I just want the directory renamed.
This was changed in #714 |
OK, the more I play with this the more I am coming to the conclusion it makes sense to put bound images in the "bootc storage". Which...doesn't yet exist, but should. I will write up a separate issue. EDIT: done in #721 |
So... an interesting semantic with logically bound images as they exist today (writing to the default shared […]) is that […]. But... when that does happen, the updated bound image is immediately visible. Some people will want to invoke e.g. […]. It would hence feel more predictable to me if we made logically bound images default to only appearing in their referenced root. It is more likely that we can implement that on top of #721, but it's still quite nontrivial.

OTOH... as I said in some other place, I can actually see it being quite useful for users to pre-update logically bound images (can I acronym as LBI? just here? ok yes, LBIs) outside of the default host update lifecycle. But if we go that path... it seems certainly far cleaner to offer an explicit […]. Or... of course alternatively, […].
@ckyrouac opinions on ⬆️ ?
Maybe we just for now strongly discourage floating tags for LBIs, and document the semantic that they will only update when the host changes. |
Something I also am realizing related to this is that […]. There's a lot of advantages to that, but it would be hard to do in a Containerfile flow today without going all the way to something like FROM oci-archive.
I think this makes the most sense. I haven't had a chance to look closely at your draft PR to use an additional store for bound images, but this is how I expect it to work: e.g. when upgrading a bootc system that has a new bound image, we would pull the bound image into the staged root's storage. This seems to make the most sense if the additional store will be in /usr, which is not supposed to change. Since we'll no longer be using the shared storage, I think we'll need to first check the booted root for the image and copy it to the staged root, or something similar, to avoid re-downloading the image every upgrade. That doesn't address the issue of how to handle floating tags, though. I'm not really sure how we can make binding to […] work.
## Logically bound images

Current documentation: https://containers.github.io/bootc/experimental-logically-bound-images.html

Original feature proposal text:
We should support a mechanism where some container images are "lifecycle bound" to the base bootc image.
A common advantage/disadvantage of the below is that the user must manage multiple container images for system installs - e.g. for a disconnected/offline install they must all be mirrored, not just one.
In this model, the app images would only be referenced from the base image as `.image` files or an equivalent. This contrasts with physically bound images.

### bootc logically bound flow

`bootc upgrade` follows a flow like: […]

### Current design: symlink to `.image` or `.container` files

Introduce `/usr/lib/bootc/bound-images.d`, which contains symlinks to `.image` files or `.container` files.

Pros:

- `:sha256` digest in one place to update

Cons:

- A `.image` file is intended to pull images, not to be parsed by an external tool for a separate purpose.

Note: we expect the `.image` files to reference images by digest or immutable tag. There is no mechanism to pull images out of band.

### Other alternatives considered
#### New custom config file

A new TOML file in `/usr/lib/bootc/bound-images.d`, of the form e.g. `01-myimages.toml`:
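The TOML example itself was elided above; a hypothetical sketch of what such a file might contain (the `[[images]]` table and `image` field name are guesses at a schema, not a settled format):

```toml
# /usr/lib/bootc/bound-images.d/01-myimages.toml (hypothetical schema)
[[images]]
image = "quay.io/example/my-app@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
```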
Pros:

- A dedicated format, rather than overloading the `.image` file

Cons:

- Duplicates information in `.image` files
- `:sha256` digest in two places to update in general (both in a `.container` or `.image` and the custom `.toml` here)

#### Parse existing `.image` files

Pros:

- […]
Cons:
- Requires a `bootc=bound` label or equivalent opt-in

What would happen under the covers here is that bootc would hook into podman and pull the referenced images as part of `bootc upgrade`.
TODO:

- `bootc install to-filesystem` - simple scenario w/out pull secret?
- simple scenario w/out pull secret?The text was updated successfully, but these errors were encountered: