
Support Importing OCI Components #1060

Closed
runyontr opened this issue Nov 30, 2022 · 7 comments · Fixed by #1469
Assignees
Labels
enhancement ✨ New feature or request oci

Comments

@runyontr
Contributor

Is your feature request related to a problem? Please describe.
As a Zarf package developer I would like to be able to import centrally located zartifacts from an OCI registry.

Describe the solution you'd like
Ability to specify an OCI address in the import:

```yaml
kind: ZarfPackageConfig
metadata:
  name: "Stack v0.1"
  description: "Example of the stack deploying PodInfo"
  version: "###ZARF_PKG_VAR_BIGBANG_VERSION###"

components:
  - name: bigbang
    required: true
    import:
      oci: zartifacts.dev/bigbang:1.47.0
```

This would perform the same import as though that artifact's zarf.yaml were local.
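For comparison, the equivalent local-path import that Zarf supports today might look like this (the directory name is illustrative):

```yaml
components:
  - name: bigbang
    required: true
    import:
      # hypothetical local checkout of the same package
      path: ../bigbang
```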

@runyontr
Contributor Author

What is the difference between how sget and OCI artifacts are accessed? During development on the Zarf Controller there was an issue pulling sget artifacts with Flux.

How should we specify the key for signing OCI components?

@runyontr
Contributor Author

Some ideas after talking with @jeff-mccoy

OCI Layers

The current implementation of how Zarf packages get uploaded via an OCI object is using cosign upload blob which uploads the entire zarf package as a single layer. While easy to do, it doesn't allow for some optimizations in sharing components since the entire package needs to be pulled in order to extract the component definition being referenced.

When importing a component, zarf pulls the component definition from the imported zarf package and does some adjustment to the definition to make it work in the package (e.g. adjusting paths based on where the current zarf.yaml lives).
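As an illustration of that path adjustment (file names here are hypothetical), a relative source in the imported component's definition gets rewritten relative to the importing zarf.yaml:

```yaml
# In the imported package (../defense-unicorns-distro/zarf.yaml):
components:
  - name: bigbang
    files:
      - source: files/config.yaml   # relative to the imported zarf.yaml
        target: /opt/config.yaml

# After composition into the importing package, the path is adjusted:
components:
  - name: bigbang
    files:
      - source: ../defense-unicorns-distro/files/config.yaml
        target: /opt/config.yaml
```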

Slim Zarf Package

One idea was to have a slim zarf package that just contains the metadata for the component definition, but is not packaged with artifacts, just files and manifests. When importing this component from an OCI object like this, the consumer of the component would have pointers to all the data needed to build its package without the time/complexity of pulling a (potentially multi-GB) artifact repo.

Cons of this approach:

  1. The slim zarf package isn't actually functional.
  2. If consuming this in an environment that doesn't have access to the upstream artifacts, the build won't work, e.g. shipping an artifact to an air-gapped system and then using that as a component of a new package.
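A slim package's zarf.yaml might look something like the following sketch (the fields and upstream URL shown are illustrative, not an agreed schema):

```yaml
kind: ZarfPackageConfig
metadata:
  name: bigbang-slim
  version: "1.47.0"

components:
  - name: bigbang
    required: true
    # pointers only -- the chart is resolved from upstream at build
    # time rather than being bundled into this artifact
    charts:
      - name: bigbang
        url: https://repo1.dso.mil/big-bang/bigbang.git
        version: "1.47.0"
```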

Leverage OCI Layers more effectively

OCI Layers have a lot of power. There are efficiencies gained by leveraging layers in storing container images by allowing the same changesets to be shipped independent of the entire image itself. There could be comparable efficiencies within zarf packages by leveraging layers, but the implementation of how best to do this is unknown without more exploration:

  1. Ship the zarf.yaml as its own layer (or maybe zarf.yaml + files/manifests?) and images in another layer
    a. Pro: This would allow pulling the needed pieces of the component for importing separately from all the larger artifacts
  2. Have each component be its own layer
    a. This would allow pulling just the one component you need, and providing all the artifacts needed for that component
    b. Unsure how this would allow for the storage efficiencies of image layers that are shared between components
  3. Some other more granular breakup of the zarf package into layers

@Racer159
Contributor

Racer159 commented Jan 18, 2023

Something that may make sense to mesh well with also hosting the images on the same registry would be:

  1. (if applicable) each component gets a layer that contains its configuration info (charts/manifests/values)
  2. (if applicable) each component gets a layer that contains its raw data (dataInjections/repos/files)
  3. (if applicable) each image layer is separated out of the images.tar and uploaded
  4. (if applicable) the sboms get their own layer
  5. The root zarf.yaml gets its own layer
  6. (init package only) the seed image/zarf-injector gets its own layer
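Sketched as an OCI image manifest (shown in YAML for readability; the media types and annotation keys are hypothetical, not a finalized spec), that layering might look like:

```yaml
# Hypothetical OCI manifest for a layered zarf package
mediaType: application/vnd.oci.image.manifest.v1+json
layers:
  - mediaType: application/vnd.zarf.config.v1+yaml       # 5. root zarf.yaml
  - mediaType: application/vnd.zarf.component.config.v1  # 1. charts/manifests/values
    annotations: { org.zarf.component: bigbang }
  - mediaType: application/vnd.zarf.component.data.v1    # 2. dataInjections/repos/files
    annotations: { org.zarf.component: bigbang }
  - mediaType: application/vnd.oci.image.layer.v1.tar+gzip  # 3. individual image layers
  - mediaType: application/vnd.zarf.sboms.v1             # 4. sboms
```

An importing package could then fetch only the zarf.yaml and component-config layers it needs, leaving the large data and image layers unpulled.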

This would enable component reuse (assuming new manifests/values weren't injected on a compose) and should enable images to exist as actual pull-able images and as parts of a larger zarf package.

The big question would be which parts of composed components get overwritten or added to, and which parts could therefore make sense as their own layers. For example, dataInjections would likely never be overwritten by a downstream component and could take up GBs of space. repos are similar since they can also be GBs in size, and again I haven't seen an example of that field changing through composability. Charts may be another "data" piece as well.

@Racer159
Contributor

Also for reference the current Zarf tarball breakdown looks like this:

```
zarf.yaml
seed-image.tar # init-package only
zarf-injector  # init-package only
images.tar
sboms
components
+ <component-name>
  + files
  + charts
  + repos
  + manifests
  + data
  + values
```
@runyontr
Contributor Author

Thinking more about some use cases, I think we'll also need to support pulling from private OCI registries. When pulling images from private registries, Zarf leverages the Docker credentials file. A comparable paradigm here would be for the environment building the artifact to log in to the OCI registry with docker in order to pull the zarf package.
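Under that model the import itself would not change; only the credential resolution differs (the registry host below is hypothetical):

```yaml
components:
  - name: bigbang
    required: true
    import:
      # private registry; auth would be resolved from ~/.docker/config.json,
      # populated by a prior `docker login registry.internal.example.com`
      oci: registry.internal.example.com/zarf/bigbang:1.47.0
```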

@Racer159
Contributor

As another note if we want the BB compose to work we will also need it to be added to mergeComponentOverrides in https://github.com/defenseunicorns/zarf/blob/main/src/pkg/packager/compose.go#L181

Racer159 added a commit that referenced this issue Mar 30, 2023
## Description

When using a zarf.yaml to extend an existing bigbang definition like:

```yaml
components:
  - name: cocowow
    required: true
    import:
      path: ../defense-unicorns-distro
      name: bigbang
    extensions:
      bigbang:
        version: "###ZARF_PKG_VAR_BIGBANG_VERSION###"
        valuesFiles:
        # look at imports and then merging here for composition
        - ../values/authservice.yaml
        - ../values/keycloak.yaml
```
Zarf ignores the provided values files in this zarf.yaml.

## Related Issue

Relates to #1060 

## Type of change

- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Other (security config, docs update, etc)

## Checklist before merging

- [ ] Test, docs, adr added or updated as needed
- [ ] [Contributor Guide Steps](https://github.com/defenseunicorns/zarf/blob/main/CONTRIBUTING.md#developer-workflow) followed

---------

Signed-off-by: Tom Runyon <tom@defenseunicorns.com>
Co-authored-by: Wayne Starr <Racer159@users.noreply.github.com>
@Racer159 Racer159 modified the milestones: v0.25.x, v0.26-rc Apr 6, 2023
@Racer159 Racer159 modified the milestones: v0.26.0, v0.26.1 Apr 18, 2023
@Racer159
Contributor

Moving back to milestone v0.26.0 (m3) to allow more time to refactor.

@Racer159 Racer159 modified the milestones: v0.26 (m2), v0.26 (m3) Apr 25, 2023