Can I use a locally built image for the runner? It keeps trying to pull the image from the registry #3513

Open
linustannnn opened this issue May 12, 2024 · 7 comments
Labels
community (Community contribution) · enhancement (New feature or request) · needs triage (Requires review from the maintainers)

Comments


linustannnn commented May 12, 2024

What would you like added?

I want to use a custom image that I built locally. I've checked that the image exists locally (docker images, output below), and I tried overriding imagePullPolicy to Never in runner-scale-set/values.yaml, but it's still trying to pull the image from the registry.

tail values.yaml -n 20
  ##               storageClassName: "local-path"
  ##               resources:
  ##                 requests:
  ##                   storage: 1Gi
  spec:
    containers:
      - name: runner
        imagePullPolicy: Never
        image: github-runner:latest
        command: ["/home/runner/run.sh"]

## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
# controllerServiceAccount:
#   namespace: arc-system
#   name: test-arc-gha-runner-scale-set-controller
docker images
REPOSITORY                                        TAG        IMAGE ID       CREATED         SIZE
github-runner                                     latest     ee0e556e28ce   25 hours ago    1.89GB
docker                                            dind       1feaad25659a   3 days ago      365MB
ghcr.io/actions/actions-runner                    latest     9f541e249fd4   10 days ago     1.14GB
registry.k8s.io/kube-apiserver                    v1.30.0    c42f13656d0b   3 weeks ago     117MB
registry.k8s.io/kube-controller-manager           v1.30.0    c7aad43836fa   3 weeks ago     111MB
registry.k8s.io/kube-scheduler                    v1.30.0    259c8277fcbb   3 weeks ago     62MB
registry.k8s.io/kube-proxy                        v1.30.0    a0bf559e280c   3 weeks ago     84.7MB
ghcr.io/actions/gha-runner-scale-set-controller   0.9.1      9ffa9943ca6b   3 weeks ago     199MB
registry.k8s.io/etcd                              3.5.12-0   3861cfcd7c04   3 months ago    149MB
registry.k8s.io/coredns/coredns                   v1.11.1    cbb01a7bd410   9 months ago    59.8MB
registry.k8s.io/pause                             3.9        e6f181688397   19 months ago   744kB
gcr.io/k8s-minikube/storage-provisioner           v5         6e38f40d628d   3 years ago     31.5MB
kubectl -n arc-runners get pods
NAME                                           READY   STATUS                  RESTARTS   AGE
self-hosted-x64-linux-api-nz55j-runner-2lms2   0/2     Init:ImagePullBackOff   0          18s
self-hosted-x64-linux-api-nz55j-runner-ws72n   0/2     Init:ImagePullBackOff   0          18s
kubectl -n arc-runners describe pod self-hosted-x64-linux-api-nz55j-runner-2lms2 | tail -n 20
    SizeLimit:  <unset>
  kube-api-access-ntn9m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m47s                default-scheduler  Successfully assigned arc-runners/self-hosted-x64-linux-api-nz55j-runner-2lms2 to arc
  Normal   Pulling    78s (x4 over 2m46s)  kubelet            Pulling image "github-runner:latest"
  Warning  Failed     78s (x4 over 2m46s)  kubelet            Failed to pull image "github-runner:latest": Error response from daemon: pull access denied for github-runner, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  Warning  Failed     78s (x4 over 2m46s)  kubelet            Error: ErrImagePull
  Warning  Failed     67s (x6 over 2m46s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    52s (x7 over 2m46s)  kubelet            Back-off pulling image "github-runner:latest"



Contributor

Hello! Thank you for filing an issue.

The maintainers will triage your issue shortly.

In the meantime, please take a look at the troubleshooting guide for bug reports.

If this is a feature request, please review our contribution guidelines.

@geekflyer

You probably have to nest the spec under template, i.e.:

template:
  spec:
    containers:
      - name: runner
        imagePullPolicy: Never
        image: github-runner:latest
        command: ["/home/runner/run.sh"]

Just a guess, as I can't see the rest of your values.yaml.


linustannnn commented May 13, 2024

Yeah, it's nested already:

cat ~/arc-configuration/runner-scale-set/values.yaml | tail -n 100
#     - name: side-car
#       image: example-sidecar

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  ## template.spec will be modified if you change the container mode
  ## with containerMode.type=dind, we will populate the template.spec with following pod spec
  ## template:
  ##   spec:
  ##     initContainers:
  ##     - name: init-dind-externals
  ##       image: ghcr.io/actions/actions-runner:latest
  ##       command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
  ##       volumeMounts:
  ##         - name: dind-externals
  ##           mountPath: /home/runner/tmpDir
  ##     containers:
  ##     - name: runner
  ##       image: ghcr.io/actions/actions-runner:latest
  ##       command: ["/home/runner/run.sh"]
  ##       env:
  ##         - name: DOCKER_HOST
  ##           value: unix:///var/run/docker.sock
  ##       volumeMounts:
  ##         - name: work
  ##           mountPath: /home/runner/_work
  ##         - name: dind-sock
  ##           mountPath: /var/run
  ##     - name: dind
  ##       image: docker:dind
  ##       args:
  ##         - dockerd
  ##         - --host=unix:///var/run/docker.sock
  ##         - --group=$(DOCKER_GROUP_GID)
  ##       env:
  ##         - name: DOCKER_GROUP_GID
  ##           value: "123"
  ##       securityContext:
  ##         privileged: true
  ##       volumeMounts:
  ##         - name: work
  ##           mountPath: /home/runner/_work
  ##         - name: dind-sock
  ##           mountPath: /var/run
  ##         - name: dind-externals
  ##           mountPath: /home/runner/externals
  ##     volumes:
  ##     - name: work
  ##       emptyDir: {}
  ##     - name: dind-sock
  ##       emptyDir: {}
  ##     - name: dind-externals
  ##       emptyDir: {}
  ######################################################################################################
  ## with containerMode.type=kubernetes, we will populate the template.spec with following pod spec
  ## template:
  ##   spec:
  ##     containers:
  ##     - name: runner
  ##       image: ghcr.io/actions/actions-runner:latest
  ##       command: ["/home/runner/run.sh"]
  ##       env:
  ##         - name: ACTIONS_RUNNER_CONTAINER_HOOKS
  ##           value: /home/runner/k8s/index.js
  ##         - name: ACTIONS_RUNNER_POD_NAME
  ##           valueFrom:
  ##             fieldRef:
  ##               fieldPath: metadata.name
  ##         - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
  ##           value: "true"
  ##       volumeMounts:
  ##         - name: work
  ##           mountPath: /home/runner/_work
  ##     volumes:
  ##       - name: work
  ##         ephemeral:
  ##           volumeClaimTemplate:
  ##             spec:
  ##               accessModes: [ "ReadWriteOnce" ]
  ##               storageClassName: "local-path"
  ##               resources:
  ##                 requests:
  ##                   storage: 1Gi
  spec:
    containers:
      - name: runner
        imagePullPolicy: Never
        image: github-runner:latest
        command: ["/home/runner/run.sh"]

## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
# controllerServiceAccount:
#   namespace: arc-system
#   name: test-arc-gha-runner-scale-set-controller

And it's still trying to pull the image even though it's already built locally.
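(For anyone debugging the same thing, a quick way to check what the rendered pod actually got; the pod name here is just the one from the output above:

kubectl -n arc-runners get pod self-hosted-x64-linux-api-nz55j-runner-2lms2 \
  -o jsonpath='{range .spec.containers[*]}{.name}: {.image} / {.imagePullPolicy}{"\n"}{end}'

If the runner container reports Always or IfNotPresent instead of Never, the override in values.yaml never reached the pod spec.)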

@geekflyer

How did you build the image, and where did you push it? I just tried this out myself via:

template:
  spec:
    containers:
      - name: runner
        image: my-private-registry.io/my-image

and it works for me.
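(For the registry route, a minimal sketch, reusing the placeholder name from the snippet above:

docker build -t my-private-registry.io/my-image .
docker push my-private-registry.io/my-image

Once the image is pullable by the cluster, no imagePullPolicy override is needed.)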

@hicksjacobp

@linustannnn what's your kubernetes implementation?

With minikube, I use minikube image build. I've also had various degrees of success with minikube image load.
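(A minimal sketch of both approaches, assuming the github-runner:latest tag used in this thread:

# build directly inside minikube's container runtime
minikube image build -t github-runner:latest .

# or build on the host, then copy the image into minikube
docker build -t github-runner:latest .
minikube image load github-runner:latest

Either way, minikube image ls should then list the image from the cluster's point of view.)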


linustannnn commented May 15, 2024

Yeah, I used minikube as well: I called minikube docker-env and then docker build . -t github-runner:latest, but got the error in the original comment. I then used helm install to apply the config. If I push the image to my private repo and pull it from there, it works, @geekflyer, but I'm wondering whether I can use an image that has only been built locally. Does minikube image build work? @hicksjacobp
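(For reference, the docker-env flow only takes effect if the exports are eval'd into the current shell; running minikube docker-env by itself just prints them:

# point the local docker CLI at minikube's Docker daemon
eval $(minikube docker-env)
docker build -t github-runner:latest .

# confirm the image is visible to minikube's runtime, not just the host's
minikube image ls | grep github-runner

This also assumes minikube is using the Docker runtime; with containerd, minikube image build or minikube image load is the way to go.)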


rdvansloten commented Jun 15, 2024

@linustannnn I had the same issue, and it turns out that when you start specifying your own spec block, you need to delete/comment out the containerMode block (containerMode.type). So don't set it to dind or kubernetes; toss out the entire thing, because it overwrites any custom config with its own template. Very counterintuitive.
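(A sketch of what that looks like in values.yaml, reusing the image and command from earlier in this thread; the commented-out containerMode block is the part being removed:

# containerMode:
#   type: "dind"
template:
  spec:
    containers:
      - name: runner
        image: github-runner:latest
        imagePullPolicy: Never
        command: ["/home/runner/run.sh"]

With containerMode gone, the chart uses template.spec as-is instead of replacing it with the dind/kubernetes pod template.)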
