
models, kubernetes: add new provider-id setting #2192

Merged
merged 2 commits into bottlerocket-os:develop on Jun 8, 2022

Conversation

@etungsten (Contributor) commented Jun 8, 2022

Issue number:
N/A

Description of changes:


migrations: migrate new 'settings.kubernetes.provider-id'
We added a new setting for configuring kubelet's provider-id option.

models, kubernetes: add new provider-id setting
This adds a new `settings.kubernetes.provider-id` setting for configuring the `providerID` item in the kubelet config.
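
For context, new settings in Bottlerocket are usually accompanied by an "add setting" data store migration. The sketch below is a minimal illustration of that pattern using the `migration-helpers` crate's `AddSettingsMigration`; the crate layout and exact wiring used in this PR are assumptions, not copied from the change:

```rust
// Hypothetical migration sketch (not the PR's actual file): on upgrade the new
// setting is allowed through, and on downgrade it is removed so older versions
// don't see an unknown key.
use migration_helpers::common_migrations::AddSettingsMigration;
use migration_helpers::{migrate, Result};
use std::process;

fn run() -> Result<()> {
    migrate(AddSettingsMigration(&["settings.kubernetes.provider-id"]))
}

fn main() {
    if let Err(e) = run() {
        eprintln!("{}", e);
        process::exit(1);
    }
}
```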

Testing done:
Built and ran metal-k8s-1.22.
The node joins the cluster without issue.
When I update the `settings.kubernetes.provider-id` setting, the config is rendered correctly:

bash-5.1# apiclient set settings.kubernetes.provider-id=tinkerbell://eksa-system/worker1
bash-5.1# cat /etc/kubernetes/kubelet/config        
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
....

providerID: tinkerbell://eksa-system/worker1
resolvConf: "/etc/resolv.conf"
...

Kubelet also restarts successfully and re-registers the node:

bash-5.1# systemctl status kubelet
● kubelet.service - Kubelet
     Loaded: loaded (/x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─exec-start.conf
     Active: active (running) since Wed 2022-06-08 02:59:02 UTC; 3min 25s ago
       Docs: https://github.com/kubernetes/kubernetes
    Process: 85750 ExecStartPre=/sbin/iptables -P FORWARD ACCEPT (code=exited, status=0/SUCCESS)
    Process: 85751 ExecStartPre=/usr/bin/host-ctr --containerd-socket=/run/dockershim.sock --namespace=k8s.io pull-image --source=${POD_INFRA_CONTAINER_IMAGE} --registry-config=/etc/host-containers/host-ctr.toml (code=exited, status=0/SUCCESS)
   Main PID: 85763 (kubelet)
      Tasks: 35 (limit: 37730)
     Memory: 61.2M
        CPU: 2.487s
     CGroup: /runtime.slice/kubelet.service
             └─85763 /usr/bin/kubelet --cloud-provider "" --kubeconfig /etc/kubernetes/kubelet/kubeconfig --bootstrap-kubeconfig /etc/kubernetes/kubelet/bootstrap-kubeconfig --config /etc/kubernetes/kubelet/config --container-runtime=remote --container-runtime-endpoint=unix:///run/dockershim.sock --containerd=/run/dockershim.sock --network-plugin cni --root-dir /var/lib/kubelet --cert-dir /var/lib/kubelet/pki --node-ip 10.61.248.114 --node-labels "" --register-with-taints "" --pod-infra-container-image public.ecr.aws/eks-distro/kubernetes/pause:3.5

If I set the setting in user data:

[settings.kubernetes]
...
provider-id = "tinkerbell://eksa-system/worker"

kubelet runs fine and registers the node with providerID as expected:

$ kubectl --kubeconfig ./cluster-kubeconfigs/br-test-122-eks-a-cluster.kubeconfig get node eksa-node10 -o yaml
...
spec:
  podCIDR: 192.168.9.0/24
  podCIDRs:
  - 192.168.9.0/24
  providerID: tinkerbell://eksa-system/worker

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@etungsten etungsten changed the title Kubelet provider models, kubernetes: add new provider-id setting Jun 8, 2022
@etungsten etungsten requested review from bcressey and zmrow June 8, 2022 00:34
@bcressey (Contributor) left a comment


LGTM.

sources/models/src/lib.rs (review thread resolved)
@etungsten etungsten marked this pull request as ready for review June 8, 2022 03:20
@webern (Contributor) commented Jun 8, 2022

Would aws-* users want this to be set by early-boot-config to the instance id? Maybe I misunderstand its purpose.

@etungsten (Contributor, Author) commented Jun 8, 2022

> Would aws-* users want this to be set by early-boot-config to the instance id? Maybe I misunderstand its purpose.

No, this is mostly for nodes operating in a cluster without a cloud provider (e.g. a bare-metal on-prem cluster). Normally a cloud provider (like AWS EKS) sets the providerID directly in the node.spec when the node initializes. In environments where no such cloud provider exists, kubelet can set the providerID via its config. This is useful when bootstrapping a bare-metal node; the kubelet-provided providerID can be used to signal to Cluster API controllers that a given node has initialized.

@etungsten (Contributor, Author) commented

The push above rebases onto develop.

@etungsten (Contributor, Author) commented

The push above adds the kubelet providerID configuration to the 1.23 kubelet config.

@etungsten etungsten merged commit 5d69ebc into bottlerocket-os:develop Jun 8, 2022
@etungsten etungsten deleted the kubelet-provider-id branch June 8, 2022 17:10