Split server setup fails on etcd-only, cp-only config #8672

Closed
ShylajaDevadiga opened this issue Oct 17, 2023 · 4 comments
Labels: kind/bug (Something isn't working), status/blocker

@ShylajaDevadiga
Contributor

Environmental Info:
K3s Version:
k3s version v1.28.2+k3s-b8dc9553

Node(s) CPU architecture, OS, and Version:
Ubuntu 22.04

Cluster Configuration:
1 etcd-only
1 cp-only
1 agent

Describe the bug:
After the etcd-only node is created, the server-only node panics a few seconds after joining the cluster.

Steps To Reproduce:

  1. Create an etcd-only node
  2. Join a server-only (cp-only) node
  3. Check node status
etcd-only
$ cat /etc/rancher/k3s/config.yaml
token: secret
disable-apiserver: true
disable-scheduler: true
disable-controller-manager: true
node-taint: "node-role.kubernetes.io/etcd=true:NoExecute"

cp-only
$ cat /etc/rancher/k3s/config.yaml 
token: secret
server: https://3.142.53.3:6443/
disable-etcd: true
node-taint: "node-role.kubernetes.io/control-plane=true:NoSchedule"

$ kubectl get nodes
NAME               STATUS   ROLES                  AGE   VERSION
ip-172-31-13-164   Ready    etcd                   2s    v1.28.2+k3s-b8dc9553
ip-172-31-7-59     Ready    control-plane,master   5s    v1.28.2+k3s-b8dc9553

A few seconds later, the apiserver becomes unavailable:

$ kubectl get nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes)

Expected behavior:
The cluster should come up with the etcd-only and cp-only configuration, and the agent should be able to join.
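
For the agent, a config along the following lines would be expected (a sketch; the server address is illustrative and should point at a node running the apiserver, i.e. the cp-only node):

$ cat /etc/rancher/k3s/config.yaml
token: secret
server: https://<cp-only-node-ip>:6443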

Actual behavior:
A panic appears in the logs on the cp-only node after a short while.

Additional context / logs:
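
The entries below are from the cp-only node; a minimal sketch of following them on a systemd-based install (assuming the default k3s unit name):

$ sudo journalctl -u k3s -f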

Oct 17 18:12:00 ip-172-31-7-59 k3s[2670]: time="2023-10-17T18:12:00Z" level=info msg="Handling backend connection request [ip-172-31-7-59]"
Oct 17 18:12:00 ip-172-31-7-59 k3s[2670]: time="2023-10-17T18:12:00Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Oct 17 18:12:00 ip-172-31-7-59 k3s[2670]: time="2023-10-17T18:12:00Z" level=info msg="Handling backend connection request [ip-172-31-13-164]"
Oct 17 18:12:05 ip-172-31-7-59 k3s[2670]: time="2023-10-17T18:12:05Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Oct 17 18:12:10 ip-172-31-7-59 k3s[2670]: time="2023-10-17T18:12:10Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Oct 17 18:12:15 ip-172-31-7-59 k3s[2670]: time="2023-10-17T18:12:15Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Oct 17 18:12:16 ip-172-31-7-59 k3s[2670]: F1017 18:12:16.668593    2670 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
Oct 17 18:12:16 ip-172-31-7-59 k3s[2670]: time="2023-10-17T18:12:16Z" level=fatal msg="apiserver panic: F1017 18:12:16.668593    2670 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded\n" stack="goroutine 590437 [running]:\nruntime/debug.Stack()\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0x65\ngithub.com/k3s-io/k3s/pkg/daemons/executor.(*Embedded).APIServer.func1.1()\n\t/go/src/github.com/k3s-io/k3s/pkg/daemons/executor/embed.go:128 +0x39\npanic({0x4e6c0a0, 0xc0563b03d0})\n\t/usr/local/go/src/runtime/panic.go:884 +0x213\nk8s.io/klog/v2.(*loggingT).output(0x95ebf00, 0x3, 0x0, 0xc00050a540, 0x1, {0x76cf480?, 0x1?}, 0x10?, 0x0)\n\t/go/pkg/mod/github.com/k3s-io/klog/v2@v2.100.1-k3s1/klog.go:932 +0x6de\nk8s.io/klog/v2.(*loggingT).printfDepth(0x37e11d600?, 0xf?, 0x0, {0x0, 0x0}, 0x1?, {0x59e4489, 0x19}, {0xc0563b0380, 0x1, ...})\n\t/go/pkg/mod/github.com/k3s-io/klog/v2@v2.100.1-k3s1/klog.go:737 +0x1f8\nk8s.io/klog/v2.(*loggingT).printf(...)\n\t/go/pkg/mod/github.com/k3s-io/klog/v2@v2.100.1-k3s1/klog.go:718\nk8s.io/klog/v2.Fatalf(...)\n\t/go/pkg/mod/github.com/k3s-io/klog/v2@v2.100.1-k3s1/klog.go:1624\nk8s.io/kubernetes/pkg/controlplane.(*Config)
@brandond
Member

brandond commented Oct 17, 2023

A similar panic was seen on 1.25: https://drone-pr.k3s.io/k3s-io/k3s/7616/3/3

E1016 19:12:25.042209      80 runtime.go:77] Observed a panic: context deadline exceeded
goroutine 141686 [running]:
k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1()
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/filters/timeout.go:109 +0x9c
panic({0x4ca4d00, 0x88d3340})
        /usr/local/go/src/runtime/panic.go:884 +0x213
k8s.io/apiextensions-apiserver/pkg/registry/customresource.NewStorage({{_, _}, {_, _}}, {{0xc00e11cca8, 0x13}, {0xc0a547ef00, 0x8}, {0xc0a547f290, 0xc}}, ...)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiextensions-apiserver@v1.25.14-k3s1/pkg/registry/customresource/etcd.go:71 +0x8d7
k8s.io/apiextensions-apiserver/pkg/apiserver.(*crdHandler).getOrCreateServingInfoFor(0xc0027d4420, {0xc0a599c720?, 0x24?}, {0xc0a599c6f0, 0x21})
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiextensions-apiserver@v1.25.14-k3s1/pkg/apiserver/customresource_handler.go:805 +0x1b13
k8s.io/apiextensions-apiserver/pkg/apiserver.(*crdHandler).ServeHTTP(0xc0027d4420, {0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiextensions-apiserver@v1.25.14-k3s1/pkg/apiserver/customresource_handler.go:303 +0x955
k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00a357080, {0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/mux/pathrecorder.go:249 +0x430
k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0a5d765a0?, {0x5d03680?, 0xc0104aa060?}, 0x0?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/mux/pathrecorder.go:235 +0x73
k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x528ea48?, 0x0?}, 0xc001e7fcb0?, 0xc00034b9d0?}, {0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/handler.go:154 +0x6fe
k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0012034c0, {0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/mux/pathrecorder.go:255 +0x588
k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0a5d765a0?, {0x5d03680?, 0xc0104aa060?}, 0x0?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/mux/pathrecorder.go:235 +0x73
k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x5261a72?, 0xc042d66000?}, 0xc002318750?, 0xc000a0e2a0?}, {0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/handler.go:154 +0x6fe
k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc09916af30?, {0x5d03680?, 0xc0104aa060?}, 0x59?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-aggregator@v1.25.14-k3s1/pkg/apiserver/handler_proxy.go:124 +0x172
k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0a20eef00, {0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/mux/pathrecorder.go:249 +0x430
k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0a5d765a0?, {0x5d03680?, 0xc0104aa060?}, 0x4b0248?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/mux/pathrecorder.go:235 +0x73
k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x5266531?, 0x232249f?}, 0xc00704fc20?, 0xc007047420?}, {0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/handler.go:154 +0x6fe
k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
net/http.HandlerFunc.ServeHTTP(0x5d05610?, {0x5d03680?, 0xc0104aa060?}, 0x4?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filters/authorization.go:64 +0x4f4
net/http.HandlerFunc.ServeHTTP(0x0?, {0x5d03680?, 0xc0104aa060?}, 0x1120406?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178
net/http.HandlerFunc.ServeHTTP(0xc00ce4f320?, {0x5d03680?, 0xc0104aa060?}, 0x9c1e8a?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
net/http.HandlerFunc.ServeHTTP(0xc0a71dc7c0?, {0x5d03680?, 0xc0104aa060?}, 0xc00f676b40?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func2.9()
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/filters/priority-and-fairness.go:292 +0xf6
k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle.func2()
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/util/flowcontrol/apf_filter.go:195 +0x1e7
k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish.func1(0xc00eb56000?, 0xc000be0d18?, 0x410327?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:368 +0x65
k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish(0xc00427da70?, 0xd?)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:369 +0x45
k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle(0xc00147fe00, {0x5d05610?, 0xc00ce4f530}, {0xc00cb244d0, {0x5d088d0, 0xc0a71dc780}}, 0xc003f96540?, 0x1?, 0x1?, 0xc0aa23a3c0)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/util/flowcontrol/apf_filter.go:183 +0x76b
k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func2({0x5d03680?, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/filters/priority-and-fairness.go:295 +0xcaa
net/http.HandlerFunc.ServeHTTP(0x0?, {0x5d03680?, 0xc0104aa060?}, 0x1120406?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178
net/http.HandlerFunc.ServeHTTP(0xc00ce4f320?, {0x5d03680?, 0xc0104aa060?}, 0x9c1e8a?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
net/http.HandlerFunc.ServeHTTP(0xab1c5ed5923f82a4?, {0x5d03680?, 0xc0104aa060?}, 0x12835b01d807aa98?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filters/impersonation.go:50 +0x21c
net/http.HandlerFunc.ServeHTTP(0x0?, {0x5d03680?, 0xc0104aa060?}, 0x1120406?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178
net/http.HandlerFunc.ServeHTTP(0xc00ce4f320?, {0x5d03680?, 0xc0104aa060?}, 0x9c1e8a?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
net/http.HandlerFunc.ServeHTTP(0x0?, {0x5d03680?, 0xc0104aa060?}, 0x1120406?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178
net/http.HandlerFunc.ServeHTTP(0xc00ce4f320?, {0x5d03680?, 0xc0104aa060?}, 0x9c1e8a?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5
net/http.HandlerFunc.ServeHTTP(0x5d05610?, {0x5d03680?, 0xc0104aa060?}, 0x5c8ba18?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filters.withAuthentication.func1({0x5d03680, 0xc0104aa060}, 0xc00c601a00)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filters/authentication.go:105 +0x6af
net/http.HandlerFunc.ServeHTTP(0x5d055d8?, {0x5d03680?, 0xc0104aa060?}, 0x5c95ce8?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5d03680, 0xc0104aa060}, 0xc00c601700)
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/endpoints/filterlatency/filterlatency.go:89 +0x330
net/http.HandlerFunc.ServeHTTP(0xc0996c1ba0?, {0x5d03680?, 0xc0104aa060?}, 0x741692?)
        /usr/local/go/src/net/http/server.go:2122 +0x2f
k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1()
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/filters/timeout.go:114 +0x70
created by k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
        /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver@v1.25.14-k3s1/pkg/server/filters/timeout.go:100 +0x1d8

@brandond
Member

Seeing the same thing with modules synced to upstream release/1.28 and using the etcd-io release-3.5 branch.

Oct 17 21:29:23 ip-172-31-10-150 k3s[10671]: time="2023-10-17T21:29:23Z" level=fatal msg="apiserver panic: F1017 21:29:23.119167   10671 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
" stack="goroutine 604361 [running]:
runtime/debug.Stack()
	/usr/local/go/src/runtime/debug/stack.go:24 +0x65
github.com/k3s-io/k3s/pkg/daemons/executor.(*Embedded).APIServer.func1.1()
	/go/src/github.com/k3s-io/k3s/pkg/daemons/executor/embed.go:128 +0x39
panic({0x4e62700, 0xc065bddd10})
	/usr/local/go/src/runtime/panic.go:884 +0x213
k8s.io/klog/v2.(*loggingT).output(0x95d6c20, 0x3, 0x0, 0xc0007a42a0, 0x1, {0x76bf6e2?, 0x1?}, 0x10?, 0x0)
	/go/pkg/mod/github.com/k3s-io/klog/v2@v2.100.1-k3s1/klog.go:932 +0x6de
k8s.io/klog/v2.(*loggingT).printfDepth(0x37e11d600?, 0xf?, 0x0, {0x0, 0x0}, 0x1?, {0x59d7ab3, 0x19}, {0xc065bddcc0, 0x1, ...})
	/go/pkg/mod/github.com/k3s-io/klog/v2@v2.100.1-k3s1/klog.go:737 +0x1f8
k8s.io/klog/v2.(*loggingT).printf(...)
	/go/pkg/mod/github.com/k3s-io/klog/v2@v2.100.1-k3s1/klog.go:718
k8s.io/klog/v2.Fatalf(...)
	/go/pkg/mod/github.com/k3s-io/klog/v2@v2.100.1-k3s1/klog.go:1624
k8s.io/kubernetes/pkg/controlplane.(*Config).createLeaseReconciler(0xc000377080)
	/go/pkg/mod/github.com/k3s-io/kubernetes@v1.28.2-k3s1/pkg/controlplane/instance.go:291 +0x2e5
k8s.io/kubernetes/pkg/controlplane.(*Config).createEndpointReconciler(0xc000377080)
	/go/pkg/mod/github.com/k3s-io/kubernetes@v1.28.2-k3s1/pkg/controlplane/instance.go:304 +0x12d
k8s.io/kubernetes/pkg/controlplane.(*Config).Complete(0xc000377080)
	/go/pkg/mod/github.com/k3s-io/kubernetes@v1.28.2-k3s1/pkg/controlplane/instance.go:354 +0x6fd
k8s.io/kubernetes/cmd/kube-apiserver/app.(*Config).Complete(0xc0549fe660)
	/go/pkg/mod/github.com/k3s-io/kubernetes@v1.28.2-k3s1/cmd/kube-apiserver/app/config.go:61 +0x3f
k8s.io/kubernetes/cmd/kube-apiserver/app.Run({0xc000470c00?}, 0x0?)
	/go/pkg/mod/github.com/k3s-io/kubernetes@v1.28.2-k3s1/cmd/kube-apiserver/app/server.go:164 +0x334
k8s.io/kubernetes/cmd/kube-apiserver/app.NewAPIServerCommand.func2(0xc000924300?, {0xc0006d9b80?, 0x28?, 0x40?})
	/go/pkg/mod/github.com/k3s-io/kubernetes@v1.28.2-k3s1/cmd/kube-apiserver/app/server.go:119 +0xf1
github.com/spf13/cobra.(*Command).execute(0xc000924300, {0xc0003d2800, 0x28, 0x40})
	/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x862
github.com/spf13/cobra.(*Command).ExecuteC(0xc000924300)
	/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(0x0?, {0x6555b08?, 0xc0008f8cd0?})
	/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985 +0x4a
github.com/k3s-io/k3s/pkg/daemons/executor.(*Embedded).APIServer.func1()
	/go/src/github.com/k3s-io/k3s/pkg/daemons/executor/embed.go:131 +0x74
created by github.com/k3s-io/k3s/pkg/daemons/executor.(*Embedded).APIServer
	/go/src/github.com/k3s-io/k3s/pkg/daemons/executor/embed.go:124 +0xdd
"

@ShylajaDevadiga
Contributor Author

Initial testing was done using commit 6de9566 from the dev branch:

$ kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
ip-172-31-2-236   Ready    etcd                   78m   v1.28.2+k3s-6de9566c
ip-172-31-3-46    Ready    <none>                 36s   v1.28.2+k3s-6de9566c
ip-172-31-6-240   Ready    control-plane,master   78m   v1.28.2+k3s-6de9566c
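
For anyone reproducing against a specific commit build, the install script can be pinned to a commit; a sketch (assuming a CI artifact exists for the commit, and using a placeholder for the full SHA that 6de9566 abbreviates):

curl -sfL https://get.k3s.io | INSTALL_K3S_COMMIT=<full-commit-sha> sh -s - server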

@ShylajaDevadiga
Contributor Author

Validated using k3s version v1.28.3-rc1+k3s1 (6aef26e)

Environment Details

Infrastructure
Cloud EC2 instance

Node(s) CPU architecture, OS, and Version:
Ubuntu 22.04

Cluster Configuration:
3 servers (1 etcd-only, 2 cp-only), 1 agent

Config.yaml:
etcd-only

$ cat config.yaml 
token: secret
disable-apiserver: true
disable-scheduler: true
disable-controller-manager: true
node-taint: "node-role.kubernetes.io/etcd=true:NoExecute"
node-external-ip: <ip>

cp-only

$ cat config.yaml 
token: secret
server: https://serverIP:6443
disable-etcd: true
node-taint: "node-role.kubernetes.io/control-plane=true:NoSchedule"

Steps to reproduce the issue and validate the fix

  1. Copy config.yaml to each node
  2. Install k3s (a minimal install sketch is shown below)
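
A minimal sketch of step 2, assuming the standard install script pinned to the release candidate (each node's config.yaml from step 1 is picked up from /etc/rancher/k3s/config.yaml):

# server nodes (etcd-only or cp-only config already in place)
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.3-rc1+k3s1 sh -s - server

# agent node, joining via a server address
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.3-rc1+k3s1 K3S_TOKEN=secret sh -s - agent --server https://serverIP:6443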

Validation results:
Validated that the nodes join correctly and basic functionality works:

ubuntu@ip-172-31-6-232:~$ k3s -v
k3s version v1.28.3-rc1+k3s1 (6aef26e9)
go version go1.20.10
ubuntu@ip-172-31-6-232:~$ kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
ip-172-31-10-184   Ready    control-plane,master   3m14s   v1.28.3-rc1+k3s1
ip-172-31-10-254   Ready    control-plane,master   3m15s   v1.28.3-rc1+k3s1
ip-172-31-5-50     Ready    <none>                 3m9s    v1.28.3-rc1+k3s1
ip-172-31-6-232    Ready    etcd                   3m11s   v1.28.3-rc1+k3s1
ubuntu@ip-172-31-6-232:~$ kubectl get pods -A
NAMESPACE      NAME                                      READY   STATUS      RESTARTS   AGE
kube-system    coredns-6799fbcd5-xwmgs                   1/1     Running     0          3m4s
kube-system    helm-install-traefik-crd-8zz9s            0/1     Completed   0          3m5s
kube-system    helm-install-traefik-hx8hq                0/1     Completed   1          3m5s
kube-system    local-path-provisioner-84db5d44d9-q9r5z   1/1     Running     0          3m4s
kube-system    metrics-server-67c658944b-6dqv8           1/1     Running     0          3m4s
kube-system    svclb-traefik-37ca6bdf-fs2f6              2/2     Running     0          2m54s
kube-system    svclb-traefik-37ca6bdf-mqrf6              2/2     Running     0          2m54s
kube-system    svclb-traefik-37ca6bdf-wpg2g              2/2     Running     0          2m54s
kube-system    traefik-55f65f58b-h7bgr                   1/1     Running     0          2m54s
test-ingress   test-ingress-bfq7k                        1/1     Running     0          111s
test-ingress   test-ingress-t2xkr                        1/1     Running     0          111s
ubuntu@ip-172-31-6-232:~$ 
