kola/kubeadm: add kubernetes 1.22 test #196
Conversation
Force-pushed from 6fc6f1b to 40310a4.
The v1.22 tests fail on the current release (2955) because kubeadm defaults to the systemd cgroup driver whereas system docker uses cgroupfs.
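The mismatch boils down to a single kubelet setting; a minimal sketch of a kubeadm config fragment that pins the driver (the value shown is illustrative, it has to match whatever Docker actually uses on the node):

```yaml
# Illustrative kubeadm/kubelet config fragment: the kubelet's cgroup
# driver must match the container runtime's. "cgroupfs" here is an
# example value, not a recommendation.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
```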
I was testing this PR with the Docker 20/CgroupV2 images and both flannel tests were failing, though the original test on main was passing for the same image. Turns out the original test registration had the same flaw as the "ugly..." bit, and the flannel tests were using cilium instead. With this fix the tests fail for flannel on current alpha:

```diff
diff --git a/kola/tests/kubeadm/kubeadm.go b/kola/tests/kubeadm/kubeadm.go
index 44b47387..5b81da19 100644
--- a/kola/tests/kubeadm/kubeadm.go
+++ b/kola/tests/kubeadm/kubeadm.go
@@ -76,12 +76,13 @@ systemd:
 func init() {
 	for _, CNI := range CNIs {
+		cni := CNI
 		register.Register(&register.Test{
-			Name:             fmt.Sprintf("kubeadm.%s.base", CNI),
+			Name:             fmt.Sprintf("kubeadm.%s.base", cni),
 			Distros:          []string{"cl"},
 			ExcludePlatforms: []string{"esx"},
 			Run: func(c cluster.TestCluster) {
-				kubeadmBaseTest(c, CNI)
+				kubeadmBaseTest(c, cni)
 			},
 		})
 	}
```

But I'm able to deploy working flannel manually, so we need to look into the setup code.
@jepio thanks for testing and your feedback
This PR actually holds the "fix" to avoid passing a map reference to the test itself. I'll try to reproduce / fix the flannel failure.
Ok, then I guess we should add a minimal release version like you did here: 6d21aef.
I don't think this is very helpful (the nodes don't come up), but here's the failure on main:
Here are the logs (I see some suspicious `avc: denied` entries):
@jepio thanks for the logs. SELinux could be (again) the issue: for the tests, SELinux is always set to enforcing mode, but we don't have a fully labelled system, so I'm not sure how it behaves yet. We could keep SELinux in permissive mode with this register flag: https://github.com/kinvolk/mantle/blob/flatcar-master/kola/register/register.go#L33.
SELinux seems to be the only difference between things done manually and things done with kola.
That would make sense then ⬆️

EDIT: it confirms the SELinux thing. I'll provide a patch as soon as possible. (see also: flatcar/Flatcar#476, flatcar-archive/coreos-overlay#1181)
@jepio alright, now that the intermission is almost finished, let's get back to the PR 😂
How do you think we should proceed? The only way I see is to split the tests into two groups, to ensure we keep running the v1.21 tests on releases < 2955 and run the v1.22 tests on releases > 2955.
I think as long as the test can work, we should make it work. We need to pass a bit more configuration to kubeadm (for master and workers).
Best would be to detect what docker is using on the node ( |
@jepio done:
this commit brings a new release of kubernetes to test, but it also fixes a `TODO:`. We are now able to provide multiple kubernetes releases to test. We just need to create a new `map[string]interface{}` holding the params of the kubernetes release we want to test.

```
$ ./bin/kola list
kubeadm.v1.21.0.calico.base
kubeadm.v1.21.0.cilium.base
kubeadm.v1.21.0.flannel.base
kubeadm.v1.22.0.calico.base
kubeadm.v1.22.0.cilium.base
kubeadm.v1.22.0.flannel.base
```

Signed-off-by: Mathieu Tortuyaux <mathieu@kinvolk.io>
we now handle the `cgroup` driver
Force-pushed from 3e3cca0 to 1cd3c2f.
commits squashed and rebased onto
Love what you did with the test - it previously mutated the global params struct for each of these tests... Much better now 🥇
@jepio still not perfect! I wish we had a proper
note for reviewers: