Add experimental-flannel-overlay flag #69
Conversation
I really like this, we'll get rid of 200 lines :) However, I don't know if it's recommended for use, or if kubenet is a better approach long-term. So I vote for going with this for now, and then switching over to kubenet (probably with CNI v0.4). What does the networking team think?
@luxas In my understanding, --experimental-flannel-overlay has no effect on its own in this setup; I tried it and did not see any change in the network setup. It is intended to be used together with --configure-cbr0=true, see https://github.com/kubernetes/kubernetes/blob/release-1.2/pkg/kubelet/kubelet.go#L2763. I assume a docker restart will be needed afterwards. I tried to run it in hyperkube:
The kubelet fails when trying to write the docker opts. BTW, the --configure-cbr0 option does work when executed in hyperkube; however, I do not think it is of much help. Overall, I think the flag is intended to be used without hyperkube. The only option for hyperkube seems to be the CNI plugin, but it fails without any error log when executed in hyperkube...
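For reference, a minimal sketch of the flag combination discussed above, run on the host rather than inside hyperkube. The API server address and the post-restart step are assumptions for illustration, not tested commands from this PR:

```shell
# Sketch: flannel overlay mode on the kubelet (Kubernetes 1.2-era flags).
# Assumes flanneld is already running on the host and the API server
# address below is correct for your setup.
kubelet \
  --experimental-flannel-overlay=true \
  --configure-cbr0=true \
  --api-servers=http://127.0.0.1:8080

# The overlay path rewrites docker's bridge/MTU options from the flannel
# subnet file, which is why a docker daemon restart is expected afterwards:
systemctl restart docker
```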
@zreigz I assume the e2e tests do not verify the network setup, and that you would get the same results without the flag.
OK, seems like it was just too good to be true :) I guess it failed silently because it didn't recognize the options from k8s, but that should be fixed now.
Currently, I have tested:
@luxas I was thinking about an alternative approach: in the master.sh script, we could pull the kubelet, CNI and manifest files out of hyperkube, create a unit file for the kubelet, and run it directly on the host. Advantages:
IMHO, better than the current script, but not as good as making hyperkube work for multi-node.
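The alternative described above could be sketched roughly as follows. The image tag, paths, and kubelet flags are assumptions for illustration; the actual master.sh would need to be adapted:

```shell
# Sketch: extract the kubelet binary from the hyperkube image and run it
# on the host under systemd, instead of running everything in a container.
docker run --rm -v /usr/local/bin:/out \
  gcr.io/google_containers/hyperkube-amd64:v1.3.0 \
  cp /hyperkube /out/kubelet

# Minimal unit file for the extracted kubelet (flags are illustrative).
cat >/etc/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes kubelet (extracted from hyperkube)
After=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --api-servers=http://127.0.0.1:8080 \
  --config=/etc/kubernetes/manifests
Restart=always

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload && systemctl start kubelet
```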
Some failing tests indicate a network problem:
@zreigz what is the setup from the test results?
Can we close this PR?
I will close it, because we have been going in the CNI direction.
Works fine for me.
Can someone elaborate on what "we have been going in CNI direction" means? Or point me to an issue/design doc about this?
We want to get rid of the restart of the main docker daemon and instead implement CNI/kubenet as the overlay network provider. If we can remove the restart, we can remove all OS dependent code, and we're getting closer to "run a kubernetes cluster anywhere where docker is above the vX.Y version". I'm not an expert on CNI/kubenet, yet :), but I think it's doable at some point quite soon.
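To make the CNI/kubenet direction concrete, here is a hedged sketch of what it looks like from the kubelet's side: instead of restarting docker with a custom bridge, a CNI network config is dropped on disk and the kubelet is pointed at it. The file contents follow the CNI flannel plugin's documented format; the paths and name are assumptions:

```shell
# Sketch: CNI flannel config instead of the docker-restart approach.
# Assumes the CNI plugin binaries (including "flannel") are installed
# and flanneld is running on the host.
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-flannel.conf <<'EOF'
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": { "isDefaultGateway": true }
}
EOF

# The kubelet then selects the plugin via:
#   --network-plugin=cni --network-plugin-dir=/etc/cni/net.d
# or, for the kubenet path mentioned above: --network-plugin=kubenet
```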
We are testing hyperkube with the flannel CNI plugin. There are some problems: when you start the newest hyperkube with the CNI plugin, the add-ons crash because of network problems. We haven't tested it before, so we don't know if it is a regression or something new.
We're already running with kubenet on by default, so the restart isn't necessary in HEAD. I (and probably no one else) haven't tested with the CNI flannel plugin yet, so if you find issues, please report them.
OK, I will create the issue.
Issue created: kubernetes/kubernetes#27603
This PR is some kind of proof of concept for the experimental-flannel-overlay flag. Because it is not well documented yet, I was experimenting with it. I've removed the docker "bootstrap" service and flannel. From what I've seen in the logs, it uses the hairpin plugin with `hairpin-veth` mode. I've executed e2e tests with different hyperkube versions: `v1.2.0`, `v1.2.4`, `v1.3.0.alpha5`.
I hope this will open a discussion about networking in the docker-multinode project.