
deployment without central controller #487

Closed
pohly opened this issue Sep 28, 2020 · 2 comments · Fixed by #524
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments


pohly commented Sep 28, 2020

For CSI drivers that manage local storage, it is difficult to provide dynamic provisioning support because the central driver instance needs some way of communicating with the driver instance on each node. It would be nice if:

  • external-provisioner could be deployed on each node together with the CSI driver, and
  • these external-provisioner instances collaborated with each other on provisioning PVCs, with support for both late binding and immediate binding.
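As a rough illustration, the per-node deployment described above could look like the DaemonSet fragment below. The `--node-deployment` flag and `NODE_NAME` environment variable correspond to what the eventual implementation (#524) provides; the driver image, app labels, and socket paths are placeholders, not a definitive manifest:

```yaml
# Sketch only: external-provisioner runs as a sidecar in the CSI driver's
# per-node DaemonSet instead of in a central Deployment/StatefulSet.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-driver-node
spec:
  selector:
    matchLabels:
      app: csi-driver-node
  template:
    metadata:
      labels:
        app: csi-driver-node
    spec:
      containers:
        - name: csi-driver
          image: example.com/my-csi-driver   # hypothetical driver image
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: external-provisioner
          image: k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
          args:
            - --csi-address=/csi/csi.sock
            - --node-deployment=true   # enable distributed provisioning (#524)
          env:
            - name: NODE_NAME          # tells the instance which node it runs on
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/my-csi-driver
            type: DirectoryOrCreate
```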

pohly commented Sep 28, 2020

/kind feature

A previous prototype for late binding support was posted in #367.

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 28, 2020

pohly commented Sep 28, 2020

Immediate binding is harder. Two approaches are possible:

  • leader election, or
  • letting all instances try to set the "selected node" annotation; the instance that wins owns the PVC.

The second approach is very simple to implement, and I have a prototype: it simply builds on top of the work for late binding.

The leader election approach is harder to implement because the leader election helper code needs to be extended.

Performance characteristics will be different; it's not obvious whether either of the two approaches is consistently better than the other.
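The race-to-annotate approach can be sketched without the real client-go machinery. In the actual provisioner, "first writer wins" would come from the API server rejecting a conflicting update via the object's resourceVersion; in this self-contained sketch a mutex stands in for that, and the type and function names are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// selectedNodeAnn mimics the "selected node" annotation that a winning
// provisioner instance would set on the PVC.
const selectedNodeAnn = "volume.kubernetes.io/selected-node"

// pvc models only the part of a PersistentVolumeClaim that matters here:
// its annotations, with a mutex standing in for the API server's
// optimistic-concurrency check.
type pvc struct {
	mu          sync.Mutex
	annotations map[string]string
}

// claimOwnership tries to record nodeName as the owner of the PVC.
// Exactly one caller succeeds; later callers see the annotation already
// set and back off, mirroring "let all instances try, the winner owns it".
func (p *pvc) claimOwnership(nodeName string) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	if _, taken := p.annotations[selectedNodeAnn]; taken {
		return false
	}
	p.annotations[selectedNodeAnn] = nodeName
	return true
}

func main() {
	claim := &pvc{annotations: map[string]string{}}
	var wg sync.WaitGroup
	winners := make(chan string, 3)
	for _, node := range []string{"node-1", "node-2", "node-3"} {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			if claim.claimOwnership(n) {
				winners <- n // only the winning instance reaches this
			}
		}(node)
	}
	wg.Wait()
	close(winners)
	count := 0
	for range winners {
		count++
	}
	fmt.Println("winners:", count) // prints "winners: 1"
}
```

Against a real API server, a losing instance would simply get a conflict (or see the annotation already present) on its update and skip the PVC, with no coordination channel between instances needed.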

pohly added a commit to pohly/pmem-CSI that referenced this issue Oct 30, 2020
With driver mode "both", each node instance of PMEM-CSI implements
both the node and the controller CSI service. Volume provisioning then
can be done by deploying external-provisioner on each node and
configuring it to do distributed
provisioning (kubernetes-csi/external-provisioner#487).
This was referenced Nov 3, 2020
pohly added a commit to pohly/pmem-CSI that referenced this issue Nov 10, 2020
pohly added a commit to pohly/pmem-CSI that referenced this issue Nov 19, 2020
pohly added a commit to pohly/pmem-CSI that referenced this issue Dec 10, 2020
pohly added a commit to pohly/pmem-CSI that referenced this issue Dec 17, 2020