
Add control of the configuration refresh interval #1800

Merged
aledbf merged 1 commit into kubernetes:master from maxlaverse:configurable_refresh_interval on Dec 9, 2017

Conversation

maxlaverse
Contributor

What this PR does / why we need it:
Make the syncRateLimiter configurable, and therefore how often the configuration may be reloaded. This value is tightly coupled with the lifecycle of all the upstream pods and should therefore be configurable.
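
A minimal sketch of the idea, assuming the rate limiter is client-go's token-bucket implementation and that the new knob is exposed as a controller flag (the flag name, the 0.3 default, and the burst of 1 below are illustrative, not necessarily the exact values in this PR):

```go
package main

import (
	"flag"

	"k8s.io/client-go/util/flowcontrol"
)

// Hypothetical flag; 0.3 syncs per second is the hard-coded value discussed
// in this conversation.
var syncRateLimit = flag.Float64("sync-rate-limit", 0.3,
	"maximum number of configuration syncs per second (an upper bound, not a schedule)")

func main() {
	flag.Parse()

	// Build the syncRateLimiter from the flag instead of a constant.
	// A burst of 1 is assumed here for illustration.
	limiter := flowcontrol.NewTokenBucketRateLimiter(float32(*syncRateLimit), 1)
	_ = limiter // in the real controller this would be handed to the sync loop
}
```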

@k8s-reviewable

This change is Reviewable

@k8s-ci-robot added the cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) and size/S (denotes a PR that changes 10-29 lines, ignoring generated files) labels on Dec 5, 2017
@coveralls

Coverage Status

Coverage increased (+0.1%) to 36.7% when pulling 43dc5b9e9acc42339bad9487295869ec99899a58 on maxlaverse:configurable_refresh_interval into 6816630 on kubernetes:master.

@aledbf
Member

aledbf commented Dec 5, 2017

@maxlaverse just in case: this only limits the rate at which the sync function is executed, not how often it runs.
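
To make that distinction concrete, here is a small, self-contained sketch using client-go's token-bucket limiter (the event list and the 0.3 value are illustrative): the limiter only enforces a minimum spacing between syncs that are triggered by changes anyway; it does not schedule syncs on its own.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// At most 0.3 syncs per second, i.e. consecutive syncs at least ~3.3s apart.
	limiter := flowcontrol.NewTokenBucketRateLimiter(0.3, 1)

	// Simulated change events (endpoint updates, configmap changes, ...).
	events := []string{"update-1", "update-2", "update-3"}

	start := time.Now()
	for _, ev := range events {
		limiter.Accept() // blocks only as long as needed to respect the cap
		fmt.Printf("%4.1fs: sync triggered by %s\n", time.Since(start).Seconds(), ev)
	}
	// Prints syncs roughly 3.3s apart. With no pending events nothing runs
	// at all: the value is an upper bound on frequency, not a refresh period.
}
```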

@maxlaverse
Contributor Author

maxlaverse commented Dec 5, 2017

Sorry @aledbf, my description is confusing. But the PR still achieves what I'm looking for.

This is the upper limit for the frequency of syncs. With the current value of 0.3 it means dead backends are removed between 0s and 3s after they are actually declared dead. Therefore we told all our teams to add a preStop hook of at least 4 seconds, so that there is a high probability of the dead backend having been removed from the Nginx Ingress, and no longer receiving any traffic, before it is actually stopped (an example stanza is sketched below).

This value has changed twice in the past, iirc. We'd like it to be configurable because it is tightly coupled to all the preStop delays we have: if this value were ever changed again, we would not be able to upgrade the Nginx Ingress Controller without updating all the preStop hooks or running a fork.
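
For illustration, this is the kind of container stanza the teams were asked to add (a sketch based on the comment above; the container name, image, and the 4-second sleep are examples that match the guidance for the current 0.3 rate, not something introduced by this PR):

```yaml
# Delay SIGTERM so the controller has time to remove the terminating pod
# from the nginx upstreams before the process is stopped.
containers:
  - name: app            # illustrative name
    image: example/app   # illustrative image
    lifecycle:
      preStop:
        exec:
          command: ["sleep", "4"]
```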

@coveralls

Coverage Status

Coverage increased (+0.04%) to 36.648% when pulling 0561ea8 on maxlaverse:configurable_refresh_interval into 6816630 on kubernetes:master.

@maxlaverse
Contributor Author

Is there a command to re-trigger the tests?

@coveralls

Coverage Status

Coverage increased (+0.009%) to 36.614% when pulling 0561ea8 on maxlaverse:configurable_refresh_interval into 6816630 on kubernetes:master.

@aledbf
Member

aledbf commented Dec 5, 2017

This is the upper limit for the frequency of syncs.

Correct

With the current value of 0.3 it means dead backends are removed between 0s and 3s after they are actually declared dead.

Please understand that this flag could end up affecting the uptime of nginx. Changing it to something like 0.1 could mean reloading nginx once per second, and that means you could end up with lots of worker processes (in the case of pending requests).

@maxlaverse
Contributor Author

Yes thanks, I'm aware of that :)

@aledbf
Member

aledbf commented Dec 9, 2017

/lgtm

@k8s-ci-robot added the lgtm ("looks good to me", indicates that a PR is ready to be merged) label on Dec 9, 2017
@aledbf
Member

aledbf commented Dec 9, 2017

@maxlaverse thanks!

@aledbf merged commit e02697e into kubernetes:master on Dec 9, 2017
@maxlaverse deleted the configurable_refresh_interval branch on December 9, 2017 at 18:22