increase ansible fact_caching_timeout #9059
Conversation
Hi @rptaylor. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks @rptaylor! /approve
/lgtm
@rptaylor 🚀
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: cristicalin, floryut, rptaylor. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
What this PR does / why we need it:
Kubespray plays can be very slow on large clusters, often taking 2-4 hours on clusters with 100-200 nodes (see the timing measurements mentioned in #8050). The fact cache timeout is only 2 hours, so the facts may have expired by the time a play finishes running. If you then have to retry and run another play, it will also be slower because the facts need to be regathered. Worse, if the ansible_default_ipv4 fact is missing, the play slows down even further, because https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubespray-defaults/tasks/fallback_ips.yml runs a task on every node in the cluster, even if you have used --limit. This happened when I ran scale.yml.
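For context, the timeout in question is Ansible's `fact_caching_timeout` setting in `ansible.cfg`. A minimal sketch of the relevant section is below; the cache path and the 24-hour value are illustrative, not what this PR necessarily sets:

```ini
[defaults]
# Only gather facts when they are not already cached
gathering = smart
# Persist gathered facts as JSON files on disk so later plays can reuse them
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
# Seconds before cached facts expire and must be regathered.
# 86400 = 24 hours; a value of 0 disables expiry entirely.
fact_caching_timeout = 86400
```

With `gathering = smart`, a retried play on a large cluster can skip the fact-gathering phase entirely as long as the cache has not expired, which is the slowdown this PR is addressing.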
Special notes for your reviewer:
Actually, I don't see why the facts should expire at all. Maybe the timeout should be 0.
Does this PR introduce a user-facing change?: