Consistently Seeing Reflector Watch Errors on Controller Shutdown #2723
Comments
/kind support
@jonathan-innis I am hitting the same problem.
If the controller is shutting down, I don't think it's going to cause memory leaks. From looking through the code, it just looks like spurious error logging from the reflector as all the context cancels happen, but I imagine there's a more graceful way to shut the reflector down so we don't see this.
@laihezhao Your error also looks quite different from mine. Yours appears to be caused by 500s occurring somewhere on the apiserver.
@troy0820 Got any thoughts here on how this could be improved? Ideally, we wouldn't be seeing errors for what appears to be a graceful shutdown of controller-runtime.
@jonathan-innis I am going to investigate this, but it looks like it could be a bug, so I will label the issue accordingly so we can triage it a little better.
/kind bug
I have seen this issue happen often in our controllers. It occurs when a Custom Resource Definition for a resource the controller previously accessed gets deleted (regardless of whether the CR is controlled by the controller or merely accessed). This seems like unnecessary behaviour: watching a resource that is already gone from the cluster should not be necessary, and it definitely should not produce logs that cannot otherwise be ignored. An option to disable the reflector watch would help.
I think instead of this you can just do the following:

```go
Client: client.Options{
	Cache: &client.CacheOptions{
		DisableFor: []client.Object{
			&corev1.ConfigMap{},
			&corev1.Secret{},
		},
	},
},
```

The reason why using Unstructured helps is that Unstructured is not cached by default:

```go
// CacheOptions are options for creating a cache-backed client.
type CacheOptions struct {
	// Reader is a cache-backed reader that will be used to read objects from the cache.
	// +required
	Reader Reader

	// DisableFor is a list of objects that should never be read from the cache.
	// Objects configured here always result in a live lookup.
	DisableFor []Object

	// Unstructured is a flag that indicates whether the cache-backed client should
	// read unstructured objects or lists from the cache.
	// If false, unstructured objects will always result in a live lookup.
	Unstructured bool
}
```
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
During controller shutdown, we consistently see errors that look like:
This occurs extremely consistently during shutdown, and I wouldn't expect to see something that looks like an error for it; this seems like it should come through an INFO/WARN path. From looking at the reflector code, this "error" appears to come from this line. Is there a way to ensure that the runnable shutdown doesn't fire this error every time we shut down?
As an example, these "errors" are showing up in our Karpenter E2E testing here: https://github.com/aws/karpenter-provider-aws/actions/runs/8396432349/job/22997824261