Limitador cluster EnvoyFilter controller #243

Merged: 5 commits, Sep 18, 2023
171 changes: 171 additions & 0 deletions controllers/limitador_cluster_envoyfilter_controller.go
@@ -0,0 +1,171 @@
/*
Copyright 2021 Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package controllers

import (
"context"
"encoding/json"
"fmt"

"github.com/go-logr/logr"
limitadorv1alpha1 "github.com/kuadrant/limitador-operator/api/v1alpha1"
istioapinetworkingv1alpha3 "istio.io/api/networking/v1alpha3"
istioclientnetworkingv1alpha3 "istio.io/client-go/pkg/apis/networking/v1alpha3"
apierrors "k8s.io/apimachinery/pkg/api/errors"
meta "k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/predicate"
gatewayapiv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"

"github.com/kuadrant/kuadrant-operator/pkg/common"
kuadrantistioutils "github.com/kuadrant/kuadrant-operator/pkg/istio"
"github.com/kuadrant/kuadrant-operator/pkg/reconcilers"
)

// LimitadorClusterEnvoyFilterReconciler reconciles an EnvoyFilter object with limitador's cluster
type LimitadorClusterEnvoyFilterReconciler struct {
*reconcilers.BaseReconciler
}

//+kubebuilder:rbac:groups=networking.istio.io,resources=envoyfilters,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=gateway.networking.k8s.io,resources=gateways,verbs=get

// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.10.0/pkg/reconcile
func (r *LimitadorClusterEnvoyFilterReconciler) Reconcile(eventCtx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := r.Logger().WithValues("Gateway", req.NamespacedName)
logger.Info("Reconciling EnvoyFilter")
ctx := logr.NewContext(eventCtx, logger)

gw := &gatewayapiv1beta1.Gateway{}
if err := r.Client().Get(ctx, req.NamespacedName, gw); err != nil {
if apierrors.IsNotFound(err) {
logger.Info("no gateway found")
return ctrl.Result{}, nil
}
logger.Error(err, "failed to get gateway")
return ctrl.Result{}, err

Check warning on line 64 in controllers/limitador_cluster_envoyfilter_controller.go

View check run for this annotation

Codecov / codecov/patch

controllers/limitador_cluster_envoyfilter_controller.go#L63-L64

Added lines #L63 - L64 were not covered by tests
}

if logger.V(1).Enabled() {
jsonData, err := json.MarshalIndent(gw, "", " ")
Review thread on this line:

Contributor: Why MarshalIndent? I wonder whether it's suitable when using structured logging, i.e. production log mode.

Author: AFAIK, from the go-logr interface I can only check the logger level, nothing about the format (defined by development or production mode). This MarshalIndent call is not entirely correct, as the logger could be configured with structured format and debug level. By default (no LOG_* env vars), the logger is configured with info level and production mode; when running with make run, it is debug and development. Thus, no issue with this so far.

Any better approach you are considering? Not for this PR, but given that (almost) all controllers have this code at the beginning of the Reconcile method, it would be nice to have all of them fixed.

Contributor: I didn't mean checking here what the log mode setting is. I just assumed that log level and log mode were two separate things, like in Authorino. But it looks like log mode and log level are somehow coupled together. IMO, it does not have to be like that.

In Authorino, all 4 combinations are allowed, i.e.:

  • level: info, mode: production
  • level: debug, mode: production
  • level: info, mode: development
  • level: debug, mode: development

(Maybe the naming is a bit unfortunate, because production and development actually mean "structured" and "human-readable", respectively.)

Because it felt weird printing indented JSON when log mode is set to production, I'd probably have used json.Marshal instead. Not a big deal though.

Author: This project's log system is exactly the same, with those 4 combinations allowed. You are right; however, if I change to Marshal, the output is no longer human-readable in debug + development mode (make run), so I'd need to indent it anyway. Still looking for an alternative that keeps a good dev experience.

if err != nil {
return ctrl.Result{}, err
}

logger.V(1).Info(string(jsonData))
}

err := r.reconcileRateLimitingClusterEnvoyFilter(ctx, gw)

if err != nil {
return ctrl.Result{}, err
}

logger.Info("EnvoyFilter reconciled successfully")
return ctrl.Result{}, nil
}

func (r *LimitadorClusterEnvoyFilterReconciler) reconcileRateLimitingClusterEnvoyFilter(ctx context.Context, gw *gatewayapiv1beta1.Gateway) error {
desired, err := r.desiredRateLimitingClusterEnvoyFilter(ctx, gw)
if err != nil {
return err
}

err = r.ReconcileResource(ctx, &istioclientnetworkingv1alpha3.EnvoyFilter{}, desired, kuadrantistioutils.AlwaysUpdateEnvoyFilter)
if err != nil {
return err
}

return nil
}

func (r *LimitadorClusterEnvoyFilterReconciler) desiredRateLimitingClusterEnvoyFilter(ctx context.Context, gw *gatewayapiv1beta1.Gateway) (*istioclientnetworkingv1alpha3.EnvoyFilter, error) {
logger, err := logr.FromContext(ctx)
if err != nil {
return nil, err
}

ef := &istioclientnetworkingv1alpha3.EnvoyFilter{
TypeMeta: metav1.TypeMeta{
Kind: "EnvoyFilter",
APIVersion: "networking.istio.io/v1alpha3",
},
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("kuadrant-ratelimiting-cluster-%s", gw.Name),
Namespace: gw.Namespace,
},
Spec: istioapinetworkingv1alpha3.EnvoyFilter{
WorkloadSelector: &istioapinetworkingv1alpha3.WorkloadSelector{
Labels: common.IstioWorkloadSelectorFromGateway(ctx, r.Client(), gw).MatchLabels,
},
ConfigPatches: nil,
},
}

gateway := common.GatewayWrapper{Gateway: gw, PolicyRefsConfig: &common.KuadrantRateLimitPolicyRefsConfig{}}
rlpRefs := gateway.PolicyRefs()
logger.V(1).Info("desiredRateLimitingClusterEnvoyFilter", "rlpRefs", rlpRefs)

if len(rlpRefs) < 1 {
common.TagObjectToDelete(ef)
return ef, nil
}

kuadrantNamespace, err := common.GetKuadrantNamespace(gw)
if err != nil {
return nil, err
}

limitadorKey := client.ObjectKey{Name: common.LimitadorName, Namespace: kuadrantNamespace}
limitador := &limitadorv1alpha1.Limitador{}
err = r.Client().Get(ctx, limitadorKey, limitador)
logger.V(1).Info("desiredRateLimitingClusterEnvoyFilter", "get limitador", limitadorKey, "err", err)
if err != nil {
return nil, err
}

if !meta.IsStatusConditionTrue(limitador.Status.Conditions, "Ready") {
return nil, fmt.Errorf("limitador Status not ready")
}

configPatches, err := kuadrantistioutils.LimitadorClusterPatch(limitador.Status.Service.Host, int(limitador.Status.Service.Ports.GRPC))
if err != nil {
return nil, err
}

ef.Spec.ConfigPatches = configPatches

// controller reference
if err := r.SetOwnerReference(gw, ef); err != nil {
return nil, err
}

return ef, nil
}

// SetupWithManager sets up the controller with the Manager.
func (r *LimitadorClusterEnvoyFilterReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
// The Limitador cluster EnvoyFilter controller only cares about
// the kuadrant.io/ratelimitpolicies annotation, which holds
// references to RLPs
For(&gatewayapiv1beta1.Gateway{}, builder.WithPredicates(predicate.AnnotationChangedPredicate{})).
Owns(&istioclientnetworkingv1alpha3.EnvoyFilter{}).
Complete(r)
}
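The builder above watches Gateways but filters update events through AnnotationChangedPredicate, so the reconciler only wakes up when an annotation (such as the RLP references) changes. The core of that predicate is roughly a deep comparison of the old and new annotation maps; the sketch below is a standalone illustration of that check, not controller-runtime's actual code.

```go
package main

import (
	"fmt"
	"reflect"
)

// annotationsChanged reports whether an object's annotations map differs
// between the old and new versions of the object, which is essentially what
// AnnotationChangedPredicate tests before letting an update event through.
func annotationsChanged(oldAnn, newAnn map[string]string) bool {
	return !reflect.DeepEqual(oldAnn, newAnn)
}

func main() {
	oldAnn := map[string]string{"kuadrant.io/ratelimitpolicies": `[]`}
	newAnn := map[string]string{"kuadrant.io/ratelimitpolicies": `[{"name":"toystore-rlp"}]`}
	// A change in the RLP references annotation triggers reconciliation.
	fmt.Println(annotationsChanged(oldAnn, newAnn))
}
```

Spec-only Gateway updates therefore never reach this reconciler, which keeps it cheap.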
133 changes: 133 additions & 0 deletions controllers/limitador_cluster_envoyfilter_controller_test.go
@@ -0,0 +1,133 @@
//go:build integration

package controllers

import (
"context"
"fmt"
"time"

. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
istioclientnetworkingv1alpha3 "istio.io/client-go/pkg/apis/networking/v1alpha3"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
logf "sigs.k8s.io/controller-runtime/pkg/log"
gatewayapiv1alpha2 "sigs.k8s.io/gateway-api/apis/v1alpha2"
gatewayapiv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"

kuadrantv1beta2 "github.com/kuadrant/kuadrant-operator/api/v1beta2"
"github.com/kuadrant/kuadrant-operator/pkg/common"
limitadorv1alpha1 "github.com/kuadrant/limitador-operator/api/v1alpha1"
)

var _ = Describe("Limitador Cluster EnvoyFilter controller", func() {
var (
testNamespace string
gwName = "toystore-gw"
rlpName = "toystore-rlp"
efName = fmt.Sprintf("kuadrant-ratelimiting-cluster-%s", gwName)
)

beforeEachCallback := func() {
CreateNamespace(&testNamespace)
gateway := testBuildBasicGateway(gwName, testNamespace)
err := k8sClient.Create(context.Background(), gateway)
Expect(err).ToNot(HaveOccurred())

Eventually(func() bool {
existingGateway := &gatewayapiv1beta1.Gateway{}
err := k8sClient.Get(context.Background(), client.ObjectKeyFromObject(gateway), existingGateway)
if err != nil {
logf.Log.V(1).Info("[WARN] Creating gateway failed", "error", err)
return false
}

if meta.IsStatusConditionFalse(existingGateway.Status.Conditions, common.GatewayProgrammedConditionType) {
logf.Log.V(1).Info("[WARN] Gateway not ready")
return false
}

return true
}, 15*time.Second, 5*time.Second).Should(BeTrue())

ApplyKuadrantCR(testNamespace)

// Check Limitador Status is Ready
Eventually(func() bool {
limitador := &limitadorv1alpha1.Limitador{}
err := k8sClient.Get(context.Background(), client.ObjectKey{Name: common.LimitadorName, Namespace: testNamespace}, limitador)
if err != nil {
return false
}
if !meta.IsStatusConditionTrue(limitador.Status.Conditions, "Ready") {
return false
}
return true
}, time.Minute, 5*time.Second).Should(BeTrue())
}

BeforeEach(beforeEachCallback)
AfterEach(DeleteNamespaceCallback(&testNamespace))

Context("RLP targeting Gateway", func() {
It("EnvoyFilter created when RLP exists and deleted when RLP is deleted", func() {
// create ratelimitpolicy
rlp := &kuadrantv1beta2.RateLimitPolicy{
TypeMeta: metav1.TypeMeta{
Kind: "RateLimitPolicy",
APIVersion: kuadrantv1beta2.GroupVersion.String(),
},
ObjectMeta: metav1.ObjectMeta{
Name: rlpName,
Namespace: testNamespace,
},
Spec: kuadrantv1beta2.RateLimitPolicySpec{
TargetRef: gatewayapiv1alpha2.PolicyTargetReference{
Group: gatewayapiv1beta1.Group("gateway.networking.k8s.io"),
Kind: "Gateway",
Name: gatewayapiv1beta1.ObjectName(gwName),
},
Limits: map[string]kuadrantv1beta2.Limit{
"l1": {
Rates: []kuadrantv1beta2.Rate{
{
Limit: 1, Duration: 3, Unit: kuadrantv1beta2.TimeUnit("minute"),
},
},
},
},
},
}
err := k8sClient.Create(context.Background(), rlp)
Expect(err).ToNot(HaveOccurred())
// Check RLP status is available
rlpKey := client.ObjectKey{Name: rlpName, Namespace: testNamespace}
Eventually(testRLPIsAvailable(rlpKey), time.Minute, 5*time.Second).Should(BeTrue())

// Check envoy filter
Eventually(func() bool {
existingEF := &istioclientnetworkingv1alpha3.EnvoyFilter{}
efKey := client.ObjectKey{Name: efName, Namespace: testNamespace}
err = k8sClient.Get(context.Background(), efKey, existingEF)
if err != nil {
return false
}
return true
}, 15*time.Second, 5*time.Second).Should(BeTrue())

err = k8sClient.Delete(context.Background(), rlp)
Expect(err).ToNot(HaveOccurred())

// Check envoy filter is gone
Eventually(func() bool {
existingEF := &istioclientnetworkingv1alpha3.EnvoyFilter{}
efKey := client.ObjectKey{Name: efName, Namespace: testNamespace}
err = k8sClient.Get(context.Background(), efKey, existingEF)
return apierrors.IsNotFound(err)
}, 15*time.Second, 5*time.Second).Should(BeTrue())
})
})
})