# Development and Testing Setup

This document explains more details of the development and testing setup that is also presented in Getting Started With Controller Sharding.

## Development Cluster

The basis of this setup is a local kind cluster. This simplifies developing and testing the project: a kind cluster comes without additional cost, can be thrown away easily, and doesn't require pushing development images to a remote registry. In other words, there are no prerequisites for getting started with this project other than a Go and Docker installation.

```bash
# create a local cluster
make kind-up
# target the kind cluster
export KUBECONFIG=$PWD/hack/kind_kubeconfig.yaml

# delete the local cluster
make kind-down
```
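
Once the cluster is up and `KUBECONFIG` is exported, a quick sanity check with plain kubectl (not a make target from this repository) confirms that the cluster is reachable:

```bash
# verify that the kind cluster is up and reachable
kubectl get nodes
```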

If you want to use another cluster for development (e.g., a remote cluster), simply set the `KUBECONFIG` environment variable as usual, and all make commands will target the cluster your kubeconfig points to. Note that you might need to push images to a remote registry though.
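
For example (the kubeconfig path below is only a placeholder for your own cluster's kubeconfig):

```bash
# target an existing remote development cluster instead of the local kind cluster
export KUBECONFIG=$HOME/.kube/my-dev-cluster.yaml
# all make targets now operate on this cluster;
# see the SKAFFOLD_DEFAULT_REPO variable below for pushing dev images to your own registry
```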

## Components

The development setup reuses the deployment manifests of the main sharding components developed in this repository, located in `config`. See Install the Sharding Components.

It also includes the example shard (see Implement Sharding in Your Controller) and the webhosting-operator (see Evaluating the Sharding Mechanism).

Apart from this, the development setup also includes some external components, located in `hack/config`. These include cert-manager, ingress-nginx, kube-prometheus, kyverno, and parca. They are installed to provide a seamless development and testing experience, but they are also used for this project's Evaluation on a remote cluster in the cloud.

## Deploying, Building, Running Using Skaffold

Use `make deploy` to deploy all components with pre-built images using skaffold. You can override the images used via make variables, e.g., the `TAG` variable:

```bash
make deploy TAG=latest
```

For development, skaffold can build fresh images based on your local changes using ko, load them into your local cluster, and deploy the configuration:

```bash
make up
```

Alternatively, you can start a skaffold-based dev loop, which can automatically rebuild and redeploy images as soon as source files change:

```bash
make dev
# runs initial build and deploy...
# press any key to trigger a fresh build after changing sources
```

If you're not working with a local kind cluster, you need to set `SKAFFOLD_DEFAULT_REPO` to a registry that you can push the dev images to:

```bash
make up SKAFFOLD_DEFAULT_REPO=ghcr.io/timebertt/dev-images
```

Remove all components from the cluster:

```bash
make down
```

For any skaffold-based make command, you can set `SKAFFOLD_MODULE` to target only a specific part of the skaffold configuration:

```bash
make dev SKAFFOLD_MODULE=sharder
```

## Running on the Host Machine

Instead of running the sharder in the cluster, you can also run it on your host machine targeting your local kind cluster. In this case, not all of the components described above are deployed, only cert-manager for injecting the webhook's CA bundle. Assuming a fresh kind cluster:

```bash
make run
```

Now, create the example `ClusterRing` and run a local shard:

```bash
make run-shard
```

You should see that the shard successfully announced itself to the sharder:

```text
$ kubectl get lease -L alpha.sharding.timebertt.dev/clusterring,alpha.sharding.timebertt.dev/state
NAME             HOLDER           AGE   CLUSTERRING   STATE
shard-h9np6f8c   shard-h9np6f8c   8s    example       ready

$ kubectl get clusterring
NAME      READY   AVAILABLE   SHARDS   AGE
example   True    1           1        15s
```

Running the shard locally gives you the option to test non-graceful termination, i.e., a scenario where the shard fails to renew its lease in time. Simply press Ctrl-C twice:

```text
make run-shard
...
^C2023-11-24T15:16:50.948+0100	INFO	Shutting down gracefully in 2 seconds, send another SIGINT or SIGTERM to shutdown non-gracefully
^Cexit status 1
```
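
After the non-graceful shutdown, you can keep watching the shard leases to observe how the sharder handles the lease that is no longer renewed (the exact state transitions depend on the sharder's lease handling):

```bash
# watch the shard leases; the state label should leave "ready" once the lease expires
kubectl get lease -L alpha.sharding.timebertt.dev/clusterring,alpha.sharding.timebertt.dev/state -w
```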

## Testing the Sharding Setup

Independent of the setup you use (skaffold-based or running on the host machine), you should be able to create sharded ConfigMaps in the default namespace, as configured in the example ClusterRing. The Secrets created by the example shard controller should be assigned to the same shard as the owning ConfigMap:

```text
$ kubectl create cm foo
configmap/foo created

$ kubectl get cm,secret -L shard.alpha.sharding.timebertt.dev/clusterring-50d858e0-example
NAME            DATA   AGE    CLUSTERRING-50D858E0-EXAMPLE
configmap/foo   0      3s     shard-5fc87c9fb7-kfb2z

NAME               TYPE     DATA   AGE    CLUSTERRING-50D858E0-EXAMPLE
secret/dummy-foo   Opaque   0      3s     shard-5fc87c9fb7-kfb2z
```
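
When you're done testing, you can clean up the test object again. Assuming the dummy Secret carries an owner reference to the ConfigMap (as the "owning ConfigMap" wording above suggests), it is garbage-collected along with it; otherwise, delete it manually:

```bash
# delete the test ConfigMap; the dummy Secret should be cleaned up with it
kubectl delete cm foo
```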

## Monitoring

When using the skaffold-based setup, you also get a full monitoring setup for observing and analyzing the components' resource usage.

To access the monitoring dashboards and metrics in Grafana, simply forward its port and open http://localhost:3000/ in your browser:

```bash
kubectl -n monitoring port-forward svc/grafana 3000 &
```

The password for Grafana's admin user is written to `hack/config/monitoring/default/grafana_admin_password.secret.txt`.
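
For example, to print the password before logging in as admin (plain shell, not a make target from this repository):

```bash
# print the Grafana admin password written by the monitoring setup
cat hack/config/monitoring/default/grafana_admin_password.secret.txt
```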

Be sure to check out the controller-runtime dashboard: http://localhost:3000/d/PuCBL3zVz/controller-runtime-controllers

## Continuous Profiling

To dig deeper into the components' resource usage, you can deploy the continuous profiling setup based on Parca:

```bash
make up SKAFFOLD_MODULE=profiling SKAFFOLD_PROFILE=profiling
```

To access the profiling data in Parca, simply forward its port and open http://localhost:7070/ in your browser:

```bash
kubectl -n parca port-forward svc/parca 7070 &
```

For accessing Parca through its Ingress, use the basic auth password for the `parca` user from `hack/config/profiling/parca_password.secret.txt`.
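
For example, a quick way to check access through the Ingress from the command line (the hostname below is only a placeholder; use the host configured in your setup):

```bash
# read the generated basic auth password for the parca user
PARCA_PASSWORD=$(cat hack/config/profiling/parca_password.secret.txt)
# query Parca through its Ingress; replace parca.example.com with your actual Ingress host
curl -u "parca:$PARCA_PASSWORD" https://parca.example.com/
```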

Note that the Parca deployment doesn't implement retention for profiling data, i.e., the Parca data volume will grow indefinitely as long as Parca is running. To shut down Parca and destroy the persistent volume after analyzing the collected profiles, use the following command:

```bash
make down SKAFFOLD_MODULE=profiling SKAFFOLD_PROFILE=profiling
```