Kubernetes autodiscover End-To-End tests

Motivation

Kubernetes autodiscover is a key feature of any observability solution for this orchestrator: resources change dynamically, and observer configurations have to adapt to these changes. Discovery of resources in Kubernetes poses some challenges, with several corner cases that need to be handled, involving tracking changes of state that are not always deterministic. This complicates both the implementation and its testing. Without good test coverage, and with many cases to cover, it is easy to introduce regressions even in basic use cases. This suite covers a set of use cases that are better tested against real Kubernetes implementations.

How do the tests work?

At the topmost level, the tests use a BDD framework written in Go (Godog): the expected behavior of each use case is described in a feature file using Gherkin, and the steps are implemented in Go code.
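
As a minimal sketch of what a feature file looks like (the feature and scenario names here are hypothetical, but the tag and steps follow the patterns described below):

    @filebeat
    Feature: Autodiscover pods
      Scenario: Collect logs of a deployed pod
        Given "filebeat" is running
        When "a pod" is deployed
        Then "filebeat" collects events with "kubernetes.pod.name:a-pod"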

kubectl is used to configure resources in a Kubernetes cluster. kind can be used to provide a local cluster.

The tests will follow this general high-level approach:

  1. Use kubectl to interact with a Kubernetes cluster.
  2. If there is no Kubernetes cluster configured in kubectl, deploy a new one using kind. If a cluster is created, it is also removed after the suite is executed.
  3. Execute the BDD steps representing each scenario. Each scenario is executed in a different Kubernetes namespace, which makes cleanup easier and avoids one scenario affecting the others. These namespaces are created and destroyed before and after each scenario.
  4. New scenarios can be configured by providing a Gherkin definition and templatized Kubernetes manifests.

Adding new scenarios

Scenarios defined in this suite are based on a sequence of actions and expectations defined in the feature files, and on templates of the resources to deploy in Kubernetes. Templates are stored in testdata/templates and must have the .yml.tmpl extension.

Several of the available steps can be parameterized with template names; these names are written as the name of the template file without the extension, with spaces replaced by hyphens. For example, the name "a service" refers to the template a-service.yml.tmpl.

There are steps intended to define a desired state for the resources in the template, such as the following ones:

  • "filebeat" is running deploys the template filebeat.yml.tmpl and waits for filebeat pods to be running. This step expects some pod to be labeled with k8s-app:filebeat.
  • "a service" is deployed deploys the resources in the template a-service.yml.tmpl, and continues without expecting any state of the deployed resources.
  • "a pod" is deleted deletes the resources defined in the template a-pod.yml.tmpl.
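
Combined in a scenario, these state-definition steps could look like this (a sketch reusing the hypothetical template names above):

    Given "filebeat" is running
    And "a pod" is deployed
    When "a pod" is deleted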

Any of these steps can be parameterized with an option that selects a different configuration block in the template. For example, the following step would select the configuration block marked as monitor annotations in the template:

    "a service" is deployed with "monitor annotations"

These option blocks can be defined in the template like this:

metadata:
  annotations:
{{ if option "monitor annotations" }}
    co.elastic.monitor/type: tcp
    co.elastic.monitor/hosts: "${data.host}:6379"
{{ end }}

Steps defining expectations are mostly based on checking the events generated by the deployed observers. The steps available for that look like the following ones:

  • "filebeat" collects events with "kubernetes.pod.name:a-pod" checks that the filebeat pod has collected at least one event with the field kubernetes.pod.name set to a-pod.
  • "metricbeat" does not collect events with "kubernetes.pod.name:a-pod" during "30s" expects a period of 30 seconds during which no events with the given field and value are collected.

These steps expect to find the events in the /tmp/beats-events file in pods marked with the label k8s-app.
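
Putting state and expectation steps together, a complete scenario could read like this (again a sketch using hypothetical template names):

    Scenario: Stop collecting events from a deleted pod
      Given "metricbeat" is running
      And "a pod" is deployed
      When "a pod" is deleted
      Then "metricbeat" does not collect events with "kubernetes.pod.name:a-pod" during "30s"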

There are other more specific steps. Examples for them can be found in the feature files.

Running the tests

  1. Clone this repository, say into a folder named e2e-testing.

    git clone git@github.com:elastic/e2e-testing.git
  2. Configure the versions of the tools you want to test (optional).

This is an example of the optional configuration:

export BEAT_VERSION=7.12.0 # version of Beats to use
export GITHUB_CHECK_SHA1=0123456789 # to select snapshots built by beats-ci
export KUBERNETES_VERSION="1.18.2" # version of the cluster to be passed to kind
  3. Install dependencies.

    • Install kubectl 1.18 or newer
    • Install kind 0.10.0 or newer
    • Install Go, using the language version defined in the .go-version file at the root directory. We recommend using GVM, as done in the CI, which allows you to install multiple versions of Go and configure the Go environment accordingly: eval "$(gvm 1.15.9)"
    • Godog and other test-related binaries will be installed in their supported versions when the project is first built, thanks to Go modules and the Go build system.
  4. Run the tests.

    cd e2e/_suites/kubernetes-autodiscover
    OP_LOG_LEVEL=DEBUG go test -timeout 60m -v

    Optionally, you can run only one of the feature files, selecting its scenarios by tag:

    cd e2e/_suites/kubernetes-autodiscover
    OP_LOG_LEVEL=DEBUG go test -timeout 60m -v --godog.tags='@filebeat'

    The tests will take a few minutes to run, spinning up the Kubernetes cluster if needed.

Diagnosing test failures

Problems with the environment

If a Kubernetes cluster is pre-configured in kubectl, you can use this command directly to investigate the resources deployed in the cluster by the suite. If the cluster was deployed by the suite, it will have a randomized name and will use a temporary configuration file for kubectl.

The name of the cluster can be obtained with kind get clusters; clusters created by this suite follow the pattern kind-<random uuid>.

The temporary configuration file is logged by the suite at the info level. If a cluster is created by the suite, you will see something like this:

INFO[0000] Kubernetes cluster not available, will start one using kind
INFO[0000] Using kind v0.10.0 go1.15.7 linux/amd64
INFO[0046] Kubeconfig in /tmp/test-252418601/kubeconfig

Then you can use the following command to manage the resources with kubectl:

kubectl --kubeconfig /tmp/test-252418601/kubeconfig ...

Each scenario creates its own namespace; you can find them with kubectl get ns. They follow the pattern test-<random uuid>.

Interrupting the tests with Ctrl-C will leave all resources as they were; you can use the previous instructions to investigate problems or access the logs of the deployed pods.

I cannot move on

Please open an issue here: https://github.com/elastic/e2e-testing/issues/new