Update website docs
stefanprodan committed Oct 29, 2018
1 parent 53c09f4 commit 3a28768
Showing 1 changed file with 89 additions and 140 deletions.
docs/README.md

[![build](https://travis-ci.org/stefanprodan/flagger.svg?branch=master)](https://travis-ci.org/stefanprodan/flagger)
[![report](https://goreportcard.com/badge/github.com/stefanprodan/flagger)](https://goreportcard.com/report/github.com/stefanprodan/flagger)
[![codecov](https://codecov.io/gh/stefanprodan/flagger/branch/master/graph/badge.svg)](https://codecov.io/gh/stefanprodan/flagger)
[![license](https://img.shields.io/github/license/stefanprodan/flagger.svg)](https://github.com/stefanprodan/flagger/blob/master/LICENSE)
[![release](https://img.shields.io/github/release/stefanprodan/flagger/all.svg)](https://github.com/stefanprodan/flagger/releases)

Deploy Flagger in the `istio-system` namespace using Helm:

```bash
# add the Helm repository
helm repo add flagger https://flagger.app

# install or upgrade
helm upgrade -i flagger flagger/flagger \
  --namespace=istio-system
```

Flagger is compatible with Kubernetes >1.10.0 and Istio >1.0.0.
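
To verify the release you can wait for the controller rollout to complete (this assumes the chart installs a deployment named `flagger`):

```bash
kubectl -n istio-system rollout status deployment/flagger
```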

### Usage

Flagger takes a Kubernetes deployment and creates a series of objects
(Kubernetes [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/),
ClusterIP [services](https://kubernetes.io/docs/concepts/services-networking/service/) and
Istio [virtual services](https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService))
to drive the canary analysis and promotion.

![flagger-overview](https://github.com/raw/stefanprodan/flagger/master/docs/diagrams/flagger-overview.png)

Gated canary promotion stages (a rough sketch of this loop follows the list):
* scan for canary deployments
* check Istio virtual service routes are mapped to primary and canary ClusterIP services
* check primary and canary deployments status
* halt advancement if a rolling update is underway
* halt advancement if pods are unhealthy
* increase canary traffic weight percentage from 0% to 5% (step weight)
* check canary HTTP request success rate and latency
* halt advancement if any metric is under the specified threshold
* increment the failed checks counter
* check if the number of failed checks reached the threshold
* route all traffic to primary
* scale to zero the canary deployment and mark it as failed
* wait for the canary deployment to be updated (revision bump) and start over
* increase canary traffic weight by 5% (step weight) till it reaches 50% (max weight)
* halt advancement while canary request success rate is under the threshold
* halt advancement while canary request duration P99 is over the threshold
* halt advancement if the primary or canary deployment becomes unhealthy
* halt advancement while canary deployment is being scaled up/down by HPA
* promote canary to primary
* copy canary deployment spec template over primary
* wait for primary rolling update to finish
* halt advancement if pods are unhealthy
* route all traffic to primary
* scale to zero the canary deployment
* mark rollout as finished
* wait for the canary deployment to be updated (revision bump) and start over
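
The same stages can be read as a simple control loop. Below is a rough shell sketch, not Flagger's implementation; `checks_pass`, `set_routes` and `promote_canary` are stand-ins for the real metric checks, Istio route updates and promotion logic:

```bash
#!/usr/bin/env bash
# Rough sketch only: stand-ins for Flagger's metric checks, route updates and promotion.
checks_pass()    { return 0; }                      # metric checks (success rate, latency)
set_routes()     { echo "primary=$1% canary=$2%"; } # update virtual service weights
promote_canary() { echo "copy canary spec over primary, scale canary to zero"; }

step=5; max=50; threshold=10
weight=0; failed=0
while [ "$weight" -lt "$max" ]; do
  if checks_pass; then
    weight=$((weight + step))
    set_routes $((100 - weight)) "$weight"   # advance the canary
  else
    failed=$((failed + 1))                   # halt advancement and count the failure
    if [ "$failed" -ge "$threshold" ]; then
      set_routes 100 0                       # rollback: all traffic back to primary
      exit 1
    fi
  fi
done
promote_canary
```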

You can change the canary analysis _max weight_ and the _step weight_ percentage in Flagger's custom resource. For example, with a step weight of 10 and a max weight of 50, the canary receives 10%, 20%, 30%, 40% and finally 50% of the traffic before being promoted.

For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:

```yaml
apiVersion: flagger.app/v1alpha1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # hpa reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    # container port
    port: 9898
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    # Istio virtual service host names (optional)
    hosts:
    - app.istio.weavedx.com
  canaryAnalysis:
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 10
    metrics:
    - name: istio_requests_total
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: istio_request_duration_seconds_bucket
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s
```

The canary analysis uses the following PromQL queries:

_HTTP requests success rate percentage_
```sql
sum(
  rate(
    istio_requests_total{
      ...
```
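
As a rough illustration (the label matchers and rate window below are assumptions, not Flagger's exact expression), a non-5xx success rate percentage over Istio telemetry can be written like this:

```sql
100 *
  sum(rate(istio_requests_total{destination_workload="podinfo", response_code!~"5.*"}[1m]))
/
  sum(rate(istio_requests_total{destination_workload="podinfo"}[1m]))
```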

_HTTP requests milliseconds duration P99_

```sql
histogram_quantile(0.99,
  sum(
    irate(
      ...
```
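
Similarly, as an illustration only (the histogram metric and labels below are assumptions based on standard Istio telemetry, not Flagger's exact query), a P99 duration in milliseconds has this shape:

```sql
1000 * histogram_quantile(0.99,
  sum(rate(istio_request_duration_seconds_bucket{destination_workload="podinfo"}[1m])) by (le)
)
```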

### Automated canary analysis, promotions and rollbacks

![flagger-canary](https://github.com/raw/stefanprodan/flagger/master/docs/diagrams/flagger-canary-hpa.png)

Create a test namespace with Istio sidecar injection enabled:

```bash
export REPO=https://github.com/raw/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml
```
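
The namespace manifest referenced above essentially carries the Istio sidecar injection label; a minimal sketch (not the exact file contents):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
```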

Create a deployment and a horizontal pod autoscaler:

```bash
kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml
```

Create a canary promotion custom resource (replace the Istio gateway and the internet domain with your own):

```bash
kubectl apply -f ${REPO}/artifacts/canaries/canary.yaml
```

After a couple of seconds Flagger will create the canary objects:

```bash
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo
```
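
The generated virtual service is what shifts traffic during the analysis. Below is a rough sketch of its shape (the host name, gateway and port are taken from the example canary resource; the rest is assumed rather than literal Flagger output):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: podinfo
  namespace: test
spec:
  gateways:
  - public-gateway.istio-system.svc.cluster.local
  hosts:
  - app.istio.weavedx.com
  http:
  - route:
    - destination:
        host: podinfo-primary
        port:
          number: 9898
      weight: 100
    - destination:
        host: podinfo-canary
        port:
          number: 9898
      weight: 0
```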

![flagger-canary-steps](https://github.com/raw/stefanprodan/flagger/master/docs/diagrams/flagger-canary-steps.png)

Trigger a canary deployment by updating the container image:

```bash
kubectl -n test set image deployment/podinfo \
  podinfod=quay.io/stefanprodan/podinfo:1.2.1
```

Flagger detects that the deployment revision changed and starts a new rollout:

```
kubectl -n test describe canary/podinfo
Status:
Canary Revision: 19871136
Failed Checks: 0
State: finished
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 3m flagger New revision detected podinfo.test
Normal Synced 3m flagger Scaling up podinfo.test
Warning Synced 3m flagger Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Normal Synced 3m flagger Advance podinfo.test canary weight 5
Normal Synced 3m flagger Advance podinfo.test canary weight 10
Normal Synced 3m flagger Advance podinfo.test canary weight 15
Warning Synced 3m flagger Halt podinfo.test advancement request duration 2.525s > 500ms
Warning Synced 3m flagger Halt podinfo.test advancement request duration 1.567s > 500ms
Warning Synced 3m flagger Halt podinfo.test advancement request duration 823ms > 500ms
Normal Synced 2m flagger Advance podinfo.test canary weight 20
Normal Synced 2m flagger Advance podinfo.test canary weight 25
Normal Synced 1m flagger Advance podinfo.test canary weight 30
Warning Synced 1m flagger Halt podinfo.test advancement success rate 82.33% < 99%
Warning Synced 1m flagger Halt podinfo.test advancement success rate 87.22% < 99%
Warning Synced 1m flagger Halt podinfo.test advancement success rate 94.74% < 99%
Normal Synced 1m flagger Advance podinfo.test canary weight 35
Normal Synced 55s flagger Advance podinfo.test canary weight 40
Normal Synced 45s flagger Advance podinfo.test canary weight 45
Normal Synced 35s flagger Advance podinfo.test canary weight 50
Normal Synced 25s flagger Copying podinfo.test template spec to podinfo-primary.test
Warning Synced 15s flagger Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Normal Synced 5s flagger Promotion completed! Scaling down podinfo.test
```
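
You can also follow the analysis from the command line (this assumes the Canary CRD registers the plural name `canaries`):

```bash
watch kubectl -n test get canaries
```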

During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses the rollout.
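
For example, if the demo workload exposes httpbin-style endpoints (an assumption about the test app, not something defined in this guide), errors and delays can be generated with:

```bash
# generate HTTP 500 responses
watch curl -s http://app.istio.weavedx.com/status/500

# generate latency
watch curl -s http://app.istio.weavedx.com/delay/1
```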

When the number of failed checks reaches the threshold, the traffic is routed back to the primary and the canary is scaled to zero:

```
Events:
Normal Synced 2m flagger Halt podinfo.test advancement success rate 55.06% < 99%
Normal Synced 2m flagger Halt podinfo.test advancement success rate 47.00% < 99%
Normal Synced 2m flagger (combined from similar events): Halt podinfo.test advancement success rate 38.08% < 99%
Warning Synced 1m flagger Rolling back podinfo.test failed checks threshold reached 10
Warning Synced 1m flagger Canary failed! Scaling down podinfo.test
```


### Monitoring

The canary analysis progress is also recorded in Flagger's logs:

```
Advance podinfo.test canary weight 40
Halt podinfo.test advancement request duration 1.515s > 500ms
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Halt podinfo-primary.test advancement waiting for rollout to finish: 1 old replicas are pending termination
Scaling down podinfo.test
Promotion completed! podinfo.test
```
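
To tail these messages you can read the controller logs (the namespace and deployment name assume the Helm install from the beginning of this page):

```bash
kubectl -n istio-system logs deployment/flagger -f
```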

Flagger exposes Prometheus metrics that can be used to determine the canary analysis status and the destination weight values:

```bash
# Canary status
# 0 - running, 1 - successful, 2 - failed
flagger_canary_status{name="podinfo",namespace="test"} 1

# Canary traffic weight
flagger_canary_weight{workload="podinfo-primary",namespace="test"} 95
flagger_canary_weight{workload="podinfo",namespace="test"} 5
```
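
Since a status value of 2 marks a failed analysis, these metrics can back a simple alert. Below is a sketch of a Prometheus alerting rule (illustrative, not shipped with Flagger):

```yaml
groups:
- name: flagger
  rules:
  - alert: canary_rollback
    expr: flagger_canary_status > 1
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Canary analysis failed for {{ $labels.name }} in {{ $labels.namespace }}"
```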

### Roadmap
