Merge pull request #6507 from scjane/patch-18
Update configure-upgrade-etcd.md
tengqm committed Nov 30, 2017
2 parents dcf7e09 + 8a76c25 commit a49d851
Showing 1 changed file with 9 additions and 9 deletions.
docs/tasks/administer-cluster/configure-upgrade-etcd.md
@@ -184,7 +184,7 @@ Before starting the restore operation, a snapshot file must be present. It can e

If the access URLs of the restored cluster are different from those of the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart the Kubernetes API server with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD_ETCD_CLUSTER` with the respective IP addresses. If a load balancer is used in front of the etcd cluster, you might need to update the load balancer instead.
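As an illustrative sketch, if the API server runs as a static pod, the flag could be swapped in place like this (the manifest path and the example addresses are assumptions, not values from this page):

```
# Hypothetical member addresses; substitute your real ones.
OLD_ETCD_CLUSTER=http://10.0.0.2:2379
NEW_ETCD_CLUSTER=http://10.0.0.10:2379,http://10.0.0.11:2379

# Assumed static-pod manifest location; the kubelet restarts the
# kube-apiserver pod when the manifest changes.
sudo sed -i "s#--etcd-servers=${OLD_ETCD_CLUSTER}#--etcd-servers=${NEW_ETCD_CLUSTER}#" \
  /etc/kubernetes/manifests/kube-apiserver.yaml
```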

If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although already-scheduled pods might continue to run, no new pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure the Kubernetes API server to fix the issue.
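For a cluster backed by etcd v3 storage, recovery would typically begin by restoring each member from a saved snapshot; a minimal sketch, assuming a snapshot file named `snapshot.db` and an illustrative data directory:

```
# Rebuild a data directory from the snapshot, then start etcd
# members pointing at the restored directory.
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --data-dir /var/lib/etcd-restored
```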

## Upgrading and rolling back etcd clusters

@@ -212,7 +212,7 @@
Note that we need to migrate both the version of etcd that we are using (from 2.2.1
to at least 3.0.x) and the version of the etcd API that Kubernetes talks to. The etcd 3.0.x
binaries support both the v2 and v3 APIs.
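For example, the same 3.0.x `etcdctl` binary can speak either API, selected by the `ETCDCTL_API` environment variable (a quick sketch against a local member on the default client port):

```
etcdctl cluster-health                  # v2 API, the default for the 3.0.x binary
ETCDCTL_API=3 etcdctl endpoint health   # the same binary speaking the v3 API
```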

This document describes how to do this migration. If you want to skip the
background and cut right to the procedure, see [Upgrade
Procedure](#upgrade-procedure).

@@ -227,7 +227,7 @@ There are requirements on how an etcd cluster upgrade can be performed. The prim
Upgrade only one minor release at a time. For example, we cannot upgrade directly from 2.1.x to 2.3.x.
Within a single minor release it is possible to upgrade and downgrade between arbitrary patch versions. Starting a cluster on
any intermediate minor release, waiting until the cluster is healthy, and then
shutting down the cluster will perform the migration. For example, to upgrade from version 2.1.x to 2.3.y,
it is enough to start etcd at version 2.2.z, wait until it is healthy, stop it, and then start
version 2.3.y.
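As a sketch, a single-member upgrade from 2.1.x through 2.2.z to 2.3.y could look like the following; the binary locations and the data directory are assumptions for illustration:

```
DATA_DIR=/var/lib/etcd

# Run the intermediate minor release against the existing data directory.
/opt/etcd-v2.2.z/etcd --data-dir "${DATA_DIR}" &
ETCD_PID=$!
until /opt/etcd-v2.2.z/etcdctl cluster-health; do sleep 1; done
kill "${ETCD_PID}"; wait "${ETCD_PID}"

# Then start the target minor release on the same data directory.
/opt/etcd-v2.3.y/etcd --data-dir "${DATA_DIR}"
```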

@@ -239,7 +239,7 @@ The etcd team has provided a [custom rollback tool](https://git.k8s.io/kubernete
but the rollback tool has these limitations:

* This custom rollback tool is not part of the etcd repo and does not receive the same
testing as the rest of etcd. We are testing it in a couple of end-to-end tests.
There is only community support here.

* The rollback can be done only from the 3.0.x version (that is using the v3 API) to the
@@ -263,13 +263,13 @@ rollback might require restarting all Kubernetes components on all nodes.
**Note**: At the time of writing, both Kubelet and KubeProxy use “resource
version” only for watching (i.e. they do not use resource versions for anything
else), and both use the reflector and/or informer frameworks for watching
(i.e. they don’t send watch requests themselves). If those frameworks
can’t renew a watch, they will start from the “current version” by doing “list + watch
from the resource version returned by the list”. That means that if the apiserver
is down for the period of the rollback, all node components will basically
restart their watches and start from “now” when the apiserver is back, and it will
be back with a new resource version. So restarting the node
components is not needed. But these assumptions may not hold forever.
{: .note}
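A compact illustration of that “list + watch” pattern, issued by hand against an apiserver assumed to be listening on the local insecure port (the address and the use of `jq` are assumptions of this sketch):

```
# List once and remember the collection's resourceVersion ...
RV=$(curl -s http://127.0.0.1:8080/api/v1/pods | jq -r '.metadata.resourceVersion')

# ... then watch for changes starting from exactly that point.
curl -sN "http://127.0.0.1:8080/api/v1/pods?watch=true&resourceVersion=${RV}"
```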

### Design
@@ -284,7 +284,7 @@ focus on them at all. We focus only on the upgrade/rollback here.
### New etcd Docker image

We decided to completely change the content of the etcd image and the way it works.
So far, the Docker image for etcd in version X has contained only the etcd and
etcdctl binaries.

Going forward, the Docker image for etcd in version X will contain multiple
@@ -337,7 +337,7 @@ script works as follows:
1. Verify that the detected version is 3.0.x with the v3 API, and the
desired version is 2.2.1 with the v2 API. We don’t support any other rollback.
1. If so, we run the custom tool provided by the etcd team to do the offline
rollback. This tool reads the v3 formatted data and writes it back to disk
in v2 format.
1. Finally, update the contents of the version file.
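A rough sketch of that control flow; the version-file path, its contents, and the rollback tool's location are assumptions for illustration, not the actual implementation:

```
#!/usr/bin/env bash
VERSION_FILE=/var/etcd/data/version.txt     # hypothetical location and format
DETECTED=$(cat "${VERSION_FILE}")           # e.g. "3.0.17/etcd3"

case "${DETECTED}" in
  3.0.*) ;;   # only 3.0.x (v3 API) is a supported rollback source
  *) echo "unsupported rollback from ${DETECTED}" >&2; exit 1 ;;
esac

# Run the etcd team's offline rollback tool (path assumed for illustration).
/usr/local/bin/rollback --data-dir /var/etcd/data

# Record the rolled-back version only after the tool succeeds.
echo "2.2.1/etcd2" > "${VERSION_FILE}"
```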

@@ -350,7 +350,7 @@ Simply modify the command line in the etcd manifest to:

Starting in Kubernetes version 1.6, this has been done in the manifests for new
Google Compute Engine clusters. You should also specify these environment
variables. In particular, you must keep `STORAGE_MEDIA_TYPE` set to
`application/json` if you wish to preserve the option to roll back.
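For instance, when bringing up such a cluster with the GCE scripts, that could mean exporting the variable before starting the cluster (only the variable name and value come from the text above; the mechanism is an assumption):

```
# Keep etcd storage in a v2-readable format so rollback remains possible.
export STORAGE_MEDIA_TYPE=application/json
```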

