reorganize kubeadm files, part 1 #9439

Merged · 2 commits · Jul 18, 2018
4 changes: 2 additions & 2 deletions content/en/docs/setup/independent/control-plane-flags.md
@@ -3,12 +3,12 @@ reviewers:
- sig-cluster-lifecycle
title: Customizing control plane configuration with kubeadm
content_template: templates/concept
weight: 50
weight: 40
---

{{% capture overview %}}

kubeadm’s configuration exposes the following fields that can be used to override the default flags passed to control plane components such as the APIServer, ControllerManager and Scheduler:
The kubeadm configuration exposes the following fields that can override the default flags passed to control plane components such as the APIServer, ControllerManager and Scheduler:

- `APIServerExtraArgs`
- `ControllerManagerExtraArgs`
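
For readers following along, here is a minimal sketch of how these fields can be set in a kubeadm configuration file and passed to `kubeadm init`. The API version (`kubeadm.k8s.io/v1alpha2`, current around the time of this PR) and the flag values are illustrative assumptions, not part of this change.

```bash
# Sketch: override control plane flags via a kubeadm config file (values illustrative).
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
apiServerExtraArgs:
  enable-admission-plugins: NodeRestriction
controllerManagerExtraArgs:
  node-cidr-mask-size: "25"
schedulerExtraArgs:
  address: 0.0.0.0
EOF
sudo kubeadm init --config kubeadm-config.yaml
```
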
@@ -3,6 +3,7 @@ reviewers:
- sig-cluster-lifecycle
title: Creating a single master cluster with kubeadm
content_template: templates/task
weight: 30
---

{{% capture overview %}}
1 change: 1 addition & 0 deletions content/en/docs/setup/independent/high-availability.md
@@ -3,6 +3,7 @@ reviewers:
- sig-cluster-lifecycle
title: Creating Highly Available Clusters with kubeadm
content_template: templates/task
weight: 50
---

{{% capture overview %}}
@@ -1,7 +1,7 @@
---
title: Installing kubeadm
content_template: templates/task
weight: 30
weight: 20
---

{{% capture overview %}}
@@ -3,6 +3,7 @@ reviewers:
- sig-cluster-lifecycle
title: Set up a Highly Available etcd Cluster With kubeadm
content_template: templates/task
weight: 60
---

{{% capture overview %}}
38 changes: 19 additions & 19 deletions content/en/docs/setup/independent/troubleshooting-kubeadm.md
@@ -1,29 +1,28 @@
---
title: Troubleshooting kubeadm
content_template: templates/concept
weight: 70
---

{{% capture overview %}}

As with any program, you might run into an error using or operating it. Below we have listed
common failure scenarios and have provided steps that will help you to understand and hopefully
fix the problem.
As with any program, you might run into an error installing or running kubeadm.
This page lists some common failure scenarios and provides steps that can help you understand and fix the problem.

If your problem is not listed below, please follow the following steps:

- If you think your problem is a bug with kubeadm:
- Go to [github.com/kubernetes/kubeadm](https://github.com/kubernetes/kubeadm/issues) and search for existing issues.
- If no issue exists, please [open one](https://github.com/kubernetes/kubeadm/issues/new) and follow the issue template.

- If you are unsure about how kubeadm or kubernetes works, and would like to receive
support about your question, please ask on Slack in #kubeadm, or open a question on StackOverflow. Please include
- If you are unsure about how kubeadm works, you can ask on Slack in #kubeadm, or open a question on StackOverflow. Please include
relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.

If your cluster is in an error state, you may have trouble in the configuration if you see Pod statuses like `RunContainerError`,
`CrashLoopBackOff` or `Error`. If this is the case, please read below.

{{% /capture %}}

#### `ebtables` or some similar executable not found during installation
{{% capture body %}}

## `ebtables` or some similar executable not found during installation

If you see the following warnings while running `kubeadm init`

@@ -37,7 +36,7 @@ Then you may be missing `ebtables`, `ethtool` or a similar executable on your no
- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
- For CentOS/Fedora users, run `yum install ebtables ethtool`.
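
As a quick sanity check (not part of this diff), you can confirm the executables are on the `PATH` before re-running `kubeadm init`:

```bash
# Both commands should print a path once the packages above are installed.
command -v ebtables
command -v ethtool
```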

#### kubeadm blocks waiting for control plane during installation
## kubeadm blocks waiting for control plane during installation

If you notice that `kubeadm init` hangs after printing out the following line:

@@ -66,7 +65,7 @@ This may be caused by a number of problems. The most common are:

- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.

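A sketch of the check described in the last item above; the container ID is a placeholder you would take from the `docker ps` output:

```bash
# List control plane containers and inspect one that is restarting or exited.
docker ps -a | grep kube
docker logs <container-id>
```
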
#### kubeadm blocks when removing managed containers
## kubeadm blocks when removing managed containers

The following could happen if Docker halts and does not remove any Kubernetes-managed containers:

@@ -92,7 +91,7 @@ Inspecting the logs for docker may also be useful:
journalctl -ul docker
```
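
If Docker itself is wedged, one possible recovery path (a sketch, not part of this change) is to restart the runtime and retry the reset:

```bash
# Restart the container runtime, then retry cleaning up kubeadm's state.
sudo systemctl restart docker
sudo kubeadm reset
```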

#### Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state
## Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state

Right after `kubeadm init` there should not be any pods in these states.

@@ -106,14 +105,14 @@ Right after `kubeadm init` there should not be any pods in these states.
might have to grant it more RBAC privileges or use a newer version. Please file
an issue in the Pod Network providers' issue tracker and get the issue triaged there.

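The usual triage, sketched below with a placeholder pod name, is to look at the pod's events and logs:

```bash
# Find pods stuck in RunContainerError, CrashLoopBackOff or Error.
kubectl get pods --all-namespaces
# Inspect events and container output for a failing pod.
kubectl -n kube-system describe pod <pod-name>
kubectl -n kube-system logs <pod-name>
```
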
#### `coredns` (or `kube-dns`) is stuck in the `Pending` state
## `coredns` (or `kube-dns`) is stuck in the `Pending` state

This is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin
should [install the pod network solution](/docs/concepts/cluster-administration/addons/)
of choice. You have to install a Pod Network
before CoreDNS may be deployed fully. Hence the `Pending` state before the network is set up.

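You can watch this state clear once a network add-on is applied; the label selector below assumes the CoreDNS pods carry the usual `k8s-app=kube-dns` label:

```bash
# CoreDNS stays Pending until a pod network add-on is installed.
kubectl get pods -n kube-system -l k8s-app=kube-dns
```
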
#### `HostPort` services do not work
## `HostPort` services do not work

The `HostPort` and `HostIP` functionality is available depending on your Pod Network
provider. Please contact the author of the Pod Network solution to find out whether
@@ -126,7 +125,7 @@ For more information, see the [CNI portmap documentation](https://github.com/con
If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
services](/docs/concepts/services-networking/service/#type-nodeport) or use `HostNetwork=true`.

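For example, a sketch of exposing a workload through a NodePort Service instead of a HostPort; the deployment name and port are hypothetical:

```bash
# Expose an existing deployment on a port allocated from the node port range.
kubectl expose deployment my-app --type=NodePort --port=80
kubectl get service my-app   # note the assigned node port
```
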
#### Pods are not accessible via their Service IP
## Pods are not accessible via their Service IP

- Many network add-ons do not yet enable [hairpin mode](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip)
which allows pods to access themselves via their Service IP. This is an issue related to
@@ -139,7 +138,7 @@ services](/docs/concepts/services-networking/service/#type-nodeport) or use `Hos
is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
for an example.

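One way to confirm the hairpin symptom, sketched with placeholder names and assuming the container image ships `wget`:

```bash
# From inside the pod, try to reach the pod's own Service IP.
kubectl get service my-service                      # find the ClusterIP and port
kubectl exec -it <pod-name> -- wget -qO- http://<cluster-ip>:<port>
```
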
#### TLS certificate errors
## TLS certificate errors

The following error indicates a possible certificate mismatch.

@@ -160,7 +159,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

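For context, the `chown` above is the tail of a longer recovery; a sketch of the full sequence, assuming a standard kubeadm layout with the admin kubeconfig at `/etc/kubernetes/admin.conf`:

```bash
# Replace a stale or mismatched user kubeconfig with the current admin credentials.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
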
#### Default NIC When using flannel as the pod network in Vagrant
## Default NIC When using flannel as the pod network in Vagrant

The following error might indicate that something was wrong in the pod network:

@@ -174,7 +173,7 @@ Error from server (NotFound): the server could not find the requested resource

This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen.

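One common way to do this, sketched below; the DaemonSet name depends on the flannel manifest you applied and may differ:

```bash
# Add --iface=eth1 to the flanneld container args so flannel binds the second NIC.
kubectl -n kube-system edit daemonset kube-flannel-ds
# In the editor, extend the container args, for example:
#   args:
#   - --ip-masq
#   - --kube-subnet-mgr
#   - --iface=eth1
```
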
#### Non-public IP used for containers
## Non-public IP used for containers

In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:

@@ -200,7 +199,7 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
systemctl restart kubelet
```

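After restarting the kubelet, a quick check (not part of this change) that the node now advertises the expected address:

```bash
# INTERNAL-IP should show the routable address passed to the kubelet.
kubectl get nodes -o wide
```
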
#### Services with externalTrafficPolicy=Local are not reachable
## Services with externalTrafficPolicy=Local are not reachable

On nodes where the hostname for the kubelet is overridden using the `--hostname-override` option, kube-proxy will default to treating 127.0.0.1 as the node IP, which results in rejecting connections for Services configured for `externalTrafficPolicy=Local`. This situation can be verified by checking the output of `kubectl -n kube-system logs <kube-proxy pod name>`:

@@ -239,3 +238,4 @@ EOF
)"

```
{{% /capture %}}