# K3s
Mehdi Hadeli edited this page Apr 6, 2023
- Quick-Start Guide
- Organizing Cluster Access Using kubeconfig Files
- Installation Configuration Options ⭐
- K3s Configuration with a YAML file instead of passing CLI arguments ⭐
- K3s Server Configuration ⭐
- K3s Agent Configuration
- Advanced Options and Configuration
- Networking - CoreDNS, Traefik Ingress controller, Klipper Load Balancer (ServiceLB) ⭐
- Helm and K3s ⭐
- Installing Helm
- Cluster Access ⭐
- `/etc/rancher/k3s/k3s.yaml` is world readable ⭐⭐
- How to migrate from Helm v2 to Helm v3
- Stopping K3s
- Restarting K3s
- Kubernetes Dashboard
- Deploy and Access the Kubernetes Dashboard
- Install And Configure Traefik Proxy with Helm
- Use the Helm Chart ⭐
- Traefik & Kubernetes ⭐
- Quick Start Traefik ⭐
- How to deploy Traefik Ingress Controller on Kubernetes using Helm ⭐
- How to view status of a service on Linux using systemctl
- Set environment variable in Windows and WSL Linux in terminal
- How to Set Environment Variables in Linux
- In Ubuntu WSL, how can you store permanent environment variables?
- Why doesn't .bashrc run automatically? ⭐
- Setting up your own K3S home cluster ⭐⭐
- WARNING: Kubernetes configuration file is group/world-readable ⭐⭐
- Configure SSL certificate with cert-manager on Kubernetes
- Installing Cert manager with Helm
- MetalLB Installation
- MetalLB Layer 2 Configuration
- Accessing network applications with WSL
- How to access host ip and port?
- Fully Automated K3S etcd High Availability Install
- Configuring Traefik 2 Ingress for Kubernetes
- The FASTEST Way to run Kubernetes at Home - k3s Ansible Automation
- High Availability Rancher on a Kubernetes Cluster
- HIGH AVAILABILITY k3s (Kubernetes) in minutes!
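Several of the links above deal with kubeconfig handling; the basic mechanics can be sketched in a few lines of shell (paths are the K3s defaults, and nothing here requires a running cluster):

```shell
# K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, and the bundled
# kubectl uses it automatically. To point any other kubectl at it:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# KUBECONFIG may also list several files, colon-separated; kubectl merges them:
export KUBECONFIG=$HOME/.kube/config:/etc/rancher/k3s/k3s.yaml

echo "$KUBECONFIG"
```

The same effect can be had per-invocation with `kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml ...` instead of exporting the variable.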
- A kubeconfig file will be written to `/etc/rancher/k3s/k3s.yaml`, and the `kubectl` installed by K3s will automatically use it.
- By default, `kubectl` looks for a file named `config` in the `$HOME/.kube` directory. You can specify other kubeconfig files by setting the `KUBECONFIG` environment variable or by passing the `--kubeconfig` flag.
- To expose Traefik, either install it with the built-in K3s [ServiceLB (Klipper Load Balancer)](https://docs.k3s.io/networking#service-load-balancer), or install the Traefik Helm chart manually and install MetalLB for load-balancing.
- When we install Traefik manually with MetalLB as the load-balancer, the IPs that MetalLB reserves are not reachable from outside of WSL, because WSL is only reachable through its own IP (shown by `wsl hostname -I`). Installing Traefik with the K3s [ServiceLB (Klipper Load Balancer)](https://docs.k3s.io/networking#service-load-balancer) solves this problem, because ServiceLB assigns the WSL IP (from `wsl hostname -I`) to the Traefik `LoadBalancer` Service, and that IP is accessible from Windows.
- K3s installs the Traefik Helm chart by default, but to activate the dashboard we should define a custom `HelmChartConfig` for the Traefik chart. Alternatively, we can install the Traefik ingress controller manually with Helm after disabling the bundled one in the installation script's YAML configuration with `disable: traefik`. By default the Traefik ingress controller sits behind a `LoadBalancer` Service: on a cloud provider the cloud's load-balancer backs that Service; otherwise we should install `metallb`, or the Service stays `Pending` (we can still use the node-port it creates, since every `LoadBalancer` Service also opens a node-port on each cluster node internally). With either approach, manual or bundled with K3s, a `LoadBalancer` Service is created, so without a load-balancer like `metallb` the `EXTERNAL-IP` column of the Traefik Service will show `Pending`.
- If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's `status.loadBalancer.ingress` address list; otherwise, the node's internal IP is used. We can see the internal and external IPs of the cluster nodes with `kubectl get nodes -o wide`; for a single-node K3s cluster the external IP is empty:

      NAME    STATUS   ROLES                       AGE    VERSION        INTERNAL-IP   EXTERNAL-IP
      mehdi   Ready    control-plane,etcd,master   2d5h   v1.25.6+k3s1   172.21.-.-    <none>
- It is possible to expose multiple `LoadBalancer` Services on the same node, as long as they use different ports.
- If you try to create a `LoadBalancer` Service that listens on port `80`, ServiceLB will try to find a host (a cluster node internal IP) in the cluster with port `80` free. If no host with that port available exists, the Service will remain `Pending`, so in that case we should use a different host port for the `LoadBalancer` Service.
- We set up cert-manager as the certificate manager.
- We set up Rancher for cluster management.
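The cert-manager setup above typically ends by creating an issuer object; a minimal sketch of a `ClusterIssuer` follows (the name, email, and ingress class are placeholders, not taken from these notes):

```yaml
# Hypothetical ACME ClusterIssuer using Let's Encrypt staging and the
# HTTP-01 challenge solved through the Traefik ingress.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@example.com          # placeholder
    privateKeySecretRef:
      name: letsencrypt-staging-key  # Secret cert-manager creates for the account key
    solvers:
      - http01:
          ingress:
            class: traefik
```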
- According to the Kubernetes documentation, it is recommended to put resources related to the same microservice or application tier into the same file[1]. This helps organize and manage resources more efficiently, and makes the configuration files easier to understand and maintain.
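A minimal sketch of that convention, grouping a Deployment and its Service for one hypothetical app in a single file:

```yaml
# app.yaml — hypothetical "web" tier: Deployment and Service kept together
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

Applying it with `kubectl apply -f app.yaml` creates or updates both resources in one step.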