Terraform code to spin up a K8s cluster using AWS EC2 instances

The Terraform code here will build a Kubernetes cluster using AWS EC2 instances and kubeadm.

1 - Infrastructure

The environment is built on AWS using Terraform. If you want to become more familiar with Terraform, it's time! : ) You will need an AWS account and Terraform installed on your computer.
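
Before starting, you can optionally confirm that the prerequisites are in place. The second command assumes you use the AWS CLI to manage your credentials; any other way of providing AWS credentials to Terraform works as well:

terraform version
aws sts get-caller-identity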

  1. Start by cloning this repository:
git clone https://github.com/regisftm/aws-ec2-k8s-tf
  2. Change to the terraform directory and run the Terraform initialization:
cd aws-ec2-k8s-tf/terraform
terraform init
  3. Edit the variables.tf file and adjust it as needed. The default values will create one t3.small EC2 instance for the control plane and one t3.medium EC2 instance for the worker node, in the ca-central-1 AWS region. Feel free to change the variable values to whatever suits your environment; I can't promise it will work well with smaller instance types. A rough sketch of these variables is shown after the command below.
vi variables.tf
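
For reference, the variables being adjusted look roughly like this sketch. The variable names here are hypothetical; check the actual variables.tf in the repository for the real ones:

# Illustrative sketch only; the variable names in the repository's variables.tf may differ.
variable "aws_region" {
  description = "AWS region to deploy the cluster into"
  default     = "ca-central-1"
}

variable "control_plane_instance_type" {
  description = "EC2 instance type for the control-plane node"
  default     = "t3.small"
}

variable "worker_instance_type" {
  description = "EC2 instance type for the worker node(s)"
  default     = "t3.medium"
}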
  4. Apply the Terraform code. This code will build the EC2 instances and install Kubernetes and other software used in this demonstration.
terraform apply --auto-approve
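
If you prefer to review the planned changes before creating anything, you can run a standard Terraform plan first (optional):

terraform plan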
  5. After a few minutes, you will see the output containing the public IPs of the created EC2 instances.
Apply complete! Resources: 12 added, 0 changed, 0 destroyed.

Outputs:

control_plane_public_ip = "3.96.49.113"
workers_public_ips = {
  "worker-01" = "3.99.20.164"
}
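
You can print these outputs again later, from the same terraform folder, using the standard Terraform output command:

terraform output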

Go to the next step to finalize the Kubernetes Cluster Configuration and configure & install the Calico CNI.

2 - Kubernetes Cluster Configuration

After installing kubeadm, kubectl, and kubelet on both nodes and initializing the control-plane node with kubeadm, the next step is to join the worker node(s) to the cluster. Follow the instructions below to complete this process:

  1. Terraform generated and saved a key pair in the terraform folder. Use this key to establish an SSH connection with the control-plane node and retrieve the kubeadm join command from the /var/log/cloud-init-output.log file. (If the token has expired, see the note after the example output below.)

    ssh -i ec2-nodes-key ubuntu@<control_plane_public_ip_address>
    grep "kubeadm.*discovery\|discovery.*kubeadm" /var/log/cloud-init-output.log

    The output will be something like:

    kubeadm join :6443 --token 9lbxla.pjsptj0m9wra8tyi --discovery-token-ca-cert-hash sha256:bfd99111c1f98dcb4ec225d2ec56fee13d2207057a2811eb67b217be8330c6ed
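
    If the join command is no longer present in the log, or the token has expired (kubeadm join tokens are valid for 24 hours by default), you can generate a fresh join command on the control-plane node with the standard kubeadm command:

    sudo kubeadm token create --print-join-command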
    
  2. Using the same key, located in the aws-ec2-k8s-tf/terraform folder, open another terminal and ssh to the worker node (if you have more than one, repeat these steps for all of them).

    ssh -i ec2-nodes-key ubuntu@<worker_node_public_ip_address>
    sudo su - root

    Paste the kubeadm join command copied from the control-plane node.

    The output will look like the following:

    root@worker-01:~# kubeadm join 172.31.44.20:6443 --token 92ap7u.vwmkiesc0cjcdphp --discovery-token-ca-cert-hash sha256:d60463cc14666f454579eca7c26b61569b90da4d75aa912a293529f49194d50a
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    root@worker-01:~#
    

    After joining the worker node to the cluster, you can close the terminal connected to the worker node.

  3. From the terminal connected to the control-plane, verify that the node successfully joined the cluster by running the following commands:

    sudo su - root
    kubectl get nodes

    The output should be:

    NAME            STATUS     ROLES           AGE     VERSION
    control-plane   Ready      control-plane   14m     v1.26.5
    worker-01       Ready      <none>          2m36s   v1.26.5
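
    Optionally, you can also confirm that the system pods (and, once you install it, the Calico CNI pods) are running:

    kubectl get pods -A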
    

Congratulations, you did it! Now go enjoy your Kubernetes cluster!


Clean up

Go to the terraform folder at aws-ec2-k8s-tf/terraform and run the command below.

terraform destroy --auto-approve
