Use the following to set and initialize your backend config variables:

```bash
terraform init -backend-config=backend.conf
```

`backend.conf`:

```hcl
bucket         = "bucket-name"
key            = "object-name"
region         = "eu-west-2"
dynamodb_table = "table-name"
```
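Since `-backend-config` supplies values for a partial backend configuration, the root module still needs a (possibly empty) `s3` backend block for `terraform init` to fill in. A minimal sketch, assuming the block lives in a file named `backend.tf` (the filename is an assumption):

```bash
# Write a partial s3 backend block; "backend.tf" is an assumed filename.
# terraform init -backend-config=backend.conf then fills in bucket, key,
# region, and dynamodb_table at init time.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {}
}
EOF
```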
- Lab-01: Deploy a Kubernetes Cluster on AWS EC2 and attach a Cilium CNI
- Lab-02: Deploy a single pod to an EKS cluster
- Lab-03: Deploy a single pod to an EKS cluster, and then scale the pod using a ReplicaSet for high availability
- Lab-04: Deploy 2 Namespaces and set a Resource Quota in them (see the sketch after this list)
- Lab-05: Deploy a ReplicaSet and expose it using a NodePort Service
- Lab-06: Deploy a ReplicaSet and expose it using a ClusterIP Service
- Lab-07: Deploy a ReplicaSet and an ExternalName Service. Access the service via its metadata name from inside each pod
- Lab-08: Deploy a ReplicaSet and a LoadBalancer Service. Use an AWS Load Balancer
- Lab-09: Manually Schedule a Pod to a Node (see the sketch after this list)
- Lab-10: Taint Nodes and Add Tolerations to Pods (see the sketch after this list)
- Lab-11: Labels, Node Selectors & Affinity, Pod Affinity and Anti-Affinity
- Lab-12: Set a CPU Resource Limit on a Namespace, Deploy a Pod that Requests More CPU, and then Change the CPU Limit
- Lab-13: Deploy a DaemonSet across 2 nodes, and scale it to 3
- Lab-14: Deploy a Static Pod on a Control Plane Node
- Lab-15: Deploy a Custom Scheduler and a Pod that Uses It
- Lab-16: Deploy a Deployment with 3 Replicas, then Install a New Version of the Image Using the Rolling Update Strategy (see the sketch after this list)
- Lab-17: Deploy a Metrics Server and View the Metrics of Resources
- Lab-18: Use Command & Argument to Execute an Instruction in the Shell of a Pod
- Lab-19: Use a ConfigMap to Dynamically Pass Configs to a Container in a Pod via its Environment Variables (see the sketch after this list)
- Lab-20: Attach a Secrets object to a Pod
- Lab-21: Configure a Multi-Container Pod and Send Each Container's Logs to a File
- Lab-22: Configure a Pod with Init Containers that Use Two Services
- Lab-23a: Configure a deployment and use a hostPath volume type
- Lab-23b: Configure a deployment and use a hostPath volume type with a persistent volume and persistent volume claim
- Lab-24: Configure a deployment and use a static local volume type with a persistent volume and persistent volume claim
- Lab-25: Configure a deployment and use a dynamic EBS volume type with a persistent volume, persistent volume claim
- Lab-26: Configure a deployment and use a static EBS volume type with a persistent volume, persistent volume claim
- Lab-27: Configure a deployment and use a dynamic NFS volume type with a persistent volume and persistent volume claim
- Lab-28: Configure a deployment and use a dynamic local volume type with a persistent volume, persistent volume claim, and a snapshot
- Lab-29: Mounting a config map to a deployment as a volume
- Lab-30: Mounting secrets to a deployment as a volume
- Lab-31: Performing OS patches on a node
- Lab-32: Upgrading/Downgrading a cluster
- Lab-33: Backup & Restore an etcd Cluster using a Volume
- Lab-34: User Authentication using Certificates and kubeconfig
- Lab-35: User Authorization using Roles and Role Bindings
- Lab-36: User Authorization using Cluster Roles and Cluster Role Bindings
- Lab-37: Service Accounts
- Lab-38: Private Image Repository
- Lab-39: Security Contexts
- Lab-40: Network Policies (see the sketch after this list)
- Create two clusters and ping between them
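For Lab-04, a minimal sketch of the namespace-plus-quota pattern; the namespace names and the hard limits are illustrative values, not the lab's exact ones:

```bash
# Create two namespaces (names are hypothetical).
kubectl create namespace team-a
kubectl create namespace team-b

# Attach a ResourceQuota to each; the hard limits are example values.
kubectl create quota quota-a -n team-a --hard=pods=5,requests.cpu=1,requests.memory=1Gi
kubectl create quota quota-b -n team-b --hard=pods=5,requests.cpu=1,requests.memory=1Gi

# Verify the quotas.
kubectl get resourcequota --all-namespaces
```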
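For Lab-09, manual scheduling means setting `spec.nodeName` directly so the kube-scheduler is bypassed; the node name below is a placeholder:

```bash
# Pin a pod to a node by name; no scheduler involvement ("worker-1" is a placeholder).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: manually-scheduled
spec:
  nodeName: worker-1
  containers:
  - name: nginx
    image: nginx
EOF
```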
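For Lab-10, the usual flow is to taint a node and then give a pod a matching toleration; the key/value pair and node name are illustrative:

```bash
# Taint the node so only tolerating pods can schedule onto it.
kubectl taint nodes worker-1 env=prod:NoSchedule

# Run a pod whose toleration matches the taint above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "env"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
EOF

# Remove the taint later (note the trailing "-").
kubectl taint nodes worker-1 env=prod:NoSchedule-
```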
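For Lab-16, a sketch of a rolling update driven entirely by kubectl; the deployment name and image tags are examples:

```bash
# Create a 3-replica Deployment (RollingUpdate is the default strategy).
kubectl create deployment web --image=nginx:1.24 --replicas=3

# Roll out a new image version and watch it progress.
kubectl set image deployment/web nginx=nginx:1.25
kubectl rollout status deployment/web

# Inspect the history, and roll back if the new version misbehaves.
kubectl rollout history deployment/web
kubectl rollout undo deployment/web
```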
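For Lab-19, a sketch of surfacing ConfigMap keys as container environment variables; the ConfigMap name and key are made up for illustration:

```bash
# Create a ConfigMap with one illustrative key.
kubectl create configmap app-config --from-literal=APP_COLOR=blue

# Inject every key of the ConfigMap as an environment variable via envFrom.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config
EOF

# Confirm the variable is visible inside the container.
kubectl exec config-demo -- env | grep APP_COLOR
```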
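For Lab-40, a common starting point is a default-deny ingress policy; note this only takes effect with a NetworkPolicy-capable CNI, such as the Cilium install from Lab-01. The namespace is a placeholder:

```bash
# Deny all ingress to every pod in the namespace: an empty podSelector
# matches all pods, and listing Ingress with no rules blocks all inbound traffic.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```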