Understanding the Kubernetes Architecture in a simple way.

Kubernetes (K8s) is the de facto standard for container orchestration, making it easier to deploy, scale, and manage applications in a highly dynamic environment. But its architecture can be overwhelming at first. This guide will simplify the core components and interactions within Kubernetes, helping you grasp its inner workings with relatable examples from a DevOps perspective.


The Big Picture: What is Kubernetes?

Imagine Kubernetes as the brain of a modern software deployment system. It's designed to manage containers—lightweight, portable software environments—and ensure your applications run reliably. The architecture has two main parts:

  1. Control Plane: Think of it as the master controller managing the overall system.

  2. Worker Nodes: These are the actual workhorses where your applications run.

Let’s break these down.


1. Control Plane: The Brain of Kubernetes

The control plane ensures the desired state of the cluster is maintained. For example, if you want three replicas of your application running, the control plane ensures this remains true, even if a node goes down.

a) API Server

The API Server is the gateway to Kubernetes. It’s how all components—and even you—interact with the cluster. Whether you’re deploying a new application or scaling up, the API Server is where those requests are processed.

  • Analogy: Think of it as the receptionist of an office who directs your queries to the right department.

  • Real-life DevOps scenario: When you use kubectl apply -f deployment.yaml, it’s the API Server that processes this command and updates the cluster.
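As a concrete sketch, the deployment.yaml in that command might look like the following (the name my-app and the nginx image are placeholders, not from a real project):

```yaml
# deployment.yaml — a minimal Deployment manifest (illustrative names)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # the desired state the control plane maintains
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25    # any container image
```

Running kubectl apply -f deployment.yaml sends this spec to the API Server, which stores it and lets the controllers and Scheduler make it real.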

b) etcd (Distributed Key-Value Store)

This is the cluster’s memory. It stores all cluster data—like what pods exist and their states—in a distributed and highly available manner.

  • Analogy: Think of etcd as a super-reliable notebook for your Kubernetes cluster.

  • Real-life DevOps scenario: When you run kubectl get pods, the API Server retrieves that state from etcd and returns it to you.

c) Scheduler

The Scheduler’s job is to decide which node will run a new pod. It considers resource availability (CPU, memory) and other constraints to make this decision.

  • Analogy: Imagine you’re assigning tasks to employees based on their skillset and workload.

  • Real-life DevOps scenario: Deploying a resource-intensive AI model? The Scheduler ensures it runs on a node with enough memory.
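For example, a pod spec can declare resource requests that the Scheduler uses when picking a node. A hedged sketch (the name, image, and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-server            # hypothetical name
spec:
  containers:
    - name: inference
      image: registry.example.com/model:latest   # placeholder image
      resources:
        requests:               # the Scheduler only places this pod on a
          cpu: "2"              # node with at least this much free capacity
          memory: 8Gi
        limits:
          memory: 8Gi           # the kubelet enforces this ceiling at runtime
```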

d) Controller Manager

Controllers maintain the desired state of the system. Key controllers include:

  • Node Controller: Ensures nodes are healthy.

  • ReplicaSet Controller: Makes sure the specified number of pod replicas is running.

  • Deployment Controller: Manages deployments.

  • Analogy: Think of controllers as robots that continuously fix and adjust things to ensure the system works as planned.

  • Real-life DevOps scenario: If a node crashes, the Node Controller marks it unhealthy, the ReplicaSet Controller creates replacement pods, and the Scheduler places them on healthy nodes.

e) Cloud Controller Manager

This component integrates Kubernetes with cloud-specific features, such as load balancers and storage.

  • Analogy: Think of it as a translator between Kubernetes and your cloud provider.

2. Worker Nodes: The Workhorses of Kubernetes

Worker nodes are where your applications (pods) run. Each worker node has these key components:

a) kubelet

The kubelet ensures that containers are running in pods. It communicates with the API Server to receive instructions.

  • Analogy: Think of kubelet as the factory worker ensuring machines (containers) are running smoothly.

  • Real-life DevOps scenario: If a container in a pod crashes, the kubelet restarts it according to the pod’s restart policy and reports the new status back to the API Server.
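The kubelet’s restart behavior can be tuned with health probes. A sketch of a pod spec fragment (the /healthz path is an assumed health endpoint, not a Kubernetes default):

```yaml
spec:
  restartPolicy: Always          # the kubelet restarts failed containers
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:             # the kubelet calls this endpoint periodically;
        httpGet:                 # repeated failures trigger a container restart
          path: /healthz
          port: 80
        periodSeconds: 10
```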

b) kube-proxy

kube-proxy handles networking on each node. It maintains the network rules that route traffic sent to a Service to the correct pods, whether the traffic comes from inside the cluster or outside.

  • Analogy: Think of kube-proxy as a traffic cop directing network traffic.

  • Real-life DevOps scenario: kube-proxy enables seamless communication between microservices in your application.

c) Container Runtime

This is the engine that runs your containers. Common options include containerd and CRI-O (Docker Engine support via dockershim was removed in Kubernetes 1.24).

  • Analogy: If Kubernetes is the factory, the container runtime is the assembly line.

  • Real-life DevOps scenario: The container runtime pulls container images from a registry (e.g., Docker Hub) and runs them.

Advanced Part


Fig: K8s detailed architecture.

3. Networking: Connecting Everything

Networking is crucial for communication between pods and external users.

a) Pods

Pods are the smallest deployable units in Kubernetes. A pod usually hosts a single container, though it can run several tightly coupled containers that share networking and storage.

b) Service Objects

Services expose pods to the network. Key types include:

  • ClusterIP: The default type; gives the Service a virtual IP reachable only from inside the cluster.

  • NodePort: Exposes the Service on a static port on every node, making it reachable from outside the cluster.

  • LoadBalancer: Provisions an external load balancer from your cloud provider and routes its traffic to the Service’s pods.
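A minimal Service sketch tying this together (the name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP        # switch to NodePort or LoadBalancer to widen exposure
  selector:
    app: my-app          # must match the labels on the pods to route to
  ports:
    - port: 80           # port the Service listens on
      targetPort: 8080   # port the container actually serves on
```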

c) Ingress

Ingress manages HTTP and HTTPS traffic, acting as the entry point for external requests.

  • Real-life DevOps scenario: Use Ingress to route traffic to different microservices based on the URL.
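A sketch of such path-based routing (the hostname and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: shop.example.com      # placeholder domain
      http:
        paths:
          - path: /cart           # /cart traffic goes to the cart microservice
            pathType: Prefix
            backend:
              service:
                name: cart-svc    # hypothetical Service names
                port:
                  number: 80
          - path: /catalog        # /catalog traffic goes to the catalog microservice
            pathType: Prefix
            backend:
              service:
                name: catalog-svc
                port:
                  number: 80
```

Note that an Ingress only takes effect if an Ingress controller (e.g., NGINX Ingress Controller) is installed in the cluster.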

4. Storage: Persistent Data in a Dynamic World

Kubernetes provides flexible storage options to persist data even if a pod is deleted.

a) Persistent Volumes (PVs)

These are cluster storage resources, provisioned manually by an admin or dynamically through a Storage Class.

b) Persistent Volume Claims (PVCs)

Pods use PVCs to request storage from PVs.

c) Storage Classes

Define classes of storage (e.g., SSD-backed vs. HDD-backed) and enable dynamic provisioning of Persistent Volumes.

  • Real-life DevOps scenario: A database application can use PVs and PVCs to ensure data is not lost during updates or scaling.
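A minimal PVC sketch for that scenario (the fast-ssd Storage Class is assumed to exist in your cluster; names and sizes are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]   # mountable read-write by a single node
  storageClassName: fast-ssd       # assumed Storage Class; triggers dynamic provisioning
  resources:
    requests:
      storage: 10Gi
```

A database pod then mounts this claim as a volume; the data outlives any individual pod.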

5. Deployment and Scaling: Always On, Always Available

Kubernetes makes it easy to deploy and scale applications.

a) Deployments

Define the desired state of your application.

b) ReplicaSets

Ensure the specified number of pod replicas are running.

c) Horizontal Pod Autoscaler

Automatically adjusts the number of pods based on resource usage.

  • Real-life DevOps scenario: Your e-commerce app can scale up during sales and scale down afterward.
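A hedged sketch of such an autoscaler (names and thresholds are illustrative; it requires the Metrics Server, covered below, to be installed):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-hpa
spec:
  scaleTargetRef:                  # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: shop                     # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```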

6. Optional Add-ons: Supercharging Kubernetes

a) CoreDNS

Provides DNS for service discovery within the cluster.

b) Metrics Server

Collects resource metrics (CPU, memory) for monitoring and scaling.

c) Helm

A package manager for Kubernetes.

d) Monitoring and Logging Stack

  • Prometheus: Metrics collection.

  • Grafana: Visual dashboards.

  • Fluentd: Log aggregation.

  • Real-life DevOps scenario: Use Grafana dashboards to monitor app performance and troubleshoot issues.


Visualizing Kubernetes

To help visualize the architecture, refer to the attached image. The control plane communicates with worker nodes through the API Server. Pods are managed by kubelet on each node, while kube-proxy handles networking. Services and Ingress route traffic to the right pods, and storage ensures data persistence. Add-ons like Helm and Prometheus enhance functionality.

Happy Learning!

Connect with me: https://www.linkedin.com/in/patilprathamesh6