Why Kubernetes Emerged Despite Docker's Presence (Part-1)
While Docker streamlined the journey of containers, a closer look reveals certain limitations that prompted the need for Kubernetes to step in and address critical challenges.
First of all, let's understand containers:
Think of a container as a self-contained unit, akin to a shipping container in the world of logistics. Just as a shipping container holds various goods, a software container encapsulates everything an application requires to run smoothly. Containers are encapsulated, lightweight environments that package applications and their dependencies.
Let me give a quick example-
Web Application Container:
Consider a web application container. Inside this container, you'll find the application's code, like HTML, CSS, and JavaScript files. Additionally, it houses the necessary runtime environment, such as libraries, frameworks, and specific configurations tailored to the application's needs.
Now let's see what the actual directory looks like:
my-node-app/
|- Dockerfile
|- index.html
|- styles.css
|- script.js
|- css/
| |- (other CSS files or directories)
|- js/
| |- (other JavaScript files or directories)
|- package.json
|- app.js
|- ...
Docker revolutionized this concept, simplifying the process of creating, deploying, and running these containers.
Within this directory sits the Dockerfile, which tells Docker how to copy the required files into the image:
# Use a lightweight web server as the base image
FROM nginx:alpine
# Set the working directory inside the container
WORKDIR /usr/share/nginx/html
# Copy the web application files into the image, mirroring the source layout
COPY index.html .
COPY styles.css .
COPY script.js .
COPY css/ css/
COPY js/ js/
# Expose port 80 to allow access to the web server
EXPOSE 80
# Run nginx in the foreground so the container stays alive
CMD ["nginx", "-g", "daemon off;"]
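With this Dockerfile in place, the image can be built and run locally. A quick sketch (the image name my-node-app and host port 8080 are arbitrary choices, not from the original):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-node-app .

# Run a container, mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 --name my-node-app my-node-app

# The application should now be reachable at http://localhost:8080
```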
Docker: Docker is a platform and a set of tools used to create, manage, and run containers based on Docker images. The docker command-line interface is used to interact with Docker, allowing users to build images from Dockerfiles (docker build), run containers from images (docker run), manage images (docker images), and more.
Docker's Handling of Containers:
Docker facilitated container management, allowing them to start, stop, and move across different environments effortlessly. However, it introduced certain challenges:
Containers are transient, with short lifespans: they can die and be restarted unpredictably. On a single host, the kernel may kill containers (for example, under memory pressure), which can disrupt other containers running on the same machine.
4 Major Problems:
Single-Host Issues:
On a single host, all containers share one kernel and compete for its resources; one misbehaving container can degrade the others because isolation is limited. This single-host model makes auto-healing, auto-scaling, and enterprise-grade support difficult to achieve.
Auto-healing:
Docker focuses on running containers but has no native mechanism to monitor application health and replace failed containers automatically. Recovering from application failures inside containers therefore requires custom tooling.
Auto-scaling:
Docker can scale containers by running more of them but has no native cluster management features. Additional tools are required to automatically scale clusters at the infrastructure level.
Enterprise support:
The open-source Docker Engine on its own does not come with enterprise support services; guaranteed SLAs and timely issue resolution require commercial offerings.
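To illustrate the scaling limitation above: Docker Compose can run multiple replicas of a service, but only manually and only on one host, with no automatic trigger. A sketch, assuming a compose file that defines a service named web:

```shell
# Manually start three replicas of the "web" service on this single host.
# Nothing scales this number up or down automatically.
docker compose up -d --scale web=3
```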
Kubernetes Addressing Docker Challenges:
Kubernetes stands as a powerful solution to challenges encountered in Docker's single-host environment. It introduces a cluster-based architecture, comprising a master node orchestrating multiple worker nodes. This structure ensures fault tolerance, scalability, and efficient container management.
Handling Faulty Containers:
In Kubernetes, the master node monitors and manages containers across the cluster. It mitigates issues where a faulty container could affect others by utilizing its self-healing capabilities. Through ReplicaSets and Health Checks, Kubernetes ensures that unhealthy containers are replaced with healthy ones automatically.
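A minimal sketch of such a health check: a pod with a liveness probe, assuming a hypothetical image my-node-app that serves HTTP on port 80. If the probe fails repeatedly, the kubelet restarts the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: web
      image: my-node-app:latest   # hypothetical image name
      ports:
        - containerPort: 80
      livenessProbe:              # kubelet restarts the container if this check fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```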
Scaling and Auto-Healing:
A ReplicaSet, created and managed by a Kubernetes Deployment, maintains a specified number of identical pods, replacing any that fail. The Horizontal Pod Autoscaler (HPA) then adjusts that number in response to CPU or memory usage, optimizing resource utilization.
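As a sketch, an HPA targeting a hypothetical Deployment named web-app, keeping between 2 and 10 pods and aiming for 50% average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```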
YAML Configuration for Handling:
In Kubernetes, YAML configuration files define desired states for applications. These files specify the number of replicas, resource constraints, scaling policies, and health checks. By updating these YAML files, administrators can adapt the application's behaviour and handling mechanisms.
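For example, a Deployment manifest can declare the replica count, resource constraints, and a health check in one desired state (the image name my-node-app is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                       # desired number of pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: my-node-app:latest   # hypothetical image
          resources:
            requests:                 # resources guaranteed to the container
              cpu: 100m
              memory: 128Mi
            limits:                   # hard ceiling enforced by the kubelet
              cpu: 250m
              memory: 256Mi
          readinessProbe:             # pod receives traffic only when this passes
            httpGet:
              path: /
              port: 80
```

Updating this file and reapplying it is how administrators adapt the application's behaviour.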
KUBERNETES ARCHITECTURE:
We will discuss the architecture and its components in detail in the next article (Part-2).


