
Basic Concepts of Kubernetes

Handling a large piece of software that consists of multiple services is a tedious, time-consuming task for a DevOps engineer. Microservices come to the rescue by simplifying these complicated deployment processes. Simply put, each microservice in the system has its own responsibility to handle one specific task. A container can be used to deploy each of these micro-tasks as a unit of service. If you are not familiar with containers, read this article to get to know Docker, which is the most popular and widely used container technology for deploying microservices.

As I described earlier, we can use a single container to deploy a single service, and the container holds all the required configurations and dependencies. A single service always faces the common problem of being a single point of failure. To avoid this, we need to set up another instance of the service so that if one instance goes down, the next available instance takes over the load and continues to provide the service. Another reason to have multiple containers for the same service is to distribute the load between them. This can be achieved by connecting multiple instances through a load balancer. Maintaining multiple containers that run multiple services with replication is not an easy task to handle manually. Kubernetes handles all of these complexities for you. Kubernetes provides multiple features so that you can easily maintain multiple containers, which is known as container orchestration.

What does Kubernetes do?

Imagine you have an application that has multiple services, each configured inside a container. Let's assume these two services are Service A and Service B, and that Service A uses Service B to get some work done. We can represent the service dependency as follows.


When high availability is required, we need to scale the system so that each service has a copy of its own running on another node (a separate physical or virtual machine).


Here, a load balancer is used to distribute the load between servers. In this setup, a single point of failure is avoided by routing traffic to another node if one node goes down.
In a larger system with many nodes and services, hardware utilization may not be efficient, since each service has different hardware requirements. Therefore, hardwiring services to a specific node is not efficient. Kubernetes provides an elegant way to solve this resource utilization issue by orchestrating container services across multiple nodes.
A Kubernetes cluster is maintained by the master, whose responsibilities include scheduling applications, maintaining applications’ desired state, scaling applications, and rolling out new updates. A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Nodes and the master communicate with each other through the Kubernetes API. A Kubernetes pod is a group of containers that are deployed together on the same host.


Pod

A pod is a collection of containers and the unit of deployment in a Kubernetes cluster. Each pod has its own IP address, which means that all containers in the same pod share that IP address and can find each other via localhost.
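As a minimal sketch (the names and images below are purely illustrative), a pod that runs two containers sharing the same network namespace could be declared like this:

```yaml
# pod.yaml -- a hypothetical two-container pod (names/images are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: service-a-pod
  labels:
    app: service-a
spec:
  containers:
    - name: app              # main application container
      image: example/service-a:1.0
      ports:
        - containerPort: 8080
    - name: sidecar          # helper container; can reach the app at localhost:8080
      image: example/log-agent:1.0
```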

Services

Since pods change dynamically, it is hard to reference an individual pod. A Service provides an abstraction over pods and gives you a stable, addressable way of communicating with them.
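For example, a Service that selects the pods labeled app: service-a (continuing the hypothetical labels from the pod sketch above) and exposes them on a stable cluster-internal address might look like this:

```yaml
# service.yaml -- a hypothetical Service fronting the service-a pods
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: service-a      # matches pods carrying this label
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # port the containers listen on
```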

Ingress

Most of the time, pods and services are encapsulated inside the Kubernetes cluster so that external clients cannot call them directly. An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
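As a rough sketch (the hostname and service name are made up, and the API version shown is the current networking.k8s.io/v1 form), an Ingress rule that routes external HTTP traffic for a given host to the Service defined above could be written as:

```yaml
# ingress.yaml -- hypothetical rule exposing service-a to external clients
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-a-ingress
spec:
  rules:
    - host: app.example.com        # external hostname (illustrative)
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-a    # the Service sketched earlier
                port:
                  number: 80
```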

Docker

A Docker daemon runs on each node to pull images from the Docker registry and run them.

Kubelet

The kubelet is the node agent that periodically checks the health of the containers in the node's pods. The API server sends the instructions required to run containers, and the kubelet makes sure the containers are in the desired state.
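One way the kubelet checks container health is through probes declared in the pod spec. A minimal sketch (the health path and port are assumptions) of a liveness probe:

```yaml
# Illustrative fragment of a pod spec: the kubelet calls this endpoint
# periodically and restarts the container if the check keeps failing.
spec:
  containers:
    - name: app
      image: example/service-a:1.0
      livenessProbe:
        httpGet:
          path: /healthz     # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
```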

Kube-proxy

Kube-proxy distributes the load to the pods. Load distribution is based on either iptables rules or a round-robin method.

Deployment

A Deployment is what you use to describe your desired state to Kubernetes.
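For instance, a Deployment that asks Kubernetes to keep three replicas of the hypothetical service-a pod running (and to roll out new image versions gradually) might look like this:

```yaml
# deployment.yaml -- hypothetical Deployment keeping 3 replicas of service-a
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  replicas: 3                  # desired number of pod copies
  selector:
    matchLabels:
      app: service-a
  template:                    # pod template used for every replica
    metadata:
      labels:
        app: service-a
    spec:
      containers:
        - name: app
          image: example/service-a:1.0
          ports:
            - containerPort: 8080
```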

Features of Kubernetes

Kubernetes provides multiple features so that an application deployer can easily deploy and maintain the whole system.
  • Replication control
This allows you to maintain the desired number of replicated pods in the Kubernetes cluster.
  • Resource monitoring
The health and performance of the cluster can be measured using add-ons such as Heapster, which collects metrics from the cluster and saves the stats in InfluxDB. The data can be visualized using Grafana, which is an ideal UI for analyzing it.
  • Horizontal auto-scaling
Heapster data is also useful for scaling the system when a high load comes in. The number of pods can be increased or decreased according to the load on the system (see the sketch after this list).
  • Collecting logs
Collecting logs is important for checking the status of the containers. Fluentd is used along with Elasticsearch and Kibana to read the logs from the containers.
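As a minimal sketch of horizontal auto-scaling (the target Deployment name and the thresholds below are assumptions for illustration), a HorizontalPodAutoscaler that grows or shrinks the hypothetical service-a Deployment based on CPU usage could be declared like this:

```yaml
# hpa.yaml -- hypothetical autoscaler for the service-a Deployment
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: service-a-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-a                  # the Deployment sketched earlier
  minReplicas: 2                     # never run fewer than 2 pods
  maxReplicas: 10                    # never run more than 10 pods
  targetCPUUtilizationPercentage: 70 # add pods when average CPU exceeds 70%
```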

See you in the next article to get hands-on experience with Kubernetes. Cheers :).
