What Is Kubernetes and How Does It Work?

Kubernetes is an open-source container orchestration platform that automates the deployment, management, and scaling of applications. Learn how Kubernetes enables cost-effective cloud-native development.

What is Kubernetes?

Kubernetes — also known as “k8s” or “Kube” — is a container orchestration platform for scheduling and automating the deployment, management, and scaling of containerized applications.

Kubernetes was first developed by engineers at Google before being open-sourced in 2014. It is a descendant of Borg, a container orchestration platform used internally at Google. Kubernetes is Greek for helmsman or pilot, hence the helm in the Kubernetes logo.

Today, Kubernetes and the broader container ecosystem are maturing into a general-purpose computing platform that rivals — if not surpasses — virtual machines (VMs) as the basic building blocks of modern cloud infrastructure and applications. This ecosystem enables organizations to deliver a high-productivity Platform-as-a-Service (PaaS) that handles many of the infrastructure-related and operations-related tasks of cloud-native development, so that development teams can focus solely on coding and innovation.

What are containers?

Containers are lightweight, executable application components that combine application source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.

Containers take advantage of a form of OS virtualization that lets multiple applications share a single instance of an OS by isolating processes and controlling the amount of CPU, memory, and disk those processes can access. Because they are smaller, more resource-efficient, and more portable than virtual machines (VMs), containers have become the de facto compute units of modern cloud-native applications.

In a recent IBM study, users reported several specific technical and business benefits resulting from their adoption of containers and related technologies.

Containers vs. virtual machines vs. traditional infrastructure

It may be easiest to understand containers as the latest point on the continuum of IT infrastructure automation and abstraction.

In traditional infrastructure, applications run on a physical server and grab all the resources they can get. This leaves you the choice of running multiple applications on a single server and hoping one doesn’t hog resources at the expense of the others, or dedicating one server per application, which wastes resources and doesn’t scale.

Virtual machines (VMs) are servers abstracted from the actual computer hardware, enabling you to run multiple VMs on one physical server or a single VM that spans more than one physical server. Each VM runs its own OS instance, and you can isolate each application in its own VM, reducing the chance that applications running on the same underlying physical hardware will impact each other. VMs make better use of resources and are much easier and more cost-effective to scale than traditional infrastructure. And, they’re disposable — when you no longer need to run the application, you take down the VM.

For more information on VMs, see "Virtual Machines: An Essential Guide."

Containers take this abstraction to a higher level — specifically, in addition to sharing the underlying virtualized hardware, they share an underlying, virtualized OS kernel as well. Containers offer the same isolation, scalability, and disposability as VMs, but because they don’t carry the payload of their own OS instance, they’re lighter weight (that is, they take up less space) than VMs. They’re more resource-efficient — they let you run more applications on fewer machines (virtual and physical), with fewer OS instances. Containers are more easily portable across desktop, data center, and cloud environments. And they’re an excellent fit for Agile and DevOps development practices.

"Containers: An Essential Guide" gives a complete explanation of containers and containerization. And the blog post "Containers vs. VMs: What's the difference?" gives a full rundown of the differences.

What is Docker?

Docker is the most popular tool for creating and running Linux® containers. While early forms of containers were introduced decades ago (with technologies such as FreeBSD Jails and AIX Workload Partitions), containers were democratized in 2013 when Docker brought them to the masses with a new developer-friendly and cloud-friendly implementation.

Docker began as an open-source project, but today it also refers to Docker Inc., the company that produces Docker — a commercial container toolkit that builds on the open-source project (and contributes those improvements back to the open-source community).

Docker was built on traditional Linux container (LXC) technology, but enables more granular virtualization of Linux kernel processes and adds features to make containers easier for developers to build, deploy, manage, and secure.
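To give a flavor of that developer-friendly approach, here is a minimal, hypothetical docker-compose.yml sketch (Compose is Docker’s YAML format for declaring containers; the service name, image, and ports below are placeholders, not from the original article):

```yaml
# Hypothetical docker-compose.yml: declares a container so that
# "docker compose up" can build, start, and manage it with one command.
services:
  web:
    image: nginx:1.25        # placeholder image from a public registry
    ports:
      - "8080:80"            # map host port 8080 to container port 80
    restart: unless-stopped  # have Docker restart the container if it exits
```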

While alternative container platforms exist today (such as runtimes based on the Open Container Initiative (OCI) specifications, CoreOS rkt, and Canonical (Ubuntu) LXD), Docker is so widely preferred that it is virtually synonymous with containers and is sometimes mistaken as a competitor to complementary technologies such as Kubernetes (see “Kubernetes vs. Docker” below).

What does Kubernetes do?

Kubernetes schedules and automates container-related tasks throughout the application lifecycle, including:

  • Deployment: Deploy a specified number of containers to a specified host and keep them running in the desired state (see the manifest sketch after this list).
  • Rollouts: A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume, or roll back rollouts.
  • Service discovery: Kubernetes can automatically expose a container to the internet or to other containers using a DNS name or IP address.
  • Storage provisioning: Set Kubernetes to mount persistent local or cloud storage for your containers as needed.
  • Load balancing: Based on CPU utilization or custom metrics, Kubernetes load balancing can distribute the workload across the network to maintain performance and stability.
  • Autoscaling: When traffic spikes, Kubernetes autoscaling can spin up new pods (and, with a cluster autoscaler, new nodes) as needed to handle the additional workload.
  • Self-healing for high availability: When a container fails, Kubernetes can restart or replace it automatically to prevent downtime. It can also take down containers that don’t meet your health-check requirements.
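To make several of these tasks concrete, here is a minimal manifest sketch (all names, images, and values are hypothetical placeholders, not from the original article): a Deployment that keeps three replicas running in the desired state, a Service that provides DNS-based service discovery and load balancing for them, and a HorizontalPodAutoscaler that scales on CPU utilization.

```yaml
# Hypothetical Deployment: asks Kubernetes to keep 3 replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # desired state; failed pods are replaced automatically
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25    # placeholder image
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m        # baseline used by the autoscaler below
---
# Hypothetical Service: a stable DNS name ("web-app") that load-balances
# traffic across whichever pods currently match the selector.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
---
# Hypothetical autoscaler: grows the Deployment from 3 up to 10 replicas
# when average CPU utilization passes 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Applying a file like this with kubectl apply -f tells Kubernetes the desired state; the rollout, load-balancing, autoscaling, and self-healing behaviors described above then follow from that declaration.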

Kubernetes vs. Docker

If you’ve read this far, you already understand that while Kubernetes is an alternative to Docker Swarm, it is not (contrary to persistent popular misconception) an alternative or competitor to Docker itself.

In fact, if you’ve enthusiastically adopted Docker and are creating large-scale Docker-based container deployments, Kubernetes orchestration is a logical next step for managing these workloads.

Kubernetes architecture

The chief components of Kubernetes architecture include the following:

Clusters and nodes (compute)

Clusters are the building blocks of Kubernetes architecture. The clusters are made up of nodes, each of which represents a single compute host (virtual or physical machine).

Each cluster consists of a master node that serves as the control plane for the cluster, and multiple worker nodes that deploy, run, and manage containerized applications. The master node runs a scheduler service that automates when and where containers are deployed, based on developer-set deployment requirements and available computing capacity. Each worker node includes the tool that is being used to manage the containers (such as Docker) and a software agent called a kubelet that receives and executes orders from the master node.
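As a sketch of the inputs the scheduler works from (hypothetical names and values, not from the original article), a pod can declare resource requests, and the scheduler will only place it on a node with that much unreserved CPU and memory:

```yaml
# Hypothetical pod spec: the "requests" below are what the control plane's
# scheduler compares against each worker node's available capacity.
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  containers:
    - name: worker
      image: busybox:1.36       # placeholder image
      command: ["sh", "-c", "echo working; sleep 3600"]
      resources:
        requests:
          cpu: 500m             # half a CPU core reserved at scheduling time
          memory: 256Mi
        limits:
          cpu: "1"              # hard ceilings enforced on the node
          memory: 512Mi
```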

Developers manage cluster operations using kubectl, a command-line interface (CLI) that communicates directly with the Kubernetes API.

For a deeper dive into Kubernetes clusters, check out this blog post: “Kubernetes Clusters: Architecture for Rapid, Controlled Cloud App Delivery.”

Pods and deployments (software)

Pods are groups of containers that share the same compute resources and the same network. They are also the unit of scalability in Kubernetes: if a container in a pod is getting more traffic than it can handle, Kubernetes will replicate the pod to other nodes in the cluster. For this reason, it’s a good practice to keep pods compact so that they contain only containers that must share resources.
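As a sketch of that idea (hypothetical names and images, not from the original article), here is a pod whose two containers share a common volume and the pod’s network namespace, exactly the kind of tight coupling that justifies putting them in one pod:

```yaml
# Hypothetical two-container pod: the containers share an "emptyDir" volume
# and localhost networking, so they must be scheduled and scaled together.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-shipper
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}            # scratch volume that lives as long as the pod
  containers:
    - name: web
      image: nginx:1.25       # placeholder; writes logs into the shared volume
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox:1.36     # placeholder sidecar; reads the same logs
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```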

The deployment controls the creation and state of the containerized application and keeps it running. It specifies how many replicas of a pod should run on the cluster. If a pod fails, the deployment will create a new one.

