Kubernetes

by Doromi 2024. 4. 1.

Kubernetes is an open-source container orchestration platform.

It automates the deployment, scaling, and management of containerized applications.

 

A Kubernetes cluster is a set of machines, called nodes, that are used to run containerized applications.

There are two core pieces in a Kubernetes cluster.

The first is the control plane.

It is responsible for managing the state of the cluster.

In production environments, the control plane usually runs on multiple nodes spread across several data center zones.

It consists of a number of core components.

They are the API server, etcd, scheduler, and the controller manager.

The API server is the primary interface between the control plane and the rest of the cluster.

It exposes a RESTful API that allows clients to interact with the control plane and submit requests to manage the cluster.
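
For a rough sense of what such a request looks like, the manifest below is a minimal sketch of a Pod definition (the name and image are only examples, not from the original post); kubectl turns it into a REST call against the API server.

# Submitted with: kubectl apply -f nginx-pod.yaml
# which roughly corresponds to POST /api/v1/namespaces/default/pods
apiVersion: v1
kind: Pod
metadata:
  name: nginx              # illustrative name
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx:1.25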

etcd is a distributed key-value store.

It stores the cluster's persistent state.

It is used by the API server and other components of the control plane to store and retrieve information about the cluster.

The scheduler is responsible for scheduling pods onto the worker nodes in the cluster.

It uses information about the resources required by the pods and the available resources on the worker nodes to make placement decisions.
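
Those resource requirements are declared in the pod spec itself. A minimal sketch (the values are illustrative): the scheduler places this pod only on a node with enough unreserved CPU and memory to cover its requests.

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:          # what the scheduler uses for placement decisions
          cpu: "250m"
          memory: "128Mi"
        limits:            # enforced on the node at runtime, not used for placement
          cpu: "500m"
          memory: "256Mi"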

The controller manager is responsible for running controllers that manage the state of the cluster.
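
A Deployment is an easy way to see this reconciliation in action: its controllers keep the observed number of pods equal to the declared replica count, recreating pods that disappear. A minimal sketch, with illustrative names and counts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 3              # desired state; controllers restore this count if pods die
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25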

 

The second is a set of worker nodes.

These nodes run the containerized application workloads.

The containerized applications run in Pods.

Pods are the smallest deployable units in Kubernetes.

A pod hosts one or more containers and provides shared storage and networking for those containers.

Pods are created and managed by the Kubernetes control plane.

They are the basic building blocks of Kubernetes applications.
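
To make "shared storage and networking" concrete, here is a hedged sketch of a two-container pod sharing an emptyDir volume; the containers also share the pod's network namespace, so they can reach each other over localhost. Names and commands are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: shared-demo        # illustrative name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}         # pod-scoped scratch volume shared by both containers
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "touch /data/out.log && tail -f /data/out.log"]
      volumeMounts:
        - name: shared-data
          mountPath: /data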

 

The core components of Kubernetes that run on the worker nodes are the kubelet, the container runtime, and kube-proxy.

 

The kubelet is a daemon that runs on each worker node.

It is responsible for communicating with the control plane.

It receives instructions from the control plane about which pods to run on the node, and ensures that the desired state of the pods is maintained.
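
Besides pods assigned through the API server, the kubelet can also run static pods defined directly in manifest files on the node (commonly under /etc/kubernetes/manifests on kubeadm-based clusters, though the path can differ). A minimal sketch with an illustrative name:

# Saved on the node, e.g. /etc/kubernetes/manifests/static-web.yaml;
# the kubelet watches this directory and runs the pod it describes.
apiVersion: v1
kind: Pod
metadata:
  name: static-web         # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80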

 

The container runtime runs the containers on the worker nodes.

It is responsible for pulling the container images from a registry, starting and stopping the containers, and managing the containers' resources.

 

The kube-proxy is a network proxy that runs on each worker node.

It is responsible for routing traffic to the correct pods.

It also provides load balancing for the pods and ensures that traffic is distributed evenly across the pods.
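
The routing rules kube-proxy programs come from Service objects. A hedged sketch of a Service that spreads traffic across the pods selected by its label selector (the names match the Deployment sketch above and are only examples):

apiVersion: v1
kind: Service
metadata:
  name: web                # illustrative name
spec:
  selector:
    app: web               # traffic is routed to pods carrying this label
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 80       # container port the traffic is forwarded to
  type: ClusterIP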

 

So when should we use Kubernetes?

Kubernetes is scalable and highly available.

It provides features like self-healing, automatic rollbacks, and horizontal scaling.

It makes it easy to scale our applications up and down as needed, allowing us to respond to changes in demand quickly.
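
For example, a HorizontalPodAutoscaler can grow and shrink a Deployment automatically based on observed CPU usage. A minimal sketch, assuming a metrics server is installed and using illustrative names and thresholds:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU utilization exceeds 70%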

It helps us deploy and manage applications in a consistent and reliable way, regardless of the underlying infrastructure.

It runs on-premises, in a public cloud, or in a hybrid environment.

 

Reference: ByteByteGo
