OpenShift – Architecture


The OpenShift architecture is a layered system in which each layer is tightly bound to the others through Kubernetes and Docker. It is designed to support and manage Docker containers, which are hosted on top of these layers and orchestrated by Kubernetes. Unlike the earlier OpenShift V2, the new OpenShift V3 supports containerized infrastructure. In this model, Docker helps create lightweight Linux-based containers, and Kubernetes handles orchestrating and managing those containers across multiple hosts.


Components of OpenShift Architecture

A key part of the OpenShift architecture is the containerized infrastructure managed by Kubernetes, which is responsible for deploying and managing that infrastructure. A Kubernetes cluster can have more than one master and multiple nodes, which ensures there is no single point of failure in the setup.


Kubernetes Master Machine Components

Etcd − It stores configuration information that can be used by each node in the cluster. It is a highly available, distributed key-value store that can be spread across multiple nodes. Because it may contain sensitive information, it should be accessible only to the Kubernetes API server.

API Server − The Kubernetes API server exposes the Kubernetes API and provides all operations on the cluster through it. Because the API server implements a standard interface, different tools and libraries can readily communicate with it. A kubeconfig file, used together with client tools such as kubectl or oc, holds the details needed to communicate with the server.
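For illustration, a kubeconfig is simply a YAML file describing how to reach and authenticate to the API server. The sketch below is a minimal, hypothetical example; the server URL, certificate paths, and user name are placeholders, not values from any real cluster.

# Minimal kubeconfig sketch; every value below is a placeholder.
apiVersion: v1
kind: Config
clusters:
- name: openshift-cluster
  cluster:
    server: https://master.example.com:8443      # hypothetical API server endpoint
    certificate-authority: /etc/origin/master/ca.crt
contexts:
- name: default/openshift-cluster/admin
  context:
    cluster: openshift-cluster
    user: admin
    namespace: default
current-context: default/openshift-cluster/admin
users:
- name: admin
  user:
    client-certificate: /etc/origin/master/admin.crt
    client-key: /etc/origin/master/admin.key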

Controller Manager − This component is responsible for most of the controllers that regulate the state of the cluster and perform routine tasks. It can be considered a daemon that runs in a non-terminating loop, collecting information and sending it to the API server. It watches the shared state of the cluster and then makes changes to bring the current state toward the desired state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. The controller manager runs these different kinds of controllers to handle nodes, endpoints, and so on.
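As a concrete illustration of this desired-state reconciliation, a replication controller can be declared in a manifest like the hypothetical sketch below; the controller manager then keeps the number of running pods equal to the declared replicas. The names and image are examples only.

# Hypothetical ReplicationController: the cluster keeps the number of
# running pod replicas equal to the "replicas" value declared here.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc                # example name
spec:
  replicas: 3                 # desired state; controllers reconcile toward it
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx          # example image
        ports:
        - containerPort: 80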

Scheduler − This is a key component of the Kubernetes master. It is the service in the master responsible for distributing the workload: it tracks resource utilization on the cluster nodes and places each workload on a node that has the resources available to accept it. In other words, the scheduler is the mechanism responsible for allocating pods to available nodes.
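The hypothetical pod specification below shows the kind of information the scheduler works with when placing a pod: resource requests are counted against node capacity, and the nodeSelector restricts placement to nodes carrying a matching label. All names, labels, and values are examples, not part of the original tutorial.

# Illustrative pod spec: the scheduler places this pod only on a node
# with the matching label and enough unreserved CPU and memory.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-app
spec:
  nodeSelector:
    disktype: ssd             # example node label constraint
  containers:
  - name: app
    image: nginx              # example image
    resources:
      requests:
        cpu: 250m             # counted against the node's capacity
        memory: 256Mi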

Kubernetes Node Components

Following are the key components of the Node server, which are necessary to communicate with the Kubernetes master.

Docker − The first requirement of each node is Docker, which helps run the encapsulated application containers in a relatively isolated but lightweight operating environment.

Kubelet Service − This is a small service on each node that is responsible for relaying information to and from the control plane. It interacts with the etcd store (through the API server) to read configuration details and write values. It communicates with the master components to receive commands and work, and the kubelet process then assumes responsibility for maintaining the state of the workloads on its node. It also manages network rules, port forwarding, and so on.
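One concrete place where the kubelet maintains the state of work is container health probing: the kubelet on the hosting node runs the probe declared in the pod spec and restarts the container if it fails. The following sketch is illustrative, with an example image and example timings.

# Illustrative pod spec: the kubelet on this pod's node runs the
# liveness probe and restarts the container when the probe fails.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: nginx              # example image
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10 # wait before the first check
      periodSeconds: 15       # probe interval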

Kubernetes Proxy Service − This is a proxy service that runs on each node and helps make services available to external hosts. It forwards requests to the correct containers and is capable of performing primitive load balancing. It ensures that the networking environment is predictable and accessible while remaining isolated. Node-level tasks such as managing pods, volumes, secrets, the creation of new containers, and health checks are handled by the kubelet described above rather than by the proxy.
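As a hedged example of how services are exposed, the hypothetical NodePort service below asks the proxy on every node to forward traffic arriving on the node port to one of the pods matching the selector; the port numbers and labels are illustrative only.

# Hypothetical Service: kube-proxy on every node programs the rules
# that forward traffic on the node port to a matching pod.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web                  # pods backing the service
  ports:
  - port: 80                  # cluster-internal service port
    targetPort: 80            # container port
    nodePort: 30080           # example port opened on each node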

Integrated OpenShift Container Registry

The OpenShift container registry is Red Hat's built-in storage facility for Docker images. The latest integrated version of OpenShift includes a user interface for viewing images in OpenShift's internal storage. These registries hold images with specified tags, which are later used to build containers from them.
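In OpenShift, images in the integrated registry are typically tracked through image streams. The sketch below is a hypothetical example; the apiVersion for image streams varies between OpenShift releases, and the project, stream name, and upstream image are assumptions made for illustration.

# Hypothetical ImageStream: the integrated registry stores the pushed
# image layers, and the stream tracks them under named tags.
apiVersion: image.openshift.io/v1   # may differ on older OpenShift releases
kind: ImageStream
metadata:
  name: myapp
  namespace: myproject              # example project
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: docker.io/library/nginx:latest   # example upstream image to track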

Frequently Used Terms

Image − Kubernetes (Docker) images are the key building blocks of a containerized infrastructure. At present, Kubernetes only supports Docker images. Each container in a pod has its Docker image running inside it. When configuring a pod, the image property in the configuration file uses the same syntax as the Docker command.
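A minimal sketch of that image property: its value uses the same registry/repository:tag syntax that the docker pull command accepts. The pod and image names here are examples.

# Minimal pod sketch: the "image" field uses docker pull syntax.
apiVersion: v1
kind: Pod
metadata:
  name: image-demo
spec:
  containers:
  - name: web
    image: nginx:1.14         # equivalent to: docker pull nginx:1.14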

Project − A project can be thought of as the renamed version of the domain that existed in the earlier OpenShift V2.

Container − Containers are the units created after an image is deployed on a Kubernetes cluster node.

Node − A node is a working machine in the Kubernetes cluster, also known as a minion of the master. Nodes are working units that can be physical machines, VMs, or cloud instances.

Pod − A pod is a collection of containers and their storage running on a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it, for example keeping a database container and a web server container in the same pod.
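That database-plus-web-server example can be sketched as the hypothetical pod below, in which two containers run side by side, sharing the pod's network namespace and an ephemeral volume; all names, images, and the password value are placeholders.

# Illustrative two-container pod: a web server and a database in one pod.
apiVersion: v1
kind: Pod
metadata:
  name: web-and-db
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                       # ephemeral storage shared by the pod
  containers:
  - name: web-server
    image: nginx                       # example web server image
    ports:
    - containerPort: 80
  - name: database
    image: mysql:5.7                   # example database image
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: example-password          # placeholder; use a Secret in practice
    volumeMounts:
    - name: shared-data
      mountPath: /var/lib/data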
