Kubernetes
Kubernetes (commonly stylized as K8s) is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project. Kubernetes works with container runtimes such as containerd and CRI-O. Originally, it interfaced exclusively with the Docker runtime through a "Dockershim"; however, between November 2020 and April 2022 Kubernetes deprecated the shim in favor of interfacing directly with the container runtime through containerd, or replacing Docker with a runtime that is compliant with the Container Runtime Interface (CRI). With the release of v1.24 in May 2022, the Dockershim was removed entirely.


History

Kubernetes (κυβερνήτης, Greek for "helmsman", "pilot", or "governor", and the etymological root of cybernetics) was announced by Google in mid-2014. The project was created by Joe Beda, Brendan Burns, and Craig McLuckie, who were soon joined by other Google engineers, including Brian Grant and Tim Hockin. The design and development of Kubernetes was influenced by Google's Borg cluster manager. Many of its top contributors had previously worked on Borg; they codenamed Kubernetes "Project 7" after the ''Star Trek'' ex-Borg character Seven of Nine and gave its logo a seven-spoked wheel. Unlike Borg, which was written in C++, Kubernetes source code is in the Go language.

Kubernetes 1.0 was released on July 21, 2015. Google worked with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF) and offered Kubernetes as a seed technology. In February 2016, the Helm package manager for Kubernetes was released. Google was already offering managed Kubernetes services, and Red Hat had supported Kubernetes as part of OpenShift since the inception of the Kubernetes project in 2014. In 2017, the principal competitors rallied around Kubernetes and announced adding native support for it:
* in August, VMware (proponent of Pivotal Cloud Foundry)
* in September, Mesosphere, Inc. (proponent of Marathon and Mesos)
* in October, Docker, Inc. (proponent of Docker)
* later the same October, Microsoft Azure
* in November, AWS, which announced support via the Elastic Container Service for Kubernetes (EKS)

On March 6, 2018, the Kubernetes project reached ninth place in the list of GitHub projects by the number of commits, and second place in authors and issues, after the Linux kernel.

Until version 1.18, Kubernetes followed an N-2 support policy, meaning that the three most recent minor versions receive security updates and bug fixes. Starting with version 1.19, Kubernetes follows an N-3 support policy, extending this to the four most recent minor versions.


Concepts

Kubernetes defines a set of building blocks ("primitives") that collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, memory, or custom metrics. Kubernetes is loosely coupled and extensible to meet different workloads. The internal components, as well as extensions and containers that run on Kubernetes, rely on the Kubernetes API. The platform exerts its control over compute and storage resources by defining resources as objects, which can then be managed as such. Kubernetes follows the primary/replica architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.


Control plane

The Kubernetes master node handles the Kubernetes control plane of the cluster, managing its workload and directing communication across the system. The Kubernetes control plane consists of various components, each running as its own process, that can run either on a single master node or on multiple masters supporting high-availability clusters. The various components of the Kubernetes control plane are as follows:

* etcd is a persistent, lightweight, distributed, key-value data store developed by CoreOS. It reliably stores the configuration data of the cluster, representing the overall state of the cluster at any given point of time. etcd favors consistency over availability in the event of a network partition (see CAP theorem). The consistency is crucial for correctly scheduling and operating services.

* The API server serves the Kubernetes API using JSON over HTTP, providing both the internal and external interface to Kubernetes. The API server processes and validates REST requests and updates the state of the API objects in etcd, thereby allowing clients to configure workloads and containers across worker nodes. The API server uses etcd's watch API to monitor the cluster, roll out critical configuration changes, or restore any divergences of the state of the cluster back to what the deployer declared. As an example, the deployer may specify that three instances of a particular "pod" (see below) need to be running. etcd stores this fact. If the Deployment Controller finds that only two instances are running (conflicting with the etcd declaration), it schedules the creation of an additional instance of that pod.

* The scheduler is the extensible component that selects the node on which an unscheduled pod (the basic entity managed by the scheduler) runs, based on resource availability. The scheduler tracks resource use on each node to ensure that workload is not scheduled in excess of available resources. For this purpose, the scheduler must know the resource requirements, resource availability, and other user-provided constraints or policy directives such as quality of service, affinity vs. anti-affinity requirements, and data locality. The scheduler's role is to match resource "supply" to workload "demand"; a sketch of such pod-level constraints appears after this list.

* A controller is a reconciliation loop that drives the actual cluster state toward the desired state, communicating with the API server to create, update, and delete the resources it manages (e.g., pods or service endpoints). One kind of controller is a Replication Controller, which handles replication and scaling by running a specified number of copies of a pod across the cluster. It also handles creating replacement pods if the underlying node fails. Other controllers that are part of the core Kubernetes system include a DaemonSet Controller for running exactly one pod on every machine (or some subset of machines), and a Job Controller for running pods that run to completion, e.g. as part of a batch job. Label selectors that are part of the controller's definition specify the set of pods that a controller manages.

* The controller manager is a process that manages a set of core Kubernetes controllers.
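For illustration, the kind of scheduling input described above can be expressed in a pod specification. The following is a minimal sketch, not an authoritative example; the pod name, image, and zone value are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: scheduling-example            # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25                 # illustrative image
    resources:
      requests:
        cpu: "250m"                   # resource "demand" the scheduler matches against node capacity
        memory: "128Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["zone-a"]        # illustrative data-locality constraint

The scheduler will only place this pod on a node that has at least the requested CPU and memory available and carries the required zone label.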


Nodes

A node, also known as a worker or a minion, is a machine where containers (workloads) are deployed. Every node in the cluster must run a container runtime such as containerd, as well as the components mentioned below, for communication with the primary and for the network configuration of these containers.

* Kubelet is responsible for the running state of each node, ensuring that all containers on the node are healthy. It takes care of starting, stopping, and maintaining application containers organized into pods as directed by the control plane. Kubelet monitors the state of a pod, and if it is not in the desired state, the pod is re-deployed to the same node. Node status is relayed every few seconds via heartbeat messages to the primary. Once the primary detects a node failure, the Replication Controller observes this state change and launches pods on other healthy nodes.

* Kube-proxy is an implementation of a network proxy and a load balancer, and it supports the service abstraction along with other networking operations. It is responsible for routing traffic to the appropriate container based on the IP and port number of the incoming request.

* A container resides inside a pod. The container is the lowest level of a micro-service, which holds the running application, libraries, and their dependencies. Containers can be exposed to the world through an external IP address. Kubernetes has supported Docker containers since its first version. In July 2016 the rkt container engine was added.


Namespaces

Kubernetes provides a partitioning of the resources it manages into non-overlapping sets called namespaces. They are intended for use in environments with many users spread across multiple teams or projects, or even for separating environments such as development, test, and production.
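A namespace is itself an API object. A minimal, illustrative manifest (the name is a placeholder) looks as follows; other objects are placed into it via their metadata.namespace field:

apiVersion: v1
kind: Namespace
metadata:
  name: development        # illustrative namespace name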


Pods

The basic scheduling unit in Kubernetes is a pod, which consists of one or more containers that are guaranteed to be co-located on the same node. Each pod in Kubernetes is assigned a unique IP address within the cluster, allowing applications to use ports without the risk of conflict. Within the pod, all containers can reference each other. A pod can define a volume, such as a local disk directory or a network disk, and expose it to the containers in the pod. Such volumes are also the basis for the Kubernetes features of ConfigMaps (to provide access to configuration through the file system visible to the container) and Secrets (to provide access to credentials needed to access remote resources securely, by providing those credentials on the file system visible only to authorized containers). Pods can be managed manually through the Kubernetes API, or their management can be delegated to a controller.
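As a sketch, a pod with two co-located containers can be declared as follows; the pod name, images, and command are illustrative placeholders, and both containers share the pod's IP address:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: app
    image: nginx:1.25                 # illustrative main container
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.36               # illustrative sidecar sharing the pod's network namespace
    command: ["sh", "-c", "sleep 3600"]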


DaemonSets

Normally, the Kubernetes Scheduler decides where to run pods. For some use cases, though, there could be a need to run a pod on every single node in the cluster. This is useful for use cases like log collection, ingress controllers, and storage services. DaemonSets implement this kind of pod scheduling.
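For illustration, a DaemonSet for a node-level log agent might be declared as follows. This is a simplified sketch; the name, labels, and image are placeholders, and a real log agent would need its own configuration:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector                 # illustrative name
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
      - name: collector
        image: busybox:1.36           # stand-in for a real log-agent image
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: varlog
          mountPath: /var/log         # reads the host's log directory
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

Because it is a DaemonSet, one copy of this pod is scheduled onto every eligible node rather than a fixed number of replicas.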


ReplicaSets

A ReplicaSet's purpose is to maintain a stable set of replica pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical pods. A ReplicaSet can also be understood as a grouping mechanism that lets Kubernetes maintain the number of instances that have been declared for a given pod. The definition of a ReplicaSet uses a selector, whose evaluation will result in identifying all pods that are associated with it.
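A minimal ReplicaSet sketch, with placeholder names and image, looks as follows; the selector identifies the pods the ReplicaSet owns, and the template describes the pods it creates to reach the declared count:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-replicaset            # illustrative name
spec:
  replicas: 3                         # desired number of identical pods
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example                  # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.25             # illustrative image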


Services

A Kubernetes service is a set of pods that work together, such as one tier of a multi-tier application. The set of pods that constitute a service is defined by a label selector. Kubernetes provides two modes of service discovery, using environment variables or using Kubernetes DNS. Service discovery assigns a stable IP address and DNS name to the service, and load balances traffic in a round-robin manner to network connections of that IP address among the pods matching the selector (even as failures cause the pods to move from machine to machine). By default a service is exposed inside a cluster (e.g., back-end pods might be grouped into a service, with requests from the front-end pods load-balanced among them), but a service can also be exposed outside a cluster (e.g., for clients to reach front-end pods).
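An illustrative service manifest, with placeholder names and ports, is sketched below; traffic reaching the service's stable cluster IP on port 80 is distributed among pods carrying the selected label:

apiVersion: v1
kind: Service
metadata:
  name: backend                       # illustrative service name; also becomes its DNS name
spec:
  selector:
    app: example                      # pods carrying this label receive the traffic
  ports:
  - port: 80                          # stable port exposed on the service's cluster IP
    targetPort: 8080                  # port the selected pods listen on
  type: ClusterIP                     # default: reachable only inside the cluster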


Volumes

By default, file systems in Kubernetes containers provide ephemeral storage: a restart of the pod wipes out any data in such containers, which makes this form of storage quite limiting for anything but trivial applications. A Kubernetes Volume provides persistent storage that exists for the lifetime of the pod itself. This storage can also be used as shared disk space for containers within the pod. Volumes are mounted at specific mount points within the container, which are defined by the pod configuration, and cannot mount onto other volumes or link to other volumes. The same volume can be mounted at different points in the file system tree by different containers.
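The following sketch (names, images, and paths are placeholders) shows one volume mounted at different mount points by two containers in the same pod:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-example
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                      # volume that lives as long as the pod
  containers:
  - name: producer
    image: busybox:1.36               # illustrative image
    command: ["sh", "-c", "echo hello > /out/data.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /out                 # first container sees the volume here
  - name: consumer
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /in                  # same volume, different mount point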


ConfigMaps and secrets

A common application challenge is deciding where to store and manage configuration information, some of which may contain sensitive data. Configuration data can be anything as fine-grained as individual properties or coarse-grained information like entire configuration files or JSON / XML documents. Kubernetes provides two closely related mechanisms to deal with this need: "configmaps" and "secrets", both of which allow for configuration changes to be made without requiring an application build. The data from configmaps and secrets will be made available to every single instance of the application to which these objects have been bound via the deployment. A secret and/or a configmap is sent to a node only if a pod on that node requires it, and Kubernetes keeps it in memory on that node. Once the pod that depends on the secret or configmap is deleted, the in-memory copy of all bound secrets and configmaps is deleted as well. The data is accessible to the pod in one of two ways: a) as environment variables (which will be created by Kubernetes when the pod is started) or b) available on the container file system that is visible only from within the pod. The data itself is stored on the master, a highly secured machine to which nobody should have login access. The biggest difference between a secret and a configmap is that the content of the data in a secret is base64 encoded. Recent versions of Kubernetes have introduced support for encryption to be used as well. Secrets are often used to store data like certificates, passwords, and SSH keys.
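The sketch below shows both consumption paths: a configmap key exposed as an environment variable and a secret mounted as files. Object names, keys, and the image are illustrative placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                    # illustrative name
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials               # illustrative name
type: Opaque
data:
  password: cGFzc3dvcmQ=              # base64-encoded value ("password")
---
apiVersion: v1
kind: Pod
metadata:
  name: configured-app
spec:
  containers:
  - name: app
    image: nginx:1.25                 # illustrative image
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
    volumeMounts:
    - name: creds
      mountPath: /etc/creds           # secret exposed as files visible only inside this pod
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: app-credentials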


StatefulSets

Scaling stateless applications is only a matter of adding more running pods. Stateful workloads are harder, because the state needs to be preserved if a pod is restarted. If the application is scaled up or down, the state may need to be redistributed. Databases are an example of stateful workloads. When run in high-availability mode, many databases come with the notion of a primary instance and secondary instances. In this case, the notion of ordering of instances is important. Other applications like Apache Kafka distribute the data amongst their brokers; hence, one broker is not the same as another. In this case, the notion of instance uniqueness is important. StatefulSets are controllers (see above) that enforce the properties of uniqueness and ordering amongst instances of a pod and can be used to run stateful applications.
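A simplified StatefulSet sketch is shown below; the names and image are placeholders, and a headless Service named "db" is assumed to exist. Each ordered pod (db-0, db-1, db-2) gets a stable network identity and its own persistent volume from the claim template:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                            # illustrative name
spec:
  serviceName: db                     # headless service providing stable per-pod DNS identities
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: database
        image: registry.example.com/database:1.0   # hypothetical database image
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:               # each ordered pod receives its own persistent volume claim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi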


Replication controllers and deployments

A ''ReplicaSet'' declares the number of instances of a pod that is needed, and a Replication Controller manages the system so that the number of healthy pods that are running matches the number of pods declared in the ReplicaSet (determined by evaluating its selector). Deployments are a higher level management mechanism for ReplicaSets. While the Replication Controller manages the scale of the ReplicaSet, Deployments will manage what happens to the ReplicaSet - whether an update has to be rolled out, or rolled back, etc. When deployments are scaled up or down, this results in the declaration of the ReplicaSet changing - and this change in declared state is managed by the Replication Controller.
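An illustrative Deployment sketch follows, with placeholder names and image; the Deployment declares the desired replica count and controls how updates are rolled out across the ReplicaSets it manages:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                           # illustrative name
spec:
  replicas: 3                         # declared state; the backing ReplicaSet keeps three healthy pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1               # roll updates out gradually across ReplicaSets
      maxSurge: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25             # changing this image triggers a new ReplicaSet and a rollout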


Labels and selectors

Kubernetes enables clients (users or internal components) to attach keys called "labels" to any API object in the system, such as pods and nodes. Correspondingly, "label selectors" are queries against labels that resolve to matching objects. When a service is defined, one can define the label selectors that will be used by the service router/load balancer to select the pod instances that the traffic will be routed to. Thus, simply changing the labels of the pods or changing the label selectors on the service can be used to control which pods get traffic and which don't, which can be used to support various deployment patterns like blue-green deployments or A/B testing. This capability to dynamically control how services utilize implementing resources provides a loose coupling within the infrastructure. For example, if an application's pods have labels for a system tier (with values such as frontend and backend) and a release_track (with values such as canary and production), then an operation on all backend and canary nodes can use a label selector such as:
tier=backend AND release_track=canary
Just like labels, field selectors also let one select Kubernetes resources. Unlike labels, the selection is based on the attribute values inherent to the resource being selected, rather than user-defined categorization. metadata.name and metadata.namespace are field selectors that will be present on all Kubernetes objects. Other selectors that can be used depend on the object/resource type.
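For illustration, the selector expression above can be written in manifest form. These are fragments of object specifications rather than complete manifests, and the label keys and values are the illustrative ones used above:

# Equality-based selector, as used in a Service spec:
selector:
  tier: backend
  release_track: canary

# Set-based selector, as used in Deployments and ReplicaSets, equivalent to the expression above:
selector:
  matchLabels:
    tier: backend
  matchExpressions:
  - key: release_track
    operator: In
    values: ["canary"]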


Add-ons

Add-ons operate just like any other application running within the cluster: they are implemented via pods and services, and are only different in that they implement features of the Kubernetes cluster. The pods may be managed by Deployments, ReplicationControllers, and so on. There are many add-ons, and the list is growing. Some of the more important are:

* DNS: All Kubernetes clusters should have cluster DNS; it is a mandatory feature. Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches.

* Web UI: This is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.

* Container Resource Monitoring: Providing a reliable application runtime, and being able to scale it up or down in response to workloads, means being able to continuously and effectively monitor workload performance. Container Resource Monitoring provides this capability by recording metrics about containers in a central database, and provides a UI for browsing that data. cAdvisor is a component on a worker node that provides a limited metric monitoring capability. There are full metrics pipelines as well, such as Prometheus, which can meet most monitoring needs.

* Container Cost Monitoring: Kubernetes cost monitoring applications allow breaking down costs by pods, nodes, namespaces, and labels. Three crucial metrics to track are daily cloud spend, cost per provisioned and requested CPU, and historical cost allocation.

* Cluster-level logging: Logs should have a separate storage and lifecycle independent of nodes, pods, or containers. Otherwise, node or pod failures can cause loss of event data. The ability to do this is called cluster-level logging, and such mechanisms are responsible for saving container logs to a central log store with a search/browsing interface. Kubernetes provides no native storage for log data, but one can integrate many existing logging solutions into the Kubernetes cluster.


Storage

Containers emerged as a way to make software portable. The container contains all the packages needed to run a service. The provided file system makes containers extremely portable and easy to use in development. A container can be moved from development to test or production with no or relatively few configuration changes. Historically Kubernetes was suitable only for stateless services. However, many applications have a database, which requires persistence, and this led to the creation of persistent storage for Kubernetes. Implementing persistent storage for containers is one of the top challenges of Kubernetes administrators, DevOps and cloud engineers. Containers may be ephemeral, but more and more of their data is not, so one needs to ensure the data's survival in case of container termination or hardware failure. When deploying containers with Kubernetes or containerized applications, companies often realize that they need persistent storage. They need to provide fast and reliable storage for databases, root images and other data used by the containers.

Container Attached Storage is a type of data storage that emerged as Kubernetes gained prominence. The Container Attached Storage approach or pattern relies on Kubernetes itself for certain capabilities while delivering primarily block, file, and object interfaces to workloads running on Kubernetes. Common attributes of Container Attached Storage include the use of extensions to Kubernetes, such as custom resource definitions, and the use of Kubernetes itself for functions that otherwise would be separately developed and deployed for storage or data management. Examples of functionality delivered by custom resource definitions or by Kubernetes itself include retry logic, delivered by Kubernetes itself, and the creation and maintenance of an inventory of available storage media and volumes, typically delivered via a custom resource definition. In addition to its landscape, the Cloud Native Computing Foundation (CNCF) has published other information about Kubernetes persistent storage, including a blog helping to define the container attached storage pattern, which can be thought of as one that uses Kubernetes itself as a component of the storage system or service. More information about the relative popularity of these and other approaches can also be found in the CNCF's landscape survey, which showed that OpenEBS from MayaData and Rook, a storage orchestration project, were the two projects most likely to be in evaluation as of the fall of 2019.


Container Storage Interface (CSI)

In Kubernetes version 1.9, the initial Alpha release of Container Storage Interface (CSI) was introduced. Previously, storage volume plug-ins were included in the Kubernetes distribution. By creating a standardized CSI, the code required to interface with external storage systems was separated from the core Kubernetes code base. Just one year later, the CSI feature was made Generally Available (GA) in Kubernetes.
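As a sketch of how external storage is consumed through CSI, a StorageClass references a CSI driver by its provisioner name, and workloads request storage through a PersistentVolumeClaim. The class name, driver name, and sizes below are hypothetical placeholders:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                      # illustrative class name
provisioner: csi.example.com          # hypothetical CSI driver name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                    # illustrative claim name
spec:
  storageClassName: fast-ssd
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi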


API

A key component of the Kubernetes control plane is the API Server, which exposes an HTTP API that can be invoked by other parts of the cluster as well as by end users and external components. This API is a REST API and is declarative in nature. There are two kinds of API resources. Most of the API resources in the Kubernetes API are objects. These represent a concrete instance of a concept on the cluster, like a pod or namespace. A small number of API resource types are "virtual". These represent operations rather than objects, such as a permission check, using the "subjectaccessreviews" resource. API resources that correspond to objects are represented in the cluster with unique identifiers for the objects. Virtual resources do not have unique identifiers.
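For illustration, a permission check submitted as a virtual resource might look like the sketch below; the user name, namespace, and resource attributes are placeholders:

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane                          # illustrative user being checked
  resourceAttributes:
    namespace: dev
    verb: get
    group: apps
    resource: deployments

Submitting this object asks the API server a question ("may this user get deployments in this namespace?") rather than creating a persisted object in the cluster.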


Operators

Kubernetes can be extended using Custom Resources. These API resources represent objects that are not part of the standard Kubernetes product. These resources can appear and disappear in a running cluster through dynamic registration. Cluster administrators can update Custom Resources independently of the cluster. Custom Controllers are another extension mechanism. These interact with Custom Resources, and allow for a true declarative API that allows for the lifecycle management of Custom Resources that is aligned with the way that Kubernetes itself is designed. The combination of Custom Resources and Custom Controllers is often referred to as a (Kubernetes) Operator. The key use case for operators is to capture the aim of a human operator who is managing a service or set of services and to implement it using automation, with a declarative API supporting this automation. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems. Examples of problems solved by Operators include taking and restoring backups of that application's state, and handling upgrades of the application code alongside related changes such as database schemas or extra configuration settings.
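A Custom Resource is registered via a CustomResourceDefinition. The sketch below defines a hypothetical Backup resource; the group, kind, and schema fields are placeholders that a custom controller (the operator) would then watch and act on:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com           # must be <plural>.<group>
spec:
  group: example.com                  # hypothetical API group
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string          # e.g. a cron expression the custom controller acts on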


Cluster API

The same API design principles have been used to define an API to programmatically create, configure, and manage Kubernetes clusters. This is called the Cluster API. A key concept embodied in the API is using Infrastructure as Software, or the notion that the Kubernetes cluster infrastructure is itself a resource / object that can be managed just like any other Kubernetes resources. Similarly, machines that make up the cluster are also treated as a Kubernetes resource. The API has two pieces - the core API, and a provider implementation. The provider implementation consists of cloud-provider specific functions that let Kubernetes provide the cluster API in a fashion that is well-integrated with the cloud-provider's services and resources.
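As a heavily simplified sketch based on the Cluster API project's published examples (the exact API group, version, and provider kinds vary by release and infrastructure provider), a cluster can itself be declared as a resource:

apiVersion: cluster.x-k8s.io/v1beta1  # Cluster API project's API group
kind: Cluster
metadata:
  name: example-cluster               # illustrative name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: example-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster               # illustrative provider-specific kind
    name: example-cluster

The infrastructureRef points at the provider implementation mentioned above, which supplies the cloud-specific machinery behind the generic Cluster object.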


Uses

Kubernetes is commonly used as a way to host a microservice-based implementation, because it and its associated ecosystem of tools provide all the capabilities needed to address key concerns of any microservice architecture. It is available in three forms: open source, commercial, and managed. Open source distributions include the original Kubernetes, Amazon EKS-D, Red Hat OpenShift, VMware Tanzu, Mirantis Kubernetes Engine, and D2iQ Kubernetes Platform. Managed offerings include GKE, Oracle Container Engine for Kubernetes, Amazon Elastic Kubernetes Service, IBM Kubernetes Service, and Platform9 Managed Kubernetes.


Distributions

Various vendors offer Kubernetes-based platforms or infrastructure as a service (IaaS) that deploy Kubernetes. These include:
* Alibaba Cloud ACK (Alibaba Cloud Container Service for Kubernetes)
* Amazon EKS (Elastic Kubernetes Service)
* Canonical MicroK8s and Charmed Kubernetes
* DigitalOcean managed Kubernetes Service
* Google GKE (Google Kubernetes Engine)
* IBM Cloud Kubernetes Services
* Microsoft AKS (Azure Kubernetes Services)
* Mirantis k0s
* Oracle Container Engine for Kubernetes
* Red Hat OpenShift
* SUSE Rancher Kubernetes Engine (RKE)
* VMware Tanzu


Release timeline


Support windows

The list below summarises the period for which each release is or was supported:
* 1.26.x: released 9 December 2022, supported until 24 February 2024 (latest stable version)
* 1.25.x: released 23 August 2022, supported until 27 October 2023
* 1.24.x: released 3 May 2022, supported until 28 July 2023
* 1.23.x: released 7 December 2021, supported until 28 February 2023
* 1.22.x: released 4 August 2021, supported until 28 October 2022 (out of support)
* 1.21.x: released 8 April 2021, supported until 28 June 2022 (out of support)
* 1.20.x: released 8 December 2020, supported until 28 February 2022 (out of support)
* 1.19.x: released 26 August 2020, supported until 28 October 2021 (out of support)
* 1.18.x: released 25 March 2020, supported until 30 April 2021 (out of support)
* 1.17.x: released 9 December 2019, supported until 30 January 2021 (out of support)
* 1.16.x: released 18 September 2019, supported until 25 August 2020 (out of support)
* 1.15.x: released 19 June 2019, supported until 23 March 2020 (out of support)
* 1.14.x: released 25 March 2019, supported until 9 December 2019 (out of support)
* 1.13.x: released 3 December 2018, supported until 18 September 2019 (out of support)


See also

*List of cluster management software
*Open Service Mesh
*OpenShift
*Docker (software)


References


External links
