Control Plane Kubernetes
7/25/2023

This article requires some understanding of Kubernetes inner workings. If you are not familiar with the Kubernetes building blocks, we recommend starting with our article devoted to explaining the Kubernetes architecture.

A hosted Kubernetes service refers to a public cloud service offered by vendors such as Google, Amazon Web Services (AWS), and Microsoft Azure, intended to avoid the upfront capital investment required to deploy a Kubernetes cluster in a data center and to reduce part of its administrative overhead.

Public cloud providers let customers choose between outsourcing the management of the Kubernetes control plane alone or outsourcing the control plane plus the administration of the worker nodes. The control plane contains the orchestration logic, while the worker nodes host the application workloads. When a service provider takes on both responsibilities, customers only provision the Kubernetes pods (the smallest deployable unit of Kubernetes, comprising one or more containers) and pay for the pods' usage. In this scenario, the cloud provider manages all underlying Kubernetes cluster resources, such as nodes and networks, and related functionality, such as high availability and autoscaling.

This article explains the differences between hosted container services and compares the offerings from the top three public cloud service providers.

Soon after Google donated the Kubernetes project to the Cloud Native Computing Foundation (CNCF) in 2014, Amazon Web Services (AWS) and Google (later joined by Microsoft Azure) began offering hosted container services that grew over time in functionality and convenience.

AWS has been leading the market in container services despite Google having created the Kubernetes technology. The diagram below shows the introduction timeline of AWS and Google Cloud Engine (GCE) related services.

AWS launched Elastic Container Service (ECS) in 2015 (not based on Kubernetes) and Elastic Kubernetes Service (EKS) in 2018. AWS was also the first vendor to introduce a "serverless" option (known as AWS Fargate) that outsources the worker nodes in addition to the control plane. Fargate is a "serverless" container service because the provider, not the customer, manages the virtual machines forming the cluster nodes. Google Autopilot now offers an almost identical service, despite being positioned in the market more as a "managed" hosted service than a "serverless" cloud service, with some additional controls over workload provisioning covered later in this article.

The timeline diagram presented earlier in this article captures the introduction sequence of AWS Lambda, Google Functions, and Google Cloud Run. AWS Lambda, launched in 2014, was the first cloud offering to start the serverless paradigm, also known as Function as a Service (FaaS), whereby developers provide the software code without worrying about any of the infrastructure components required to run it. A key advantage of FaaS is that cloud providers charge for usage only while a function actively runs (and charge no fees while it is idle).

Google Cloud Run, launched in 2019, blurred the lines between a hosted container service and FaaS. Cloud Run offers serverless functionality similar to AWS Lambda but exposes the container used to provision the run-time environment. This aspect is where the lines blur between FaaS and a container service (more on this later in the article). Google has blurred the AWS boundaries with Cloud Run and Autopilot.

Hosted Control Plane

As part of a hosted control plane offering, and using AWS as an example, the service provider operates, scales, and upgrades the software running the control plane without any downtime so customers can focus on the worker nodes that host the application workloads. For security reasons and administrative control, the control plane and the worker nodes are separated into two Virtual Private Clouds (VPCs) connected by Elastic Network Interfaces (ENIs) that act as configurable gatekeepers allowing access to the Kubernetes APIs running on the nodes.

Users can use a command-line interface (CLI) or a console hosted in the control plane to launch Kubernetes pods. Users are also responsible for adding and removing worker nodes, which they can automate via the Kubernetes cluster autoscaler and AWS Auto Scaling groups (ASGs).
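To make the unit of provisioning concrete: on a fully hosted offering, a pod manifest like the sketch below is essentially all the customer specifies, while the provider supplies the node, network, and scaling behavior. The names and image here are hypothetical placeholders, not taken from any specific offering.

```yaml
# Minimal pod: one container. On a serverless container service
# (Fargate, Autopilot), the provider places this on a node it manages.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25     # any container image
      ports:
        - containerPort: 80
```

A manifest like this would typically be submitted through the CLI (for example, `kubectl apply -f pod.yaml`) or the provider's console.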
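To illustrate the FaaS model described above, here is a minimal AWS Lambda-style handler in Python: Lambda invokes a handler function with an event payload and a context object, and the provider bills only for the time the function executes. The event field `name` is a hypothetical payload key chosen for this sketch.

```python
import json

def handler(event, context):
    """Minimal Lambda-style entry point.

    The provider provisions the run-time environment, invokes this
    function per request, and charges only while it runs.
    """
    name = event.get("name", "world")  # 'name' is a hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The developer ships only this function; everything below it (servers, operating system, scaling) is the provider's concern, which is exactly the trade-off the article describes.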
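The Cloud Run side of the blurred line can be sketched the same way. Because Cloud Run exposes the container, the contract is simply: the platform injects a `PORT` environment variable and the container must serve HTTP on it. A minimal standard-library sketch, with hypothetical names:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def listen_port(env=None):
    # Cloud Run injects the port to serve on via the PORT env var;
    # fall back to 8080 when running the container locally.
    env = os.environ if env is None else env
    return int(env.get("PORT", "8080"))

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a container\n")

def serve():
    # Container entry point: serve until the platform stops the instance.
    HTTPServer(("", listen_port()), Hello).serve_forever()
```

The same image runs unchanged on any container service; what Cloud Run adds is the FaaS-like scale-to-zero and pay-per-use billing on top, which is why the article treats it as a hybrid of the two models.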