Cluster API and Declarative Kubernetes Management

Although Kubernetes continues to be today’s dominant container orchestration tool, running and managing Kubernetes is no easy task. Setting up and maintaining a Kubernetes cluster alone is complicated, and inconsistencies between various Kubernetes providers often compound the problem. This report introduces Cluster API, an open source CNCF project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. In this report, Spectro Cloud cofounders Tenry Fu and Saad Malik evaluate Cluster API’s role as the foundation for holistic modern container management. IT operations and DevOps teams will explore the evolution of Kubernetes management approaches and learn how Cluster API can drive consistency and improve operational efficiency.

In this report, you’ll learn:

- The basics of Kubernetes architecture and how declarative management works

- What makes Kubernetes adoption challenging and how to approach those challenges

- How Cluster API brings declarative management to Kubernetes infrastructure

- The benefits and use cases of Cluster API (and a few limitations)

- What’s coming with Cluster API and Kubernetes

Modern software architecture relies on cloud native, containerized, distributed applications on cloud, virtualized, bare metal, and even edge infrastructure. Containerized applications use resources more efficiently, run in a broad variety of environments, and make it easy to scale up and down dynamically. As applications scale and become more complex, automation and orchestration become ever more important.

Kubernetes is the dominant container orchestration tool and continues to strengthen its hold in the enterprise as companies deploy increasing numbers of clusters across a variety of environments. Kubernetes operationalizes the management of containerized applications, bringing consistency across applications and environments—once the cluster is set up. But setting up and maintaining a Kubernetes cluster is itself a complicated proposition, especially at scale, and the challenges can differ from one environment to another.

The Cloud Native Computing Foundation’s Kubernetes Cluster Lifecycle special interest group (SIG) created Cluster API to solve the complex problems of Kubernetes cluster lifecycle management across environments. Cluster API takes its cue from Kubernetes itself, providing declarative management (also known as “desired-state management”) capabilities via a management cluster that oversees the operation of worker clusters. Cluster API controllers manage Kubernetes infrastructure as objects in the Kubernetes API.

As Kubernetes continues to dominate, organizations will have an increasing need to manage the growing complexity of larger numbers of deployments, often spanning multiple infrastructure environments. The following chapters outline the challenges of managing Kubernetes and how Cluster API can help.

In the cloud native model, applications are broken into smaller services that are packaged in containers: isolated environments that include the dependencies and configuration files the services need to run. Containers are the building blocks of the cloud native approach, enabling scalable applications in diverse environments, including public, private, and hybrid clouds, as well as bare metal and edge locations.

Beyond the significant advantage of empowering application development teams to work in parallel on different services without having to update the entirety of an application, the cloud native model offers a number of advantages over monolithic architecture from an infrastructure perspective. Containerized applications use resources more efficiently than virtual machines (VMs), can run in a broader variety of environments, and can be scaled more easily. These advantages have driven wide adoption of microservice-based architecture, containers, and the predominant container orchestration platform: Kubernetes.

Kubernetes facilitates the management of these distributed applications, allowing you to scale dynamically both horizontally and vertically as needed. Containers bring consistency of management to different applications, simplifying operational and lifecycle tasks. By orchestrating containers, Kubernetes can operationalize the management of applications across an entire environment, controlling and balancing resource consumption, providing automatic failover, and simplifying deployment.

Although Kubernetes provides a foundation for resilient and flexible cloud native application development, it introduces its own complexities to the organization. Running and managing Kubernetes at scale is no easy task, and the difficulties are compounded by the inconsistencies between different providers and environments.

The control plane is the main access point that lets administrators and others manage the cluster. The control plane also stores state and configuration data for the cluster, tells worker nodes when to create and destroy containers, and routes traffic in the cluster.

The control plane consists mainly of the following components:

API Server

The access point through which the control plane, worker agents (kubelets), and users communicate with the cluster.

Controller manager

A service that manages the cluster using the API server and controllers, which bring the state of the cluster in line with the desired state.

Figure 1-1 shows the basic components of a Kubernetes cluster.

For high availability, the control plane is often replicated by maintaining multiple copies of the essential services and data required to run the cluster (mainly the API server and etcd).

Figure 1-1 The components of a Kubernetes cluster

You manage every aspect of a Kubernetes cluster’s configuration declaratively (such as deployments, pods, StatefulSets, PersistentVolumeClaims, etc.), meaning that you declare the desired state of each component, leaving Kubernetes to ensure that reality matches your specification. Kubernetes maintains a controller for each object type to bring the state of every object in the cluster in line with the declared state. For example, if you declare a certain number of pods, Kubernetes ensures that when a node fails, its pods are moved to a healthy node.
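To make this concrete, here is a minimal sketch of a declarative manifest; the name, image, and replica count are illustrative rather than taken from this report. It declares that three replicas of a web server should exist, and the Deployment controller continuously reconciles the cluster toward that state.

    # Hypothetical Deployment manifest: the declared desired state is three
    # replicas of a web pod; the Deployment controller reconciles toward it.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                  # desired state: three pods
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25    # illustrative image and tag
              ports:
                - containerPort: 80

Applying this manifest (for example with kubectl apply -f) hands responsibility to the controller: if a node fails and a pod disappears, the controller schedules a replacement elsewhere so the observed state returns to three replicas.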

Kubernetes Objects and Custom Resource Definitions

Kubernetes represents the cluster as objects. You create an object declaratively by writing a manifest file—a YAML document that describes the intended state of the object—and running a command to create the object from the file.

A controller makes sure the object exists and matches the state declared in the manifest. A controller is essentially a control loop, similar to a voltage regulator or thermostat, that knows how to maintain the state of an object within specified parameters.

A Kubernetes resource is an endpoint in the Kubernetes API that stores a certain type of object. You can create a custom resource using a custom resource definition (CRD) to represent a new kind of object. In fact, some core Kubernetes resources now use CRDs because they make it easier to extend and update the capabilities of the objects.
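As a hedged illustration of how a CRD extends the API, the following sketch registers a hypothetical Backup resource; the group, kind, and fields are invented for this example and are not part of Kubernetes or Cluster API.

    # Hypothetical CRD: registers a new "Backup" object type with the API server.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: backups.example.com        # must be <plural>.<group>
    spec:
      group: example.com
      scope: Namespaced
      names:
        kind: Backup
        plural: backups
        singular: backup
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    schedule:          # illustrative fields
                      type: string
                    retentionDays:
                      type: integer

Once the CRD is applied, Backup objects can be created from manifests like any built-in resource, and a custom controller can reconcile them; this is the same mechanism Cluster API uses to represent clusters and machines.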

The Kubernetes Adoption Journey

Many organizations follow a well-traveled path in their adoption of Kubernetes, starting with experimentation before they decide whether to rely on it. The journey nearly always leads from a single cluster to the complexity of managing many clusters in different environments. Figure 1-2 shows a typical journey to Kubernetes adoption, beginning with experimentation, moving to productization, and finally to developing a managed platform.

Figure 1-2 The journey to Kubernetes adoption

Experimenting with Kubernetes

In the experimentation phase, developers drive investigation into the capabilities of Kubernetes by containerizing a few projects. The open nature of Kubernetes makes it easy to manage on a small scale, using the command-line interface and writing scripts to make changes to the cluster and to integrate other open source components. At this stage, the organization often has not yet engaged with security, upgrades, availability, and other concerns that become important later. As their needs change, the team makes configuration changes and integrates components gradually, often without documenting the evolution of the cluster. When the time comes to support a broader array of environments or expand access to more teams, it becomes apparent that the cluster is tailor-made for the small set of use cases it has been serving thus far.

Productizing Kubernetes for the Organization

As the organization begins to recognize the value of Kubernetes, teams begin to investigate how to scale general-purpose clusters that can serve the needs of the entire company. Departments such as service reliability and IT operations begin looking for ways to make Kubernetes secure, supportable, and manageable. These teams begin advocating for prescriptive, off-the-shelf solutions that trade flexibility for reliability. As the organization prepares to scale Kubernetes to fit its needs, it might find that these solutions are rigid and tend to silo each cluster, making cross-team work more difficult.

Developing Kubernetes as a Managed Platform

To make Kubernetes work for the business needs of the organization, it must be possible to easily deploy, maintain, and scale clusters that are highly available and can handle applications and workloads that dynamically meet changing demands. The goal of the organization must therefore be to make Kubernetes operate as a modern platform at scale, providing resources to multiple teams.

At this point, the investigation often focuses on commercial Kubernetes platforms that can solve the organization’s problems out of the box. The team likely has enough experience to look for features such as flexibility, repeatable deployment, and the ability to manage multiple clusters across different environments, including diverse cloud platforms, virtualized or bare metal data centers, and increasingly, edge locations. IT operations and platform engineering teams will especially be looking at securing cluster access, configuration management, and Day 2 concerns like scaling, upgrades, quota control, logging, monitoring, continuity, and others.

As the platform takes shape and evolves, the tendency is to break larger multitenant clusters into smaller special-purpose clusters. This approach allows more flexible management, more efficient use of resources, and a more tailored experience for the teams using the clusters within the organization. From a security standpoint, smaller clusters help make defense-in-depth policies easier to implement, reducing the “blast radius” in the case of a breach. Operating multiple clusters also makes it possible to deploy apps wherever needed, whether in the cloud or on premises, or even at the edge.

The Challenges of Kubernetes

Every environment deals with infrastructure differently, meaning different challenges for every type of Kubernetes deployment. Provisioning, upgrading, and maintaining the infrastructure to support the control plane can be difficult and time-consuming. Once the cluster is up, integrating basic components like storage, networking, and security presents significant hurdles. Finally, in a deployment of multiple clusters across a variety of environments, each cluster must be managed individually. There is no native tool in Kubernetes itself for managing the clusters as a group, and differences between environments introduce operational nuances to each cluster.

Below the layer of Kubernetes are the resources that the cluster needs, mainly storage, networking, and the physical or virtual machines where the cluster runs. This means that running a cluster involves two separate lifecycles that must be managed concurrently: Kubernetes and the hardware and operating system supporting it. Because Kubernetes is quickly evolving, these two layers must be kept in sync. Many problems with node or cluster availability come from incompatibilities between the versions of the operating system and the Kubernetes components. These problems become exponentially more difficult in multicluster deployments because the problem of keeping Kubernetes and the operating system in sync is multiplied by the number of platforms, each with its own complexities, then multiplied again by any specific versions or flavors required by the teams using the clusters.

Initially, Kubernetes didn’t include any tools for managing the cluster infrastructure. Bringing up machines and installing Kubernetes were manual procedures. Creating clusters outside of managed Kubernetes environments required a lot of effort and custom tooling. The Kubernetes community needed a way to provide common tools to bootstrap a cluster.

To meet this challenge, the Kubernetes community began developing tools to simplify provisioning and maintaining clusters. Here are some examples that were developed over the last few years:
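kubeadm, which later sections describe as the tool Cluster API builds on, is one such bootstrapping tool. As a minimal, hedged sketch (the endpoint, version, and subnet are illustrative), a control plane can be initialized from a configuration file passed to kubeadm init --config:

    # Illustrative kubeadm configuration for bootstrapping a control plane node.
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.28.0
    controlPlaneEndpoint: "10.0.0.10:6443"   # load-balanced API server endpoint
    networking:
      podSubnet: "192.168.0.0/16"

A configuration file like this standardizes how the Kubernetes components are installed on a machine, but it says nothing about creating the machine itself.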

None of these tools, however, provided a way to manage cluster infrastructure declaratively using a cohesive API—the way that Kubernetes manages nodes, pods, and containers.

Cluster API is a project created by the Kubernetes Cluster Lifecycle special interest group (SIG) to provide a consistent, modular platform for declarative Kubernetes cluster management. Leveraging kubeadm, Cluster API uses Kubernetes-style APIs to create, configure, and manage Kubernetes clusters and their infrastructure for a variety of deployment environments and providers.

Goals of Cluster API

The Kubernetes Cluster Lifecycle SIG created Cluster API to make cluster lifecycle management easier. Although Kubernetes itself has APIs for orchestrating containers regardless of the environment or provider, it doesn’t provide a consistent way to create new machines on arbitrary infrastructure. This means cluster lifecycle has to be handled uniquely depending on the environment.

The primary charter of the Kubernetes Cluster Lifecycle SIG is to make it easier to create, manage, upgrade, and retire Kubernetes clusters. The group decided to develop Cluster API as a framework for managing Kubernetes infrastructure across environments, with several goals in mind:

Declarative cluster lifecycle management

Cluster API’s declarative approach for managing Kubernetes cluster lifecycle makes it easy to integrate with GitOps, a declarative operations framework that applies DevOps application development practices to infrastructure automation.

Infrastructure abstraction

Cluster API provides a consistent way to provision and maintain cluster infrastructure across different environments, both in the cloud and on premises. This means managing not just compute and storage but also networking and security, including implementing security best practices such as subnets and bastion hosts.

Integration with existing components

Cluster API is designed to work with existing components that rely on kubeadm, cloud-init, and other tools to initialize a cluster rather than reinventing and reimplementing what already works. Even the Cluster API approach to managing cluster infrastructure is familiar, as it’s designed to resemble the way developers manage workloads on Kubernetes.
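As a hedged sketch of what this integration looks like in practice, the following manifest describes a control plane whose machines are initialized with kubeadm; the resource kinds are real Cluster API types, but the names, version, and the Docker infrastructure provider are assumptions chosen for illustration.

    # Hypothetical KubeadmControlPlane: a declaratively managed control plane
    # whose machines are bootstrapped with kubeadm.
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    metadata:
      name: demo-control-plane
    spec:
      replicas: 3                          # desired number of control plane machines
      version: v1.28.0                     # Kubernetes version to install
      machineTemplate:
        infrastructureRef:                 # provider-specific machine template
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: DockerMachineTemplate
          name: demo-control-plane
      kubeadmConfigSpec:
        clusterConfiguration: {}           # kubeadm settings go here when needed

Scaling the control plane or upgrading its Kubernetes version then becomes a matter of editing replicas or version and reapplying the manifest.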

The overall goal of Cluster API is to provide a centralized, consistent set of tools that make it possible to manage multiple Kubernetes clusters in different environments without having to worry about the underlying infrastructure and without having to build large assortments of custom tools.

Cluster API Concepts

Cluster API uses modular, interchangeable components as the basis for a complete cluster infrastructure management platform that automates difficult cluster lifecycle management tasks such as creating, scaling, repairing, and upgrading a cluster. In essence, Cluster API is a modular abstraction layer that makes it possible to treat a variety of objects on different infrastructure substrates consistently. The core components of Cluster API remain the same from one environment to another, while the modular parts of Cluster API adapt to each environment:

- CRDs model the VMs, physical servers, and other cluster components.

- Providers implement the correct capabilities and services for different infrastructure environments.

Cluster API manages these resources declaratively, meaning that instead of specifying how to create and manage the infrastructure, you need only to define the desired state of the cluster. Instead of a set of commands, the code becomes a repeatable specification that you can reuse for multiple deployments.

Figure 2-1 shows how providers implement the modular approach to Cluster API architecture, making it possible to tailor cluster lifecycle management to any infrastructure.

Figure 2-1 Cluster API modular architecture
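A hedged sketch of this modularity, assuming the Docker infrastructure provider (often used for local testing): the core Cluster object is provider-agnostic and simply points at a provider-specific infrastructure object through infrastructureRef. Swapping providers (for example, to AWS or vSphere) means swapping the referenced infrastructure objects while the core resources stay the same.

    # Core, provider-agnostic cluster definition...
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: demo
    spec:
      clusterNetwork:
        pods:
          cidrBlocks: ["192.168.0.0/16"]   # illustrative pod CIDR
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlane
        name: demo-control-plane
      infrastructureRef:                   # handled by the infrastructure provider
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerCluster
        name: demo
    ---
    # ...and the provider-specific counterpart it references.
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    metadata:
      name: demo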

Custom Resource Definitions and Controllers

Just as Kubernetes provides abstractions for objects such as nodes, namespaces, and pods, Cluster API uses Kubernetes CRDs to represent the infrastructure and configuration that support a Kubernetes cluster. Each CRD is a declarative specification for a component of infrastructure. Cluster API introduces several CRDs for managing cluster infrastructure, including:

- Cluster, which represents a workload cluster and its overall configuration

- Machine, which represents a single host that serves as a Kubernetes node

- MachineSet, which maintains a group of identical Machines (similar to a ReplicaSet)

- MachineDeployment, which manages rolling updates across MachineSets (similar to a Deployment)

- MachineHealthCheck, which defines when a Machine is unhealthy and should be remediated

When you specify the characteristics of a cluster, control plane, or machine, you do so by creating a YAML file called a manifest, which follows the schema defined in the corresponding CRD. For every parameter defined in the CRD, the YAML file provides a value that tells Cluster API how to create the custom resource.
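For example, here is a hedged sketch of a manifest for a pool of worker machines (a MachineDeployment); the cluster name, version, and referenced templates are illustrative, and real manifests are typically generated for a specific provider rather than written from scratch.

    # Hypothetical MachineDeployment: declares a pool of three worker machines.
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      name: demo-workers
    spec:
      clusterName: demo
      replicas: 3                          # desired number of worker machines
      selector:
        matchLabels: {}                    # defaulted by Cluster API
      template:
        spec:
          clusterName: demo
          version: v1.28.0
          bootstrap:
            configRef:                     # how each machine joins the cluster (kubeadm)
              apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
              kind: KubeadmConfigTemplate
              name: demo-workers
          infrastructureRef:               # provider-specific machine template
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: DockerMachineTemplate
            name: demo-workers

Each field corresponds to a parameter in the MachineDeployment CRD’s schema; Cluster API’s controllers read the manifest and create, replace, or scale the underlying machines to match it.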
