
IT Training: Extending Kubernetes


DOCUMENT INFORMATION

Basic information

Format
Number of pages: 47
File size: 1.58 MB

Content

Extending Kubernetes
Making Use of Work Queues, Reconciliation Loops, Controllers, and Operators

Gianluca Arbezzano

REPORT

Extending Kubernetes
by Gianluca Arbezzano

Copyright © 2019 O'Reilly Media, Inc. All rights reserved.

Printed in the United States of America.

Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Acquisitions Editor: John Devins
Developmental Editor: Virginia Wilson
Production Editor: Nan Barber
Copyeditor: Octal Publishing, LLC
Proofreader: Nan Barber
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Rebecca Demarest

August 2019: First Edition

Revision History for the First Edition
2019-07-31: First Release

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Extending Kubernetes, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

The views expressed in this work are those of the author, and do not represent the publisher's views. While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

978-1-492-05739-0
[LSI]

Table of Contents

Preface

1. Kubernetes Extensibility
   Kubernetes Is a Framework
   Unified User Interface for Operations
   Conclusion

2. Client Side
   kubectl Plug-Ins
   API and SDK
   Go Client Library
   Conclusion

3. Server-Side Extensions and Primitives
   Informers
   Work Queues
   Custom Resource Definitions
   Pain Point
   Conclusion

Preface

"Kubernetes is too complex" is a phrase we have all heard at least once, and it is partly true. Kubernetes has a solid architecture because it is the foundation for deploying and managing the distributed system life cycle. The question I always raise is: "How do you justify this complexity?" For example, are you writing and applying a YAML specification via kubectl? Then you probably do not need a lot of the features that make Kubernetes so complex.
Amazon Web Services (AWS) has the same problem. A lot of developers keep saying that it is an expensive service, but this is not true: AWS hides a huge amount of complexity behind its bill, and you need to have the right people capable of justifying the cost. When I see how much control you have over your infrastructure with Amazon Virtual Private Cloud (Amazon VPC), subnets, internet gateways, and routers, I think the cost is justified. This is even truer when I think about how I am building a private network via API and not physically buying and cabling it. The APIs are the way you integrate the complexity of AWS into something that makes your product better.

Using the Kubernetes API to implement and automate flows or to have more visibility into what is happening in your system is the way I justify the cost of Kubernetes, and in this report I explain how I am doing it.

This report is not about getting started with Kubernetes; it won't show you how to use kubectl or how to install it via kubeadm. If you are looking for that kind of content, I recommend reading Kubernetes: Up and Running (O'Reilly) as a great way to start your journey with Kubernetes. But if you are a software engineer with a good knowledge of how Kubernetes works, I'd like to teach you how to write small applications integrated with Kubernetes so that you can do the following:

• Automate node labeling based on tags from Amazon Elastic Compute Cloud (Amazon EC2)
• Integrate external resources such as DNS records as a Kubernetes Custom Resource Definition (CRD)
• Integrate everything (in general) that has an API managed by Kubernetes

This report guides you in how to use the Kubernetes API served by the API Server to build your integrations.

CHAPTER 1
Kubernetes Extensibility

This first chapter highlights why I think extending Kubernetes via its API is the right way to utilize all of its capabilities. It gives you an idea of how you get to include your flow "for free" as part of the Kubernetes architecture—things like authentication and authorization, for example. This chapter demonstrates that there is much more that you can do besides writing YAML specifications to apply via the command line.

Kubernetes Is a Framework

When I think about Kubernetes, I do not see it as an end application. For example, it does not include MySQL, Jaeger, or other developer tools or databases. To me, Kubernetes provides solid primitives that I can integrate into my application, such as the following:

kubectl run nginx --image=nginx --port=80

This is the first command everyone runs when they are learning Kubernetes for the first time. This action creates a new pod with an NGINX container image that exposes port 80.
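As a point of reference, the imperative command above is only sugar for a resource specification. The following YAML is my own minimal sketch, not a listing from the report, but applying it with kubectl apply -f produces an equivalent pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80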
But what happens behind the scenes? Figure 1-1 illustrates this from a very high perspective. kubectl is a very powerful and flexible client for interacting with Kubernetes. Because it is complex, it cannot have just one piece of code. As you can see, there are many blocks. These are just the most important ones for this example, but even so, there are many of them.

Figure 1-1. A simplified Kubernetes architecture, with its more important services

The API Server works like a gateway that handles the API request. Based on the request, it reaches out to other components like etcd, kubelet, and the container runtime interface. Let's take a look at them very quickly:

• etcd is a database developed by CoreOS and is now part of the Cloud Native Computing Foundation (CNCF). It is used to store all the stateful information required by Kubernetes.
• kubelet is in the node scope. It runs on every server or virtual machine (VM), and it takes actions where it is running.
• kubelet doesn't "create a container"; that is not within its scope. To execute actions on a container, it communicates with what we call the Container Runtime Interface (CRI). Docker is the most popular of them.

If you are wondering how many components comprise Kubernetes, the answer is, a lot! And this is why it is complex. The goal for Kubernetes is to expose a layer of abstraction to deploy and manage the application life cycle. All of the components you see on this page are more or less easy to swap out to take advantage of a concrete service provided by your infrastructure or software provider.

Today, all cloud providers offer a Kubernetes-managed service, but each of them runs it in a different way and with different integrations. For example, as of Kubernetes 1.13, CoreDNS is the default DNS service, but if you are running on AWS, you can rely on Route53. This means that you won't manage the reliability of CoreDNS if you are happy to use a managed version of it. And I suspect that Amazon Elastic Container Service for Kubernetes (Amazon EKS) uses Route53.

This is why, in such a short amount of time, all of the cloud providers delivered a "stable enough" Kubernetes service: they were able to replace core Kubernetes components with those they developed or ran. You can do the same. In the beginning, Docker was almost the unique container runtime engine, but now there are alternatives, and as soon as they implement the CRI, you can use whichever you think is better: CRI-O or containerd-CRI, for example.

As I mentioned, this report focuses on the Kubernetes API—the one provided by the API Server—but there are APIs in all of the layers of Kubernetes.
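To make "using the Kubernetes API" concrete before we move on, here is a minimal sketch of mine (not a listing from the report) that talks to the API Server with the official Go client, using the pre-context-aware client-go signatures that the rest of this report also uses; the kubeconfig path is an assumption:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build the client configuration from a kubeconfig file,
    // the same way kubectl does (the path is an assumption).
    config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // Every kubectl command boils down to calls like this one
    // against the API Server.
    nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, node := range nodes.Items {
        fmt.Println(node.Name)
    }
}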
Unified User Interface for Operations

I am a software engineer, but I now work in the cloud and automation area and have done so for a couple of years. Let me tell you why I love what I do. My goal is to make my colleagues happy and comfortable when interacting with their environments—building tools that make delivery, life cycle management, and observability friendly.

That's why I am always against random Bash scripts in some repository. I like to provide a good feeling to my teammates, which I call UX for Operations. In practice, this means using a unified CLI or web UI across the board to interact with the huge number of APIs provided by the third-party services that, these days, we all use in operation.

Kubernetes helps with that because, as a framework, it offers correlated features and tools that you can include for free with a wise integration of your flows and resources. Some of them are covered in this chapter; others are covered in later chapters. Here's a list of them:

• kubectl
• SDK
• Authentication

Work Queues

In the previous example, building a Shared Informer, we are patching the node, but we do not care about what else is happening within the Kubernetes cluster. It is possible to have other resources or routines in action on the same node. Perhaps it is terminated in the meantime, or it is temporarily not available to be patched. This is a problem not only in Kubernetes, but in every distributed system in which concurrency exists. To address this, the Kubernetes client-go employs a concept called workqueue. As the name itself suggests, it is a data structure that behaves as a queue. Your action won't execute when the Shared Informer runs your function; rather, it executes later from a queue. The best example currently available is inside k8s.io/client-go itself. Let's analyze the most important part of that code from here.

First, we need to create a queue, and we need to defer the ShutDown function:

// import "k8s.io/client-go/util/workqueue"
queue = workqueue.NewRateLimitingQueue(
    workqueue.DefaultControllerRateLimiter())
defer queue.ShutDown()
go processQueue(stopper)

There are different kinds of queues available from the client-go project, such as the rate-limiting queue, the delaying queue, or just a queue without any retry logic. I always use the RateLimitingQueue because it is the most complete one; you can requeue messages even with a delay.
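Because the rate-limiting queue also embeds the delaying interface, the delayed requeue I just mentioned is a one-liner. This fragment is my own illustration, assuming a key obtained from the queue as shown below:

// import "time"
// Hand the key out for processing again, but only
// after ten seconds have passed.
queue.AddAfter(key, 10*time.Second)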
The next step is to add to the queue all of the events I get from a Shared Informer, rather than executing the actual code right away. This is why I replaced the logic in onAdd, onUpdate, and onDelete for the pod Shared Informer:

informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: func(obj interface{}) {
        var key string
        var err error
        if key, err = cache.MetaNamespaceKeyFunc(obj); err != nil {
            runtime.HandleError(err)
            return
        }
        queue.Add(key)
    },
    UpdateFunc: func(oldObj, newObj interface{}) {
        var key string
        var err error
        if key, err = cache.MetaNamespaceKeyFunc(newObj); err != nil {
            runtime.HandleError(err)
            return
        }
        queue.Add(key)
    },
    DeleteFunc: func(obj interface{}) {
        var key string
        var err error
        if key, err = cache.MetaNamespaceKeyFunc(obj); err != nil {
            runtime.HandleError(err)
            return
        }
        queue.Add(key)
    },
})

Here, I am not doing anything more than adding a key to the queue with the function queue.Add. The key is generated using cache.MetaNamespaceKeyFunc, a utility function that serializes the object in a way that it can be deserialized later. From the serialized object, you will be able to get information such as namespace, resource name, how many retries for that message, and so on.

The function go processQueue(stopper) runs in the background and is the handler that processes the queue:

func processQueue(stopper chan struct{}) {
    for {
        // We use an anonymous function here to defer the queue
        // Done function. In this way we won't forget about the
        // Done function that needs to be always called; otherwise
        // we won't notify the queue that a message got processed.
        func() {
            var key string
            var ok bool
            obj, shutdown := queue.Get()
            if shutdown {
                return
            }
            defer queue.Done(obj)
            if key, ok = obj.(string); !ok {
                queue.Forget(obj)
                runtime.HandleError(fmt.Errorf("key is not a string %#v", obj))
                return
            }
            namespace, name, err := cache.SplitMetaNamespaceKey(key)
            if err != nil {
                queue.Forget(key)
                runtime.HandleError(fmt.Errorf("impossible to split key: %s", key))
                return
            }
            logger.With(zap.Int("queue_len", queue.Len())).
                With(zap.Int("num_retry", queue.NumRequeues(key))).
                Info(fmt.Sprintf("received key %s/%s", namespace, name))
            // You need to implement this function.
            syncHandler := func() error {
                return nil
            }
            if err := syncHandler(); err != nil {
                // If it is a temporary error, you can requeue the message.
                queue.AddRateLimited(key)
                return
            }
            // Everything is done right. We can purge the
            // message from the queue.
            queue.Forget(key)
        }()
    }
}

Every message runs within an anonymous function because it is crucial to run the function queue.Done(key) to notify that we processed a message. It needs to be called regardless of whether the message succeeded. To remove a message from the queue, we can use the function queue.Forget(key); if the message needs to be retried because of a temporary failure, call queue.AddRateLimited(key). To figure out whether a message is new or was requeued, use the function queue.NumRequeues(key), which returns the number of times the message was processed.

The scope of this section is to help you to understand how to manage a Work Queue and to picture the message life cycle. This is why I didn't implement the function syncHandler := func() error { return nil }. You need to replace it with your business logic. In my example, the message returns an error to notify the main function when the message needs to be retried.

The function namespace, name, err := cache.SplitMetaNamespaceKey(key) splits the message's key into its namespace and name. From this point, you can use the client-go, as explained in Chapter 2, to get the resource from Kubernetes. If it does not exist, it means that it was deleted; otherwise, you need to take the appropriate action. A complete example of a Shared Informer/Controller is available on GitHub.

Shared Informers and Work Queues are primitives provided by client-go and some other clients as well. The applications you write using these tools are usually called controllers.
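Before moving on to CRDs, here is a sketch of what a real syncHandler could look like for the pod example. This is my own illustration under the report's setup, not code from its repository; it assumes the clientset and the namespace/name split shown above, with the pre-context-aware Get signature:

// import (
//     apierrors "k8s.io/apimachinery/pkg/api/errors"
//     metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
// )
syncHandler := func() error {
    pod, err := clientset.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
    if apierrors.IsNotFound(err) {
        // The resource is gone, so the event was a deletion
        // and there is nothing left to reconcile.
        return nil
    }
    if err != nil {
        // A temporary API error: returning it triggers the
        // rate-limited requeue in processQueue.
        return err
    }
    // Your business logic goes here; for example, patching a label.
    fmt.Printf("reconciling pod %s/%s\n", pod.Namespace, pod.Name)
    return nil
}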
Custom Resource Definitions

In Kubernetes, a Custom Resource Definition (CRD) is a first-class citizen, just as pods, Services, Deployments, and Ingress are. It is so powerful because it helps you to bring to Kubernetes workflows that are not usual to the orchestrator. Here are just a few examples to show you how powerful and flexible CRDs are:

• You can create a CRD called jenkins-jobs that uses the Jenkins API to show you how a Jenkins build is going, directly from kubectl.
• You can write a CRD to manage a stateful workload and application that needs a particular workflow not supported by a rolling update or StatefulSet.
• You can create a new CRD to manage users directly from kubectl in your identity provider, such as Auth0 or Google Account.

If you are wondering why you should use them, I suggest that you refer to Chapter 1. I think there is a need for UX for Operations, and if as a company you are migrating to Kubernetes, you are teaching all of your engineers how to use kubectl. It is a good reason to use Kubernetes as a centralized place to look when you need to run operations.

A CRD is just a resource; it means that you can register a new CRD via a YAML specification:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: todos.todoexample.com
spec:
  group: todoexample.com
  version: v1
  names:
    kind: Todo
    plural: todos
  scope: Namespaced

With this example, I have created a new to-do list CRD. Every item within the list has two fields:

message
    Specifies what the to-do is about, for example "buy a new book."

when
    This is the deadline.

The idea is to be able to populate and manage this to-do list from kubectl itself. When you apply the previous specification to your cluster, you can run a new set of kubectl commands:

kubectl describe crd todos.todoexample.com
kubectl get todo

You can apply the following YAML specification to insert a new item in the to-do list:

apiVersion: todoexample.com/v1
kind: Todo
metadata:
  name: buy-book
spec:
  when: "2019-05-13T21:02:21Z"
  message: "Remember to buy a book about cloud on Amazon."

You can now retrieve the list of to-dos, and you will see the new buy-book item:

$ kubectl get todo

You can add more, delete, and edit them as you do with a native Kubernetes resource. This is cool! But I understand your concerns: the CRD is just stored on etcd; there is nothing really useful happening here.

From Kubernetes 1.14, there is a new feature in beta to manage validation that uses the OpenAPI v3 schema to specify validators for specified fields. We can update the CRD we just set, adding the validation definition, so that the new version will accept a to-do only when the message is at least five characters long:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: todos.todoexample.com
spec:
  group: todoexample.com
  version: v1
  names:
    kind: Todo
    plural: todos
  scope: Namespaced
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            message:
              type: string
              minLength: 5
The to-do CRD emits events like all of the other resources, meaning that we can generate a bunch of code and set up a Shared Informer to take action based on what happens to our CRD This use case is very simple, but I created a repository on GitHub with the generated code and a main function with the Shared Informer to animate the to-do CRD As you can see, there is a directory called artifacts that contains the two YAML specification files that we just applied to register our CRD and to create our first item There is a third one, artifacts/ app.yaml, to deploy a couple of resources: • A service account, cluster role binding, and a cluster role to authorize the application to watch, list, update the to-do items • A deployment with the actual applications that contain the Shared Informer If you use minikube, by default it starts with Role-Based Access Control (RBAC), so you need a service account, cluster role, and the binding If your cluster doesn’t have RBAC enabled, you just need the Deployment Custom Resource Definitions | 31 After the deploy, you will have a new pod within the default Kuber‐ netes namespace You should follow its logs just to understand what is going on If you create a new to-do item from the log, you should see a new log line with the key of the queue Now you have the code hooked to the to-do item, and you can implement your workflow This is a very simple example, just to give you a taste of how power‐ ful Kubernetes is when you begin combining CRDs, Shared Inform‐ ers, and Work Queues to build your applications The code is open source and available, but I would like to highlight some of the important parts in order to leave you free to extend or to write your own project First, I used kubernetes/code-generator to generate the new types, the client, and Shared Informer for the to-do item You need to install/clone this repository in your GOPATH I have also placed the directory of my project in my GOPATH ~/go/src/github.com/ gianarb/todo-crd Code-generator starts from a set of go files and it generates all the rest Here are the files and the layout of the directory that you need to have: ├── ├── ├── ├── go.mod go.sum LICENSE pkg ├── apis └── todoexample └── v1 ├── doc.go ├── register.go └── types.go The folder structure is important: todoexample is the name of the group of your CRD and v1 is the version, which we declared within the CRD YAML: apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: todos.todoexample.com spec: group: todoexample.com version: v1 There are three files to create: doc.go, register.go, types.go 32 | Chapter 3: Server-Side Extensions and Primitives // +k8s:deepcopy-gen=package // +k8s:defaulter-gen=TypeMeta // +groupName=todoexample.com package v1 Description: doc.go doc.go contains information such as groupName and other specifica‐ tions that will be read by the code-generator: package v1 import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) // +genclient // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object // Todo is a top-level type type Todo struct { metav1.TypeMeta `json:",inline"` // +optional metav1.ObjectMeta `json:"metadata,omitempty"` // This is where you can define // your own custom spec Spec TodoSpec `json:"spec,omitempty"` } // custom spec type TodoSpec struct { Message string `json:"message,omitempty"` When v1.Time `json:"when,omitempty"` } // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object // no client needed for list 
as it's been created in above type TodoList struct { metav1.TypeMeta `json:",inline"` // +optional metav1.ListMeta `son:"metadata,omitempty"` Items []Todo `json:"items"` } Description: types.go The file types.go contains the list of Go objects used by the CRD Recall that our to-do object supports two fields: message and when We also need to declare the list of items, not only the single one: package v1 import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/schema" ) Custom Resource Definitions | 33 // Define your schema name and the version var SchemeGroupVersion = schema.GroupVersion{ Group: "todoexample.com", Version: "v1", } var ( SchemeBuilder runtime.SchemeBuilder localSchemeBuilder = &SchemeBuilder AddToScheme = localSchemeBuilder.AddToScheme ) func init() { // We only register manually written functions here The // registration of the generated functions takes place in the // generated files The separation makes the code compile even // when the generated files are missing localSchemeBuilder.Register(addKnownTypes) } // Resource takes an unqualified resource and returns a Group // qualified GroupResource func Resource(resource string) schema.GroupResource { return SchemeGroupVersion.WithResource(resource).GroupResource() } // Adds the list of known types to the given scheme func addKnownTypes(scheme *runtime.Scheme) error { scheme.AddKnownTypes( SchemeGroupVersion, &Todo{}, &TodoList{}, ) scheme.AddKnownTypes( SchemeGroupVersion, &metav1.Status{}, ) metav1.AddToGroupVersion( scheme, SchemeGroupVersion, ) return nil } Description: register.go The last file is register.go It is very important because it acts as the glue between the types and the CRD itself If you set up everything correctly with this command, you should be able to generate the other files via the script generate-group.sh within the code-generator project: ~/go/src/k8s.io/code-generator/generate-groups.sh all \ github.com/gianarb/todo-crd/pkg/client \ github.com/gianarb/todo-crd/pkg/apis todoexample:v1 34 | Chapter 3: Server-Side Extensions and Primitives The command accepts four parameters: • What to generate (all) • Where to put the generated code (github.com/gianarb/todocrd/pkg/client) • Where the defined files are to generate the new code from (git hub.com/gianarb/todo-crd/pkg/apis) • The last parameter is what group and version you are currently generating, in our case, todoexample:v1 The command generates several files, which you can see in the / client directory in the repository From here, we can begin to create the main function In the repository, it is in cmd/main.go The unique important difference in the code compared with what I show you in the Shared Informer section is the Kubernetes client It comes from the client-go/kubernetes repository: // import "k8s.io/client-go/kubernetes" clientset, err := kubernetes.NewForConfig(config) factory := informers.NewSharedInformerFactory(clientset, 0) informer := factory.Core().V1().Nodes().Informer() In our case, we cannot use that one, because our todo type is not a native Kubernetes resource; our client was generated by the codegenerator and it is the Go package github.com/gianarb/todocrd/pkg/client/clientset/versioned: // import clientset "github.com/gianarb/todo-crd/pkg/client/ // clientset/versioned" // import todoInformers "github.com/gianarb/todo-crd/pkg/client/ // informers/externalversions" todoClient, err := clientset.NewForConfig(config) if err != nil { 
logger.With(zap.Error(err)).Fatal("Error building clientset") } factory := todoInformers.NewSharedInformerFactory(todoClient, time.Second*30) todoInformer := factory.Todoexample().V1().Todos().Informer() todoInformer.AddEventHandler( ) The remainder doesn’t really change that much compared with what we saw throughout this report Custom Resource Definitions | 35 Pain Point I need to alert you to some of the painful and unexpected situations that you will encounter when building code that interacts with Kubernetes, as I showed you in this report The Kubernetes ecosystem is huge and the number of repositories that you will need to import might scare you Since client-go v12.0.0, Kubernetes moves to go mod, which means that your appli‐ cation can stay out from your GOPATH But as we saw in the previ‐ ous section, the code-generator still relies on the GOPATH, and it doesn’t work very well outside In general, all of the repositories are moving to go mod, but there are so many of them that it is still a work in progress For the purposes of this report (but I the same when I develop), I am inside my GOPATH, but I use go mod to manage dependencies I always export GO111MODULE=on The best documentation that you will find is the code itself One of the reasons why I was excited to write this report is because it gave me the chance to share what I learned from months of struggling and deep dives in various open source repositories to develop my own integration You should the same, GitHub is full of reposito‐ ries with code, some of them are linked within this report, others are listed from here with a description to drive you on what they CoreOS created the method that we use today to call applications that use all of the primitives I shared with you in this report: CRD and Shared Informers to man‐ age the application life cycle in Kubernetes They are called operators The prometheus-operator developed by CoreOS and its community implements the suggested way to deploy and manage a Prometheus life cycle in a Kubernetes cluster Here are the main features pro‐ vided by the operator: • Prometheus and Alertmanager setup and configuration • It serves a set of CRD, such as: — Prometheus and Alertmanager are two CRDs You can spec‐ ify more than one of them, and they will run inside the same Kubernetes cluster in order to have the ability to partition them by namespaces or teams 36 | Chapter 3: Server-Side Extensions and Primitives — ServiceMonitor is a CRD designed to manage via kubectl Prometheus scrape configuration in order to specify what services need to be monitored by which Prometheus — PrometheusRule is a CRD to specify alerts or recording rules The jaeger-operator developed by the Jaeger community helps you to manage the life cycle of a Jaeger Tracing infrastructure You can deploy one or more of them specifying different deploy strategy: • AllInOne to deploy Jaeger as a single container It doesn’t scale very well, but it is the easier one to try • Production deploys Jaeger splitting collector, UI and query workloads to different containers In this way you can scale them independently • Streaming is another strategy that works in production but puts an additional layer between Jaeger and the backend (Cassandra, ElasticSearch) in order to decrease the pressure on the storage Other than the deploy strategy, Jaeger offers tools to select which end storage to use, to manage secrets, and for sampling ElasticSearch as well offers its own operator with a Custom Resource Definition that allows you to deploy 
multiple ElasticSearch clusters with advanced configuration such as tls encryption, zones, replicas and snapshots RedHat sponsors the OperatorHub, where you can find and down‐ load operators The way I try to figure out the solution to my problem is to think about which project in the Kubernetes ecosystem does something similar to what I would like to Then I clone it and start to dig into it The Kubernetes ecosystem of libraries and developers is so big that digging is the best way to learn new things Luckily for us, people share what they do, and we can all learn from each other’s pain Conclusion This is the end of this short report, but I hope it is just the beginning of your career writing code to bring your workflow within Conclusion | 37 Kubernetes You now have the basics and primitives that you need to build everything you need in a solid and Kubernetes-like way Having a simple user experience for your operations as part of Kubernetes sounds complex, but when you write your first CRD and Shared Informer, you will see how comfortable it is I hope this report will make these journeys even easier This approach has its side effects The obvious one is that your code will be tied to Kubernetes I not think this is a real concern When I build a REST JSON API, I learn how to structure the code with the appropriate abstraction in order to keep the business logic far from its JSON representation, because it is just one of the views I can provide You should follow the same best practices when you’re designing your integration with Kubernetes In particular, keep your logic far from the Kubernetes integration and far from the Shared Informer using separate modules, packages, or libraries This way, if you find yourself using solutions other than Kubernetes, you will still be able to use your business logic outside of Kubernetes itself 38 | Chapter 3: Server-Side Extensions and Primitives About the Author Gianluca Arbezzano is an SRE at InfluxData He is an Open Source contributor and maintainer for several projects including OpenTrac‐ ing, Docker, and InfluxDB He is also a Docker Captain and a CNCF Ambassador He is passionate about troubleshooting applications at scale, observability, and distributed systems He is familiar with sev‐ eral programming languages (such as JavaScript and Golang) and is an active speaker and writer, sharing his experiences and knowledge on projects that he is contributing to There’s much more where this came from Experience books, videos, live online training courses, and more from O’Reilly and our 200+ partners—all in one place ©2019 O’Reilly Media, Inc O’Reilly is a registered trademark of O’Reilly Media, Inc | 175 Learn more at oreilly.com/online-learning ... Preface CHAPTER Kubernetes Extensibility This first chapter highlights why I think extending Kubernetes via its API is the right way to utilize all of its capabilities It gives you an idea of how you... Services, and if it detects a par‐ ticular annotation, it creates a DNS record to one of the supported providers such as AWS Route53, CoreDNS, CloudFlare, DigitalO‐ cean It also manages its life cycle:... with Kubernetes from within your program In Chapter we use advanced primitives such as Shared Informer and Custom Resource Definition Based on the language and the cli‐ ent you use, some primitives
