Let us grant first that any time you say something like “looked at one way” you’re about to emphasize one facet of a system and downplay or outright ignore others. Let us also grant that the point of such an exercise is to dig into the emphasized part through the lens of that emphasis. So then: onwards.
Looked at one way, Kubernetes is a distributed messaging system, whose message broker of sorts is the API server, and whose messages are specifications of desired state. When you use this overly simplistic lens,
kubectl and other clients can be seen as creators (broadcasters) of these specifications, and certain Kubernetes programs can be seen as listeners for and reactors to the creation, modification and deletion of these specifications, creating or starting or stopping or deleting other programs, and possibly creating, modifying or deleting other specification messages. One hopes that all these gyrations end up realizing the state demanded by the specifications.
(Pursuant to the first paragraph, there are also first-class events in Kubernetes. I’m not emphasizing those here.)
Suppose you wanted to write a program that could listen for these specification “messages” and do something as a reaction to them. CoreOS calls these kinds of programs “operators”; Kubernetes itself calls them “controllers”, or sometimes simply “control loops”. Conceptually, there’s nothing magic about them: listen for some conceptual events, react to them, possibly fire others.
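To make that “listen, react” shape concrete, here is a deliberately toy sketch in Java. Every name in it is made up for illustration; it has nothing to do with the real Kubernetes machinery beyond the shape: a controller takes specification “messages” off a queue and reconciles an in-memory notion of actual state toward them.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

// A toy "specification": the desired number of replicas for a named app.
record Spec(String name, int replicas) {}

// A toy controller: consumes Spec "messages" from a queue and drives an
// in-memory record of actual state toward the desired state.
class ToyController implements Runnable {
    private final BlockingQueue<Spec> queue;
    final Map<String, Integer> actualState = new ConcurrentHashMap<>();

    ToyController(BlockingQueue<Spec> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (true) {
                Spec desired = queue.take(); // listen for a specification "event"
                reconcile(desired);          // react: drive actual toward desired
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // shut down cleanly
        }
    }

    void reconcile(Spec desired) {
        // A real controller would create or delete Pods here; this toy one
        // just records the desired count as the new actual state.
        if (desired.replicas() <= 0) {
            actualState.remove(desired.name());
        } else {
            actualState.put(desired.name(), desired.replicas());
        }
    }
}
```

The loop itself is the whole trick: everything else in a real controller is plumbing to get trustworthy events into that queue.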
Nevertheless, writing controllers (I like the Kubernetes term, not the CoreOS term) has taken on a certain kind of mystique. This is due in part to the fact that Kubernetes is a language-independent platform. This means if you want to be notified of these “specification events”, you have to arrange to effectively write a distributed messaging client yourself that ultimately hooks up to the right HTTP endpoints of the API server. This involves doing caching right, accounting for failures in the pinball machine, being efficient in terms of memory usage, deciding whether your messaging client is going to multiplex events to other downstream messaging clients, and so on.
Because this is all complicated, and because programmers (yours truly included) are lazy, people tend to want to use an implementation of such a “messaging client” that has already been built. To date, only one such widely used implementation has been built: it is written in Go, it is part of the Kubernetes codebase itself (the tools/cache package), and it is in my opinion phenomenally complicated, or—not mutually exclusively—perhaps I’m just thick-headed. Thankfully, there are some excellent blog posts and videos to help with all this.
So you could, if you wanted, simply follow the recipes outlined in those blog posts and videos, and write a Go program that uses their constructs appropriately, and off you go.
But if you’re like me, you want to fully understand how the pseudo-messaging-client library underlying Kubernetes controllers actually works. Also, in my case, as an incorrigible Java programmer, I want to reverse engineer its abstractions so I can express them better in Java, and hopefully derive a Java implementation of that library.
Over the next few posts, we’ll head off on the journey of disentangling and hopefully demystifying this tools/cache package, and at the end of the road we’ll arrive at an idiomatic Java way of writing controllers. We’ll start with the next post, which covers some of the underpinnings.