Part 0 gave the general context for what we’re going to do in the next series of posts. Here we’ll look at the foundations underlying any program that wants to react to changes in a Kubernetes cluster (a Kubernetes controller).
To understand how to write a Kubernetes controller—in my case, in Java—we have to understand what the source of events and truth is. If (as noted in the first post in this series) Kubernetes can be seen as a distributed messaging system where the messages you react to are desired state specifications, then where do the “messages” come from?
Any time you kubectl create or kubectl apply or kubectl delete something, you are publishing a message. The message, the Kubernetes resource you’re creating, applying or deleting, has three high-level parts to it: the kind, which says what kind of thing the message is about; the specification, which describes what you want to be true of the thing you’re creating, applying or deleting; and the status, which describes how things really are. Typically, if you’re editing YAML, using kubectl, and are in other ways an ordinary end user, you are only concerned with the specification (since only Kubernetes itself can tell you how things really are).
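To make those three parts concrete, here is a minimal sketch of a resource as the Go client types model it (the field names are the real ones from k8s.io/api; the particular Pod and its values are placeholders I've made up):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        // The kind: what kind of thing this "message" is about.
        TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
        ObjectMeta: metav1.ObjectMeta{Name: "my-pod", Namespace: "default"},
        // The specification: what you want to be true.
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "main", Image: "nginx"}},
        },
        // The status (corev1.PodStatus) is deliberately left zero here: only
        // Kubernetes itself can tell you how things really are.
    }
    fmt.Println(pod.Kind, pod.Namespace, pod.Name)
}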
So to whom are you publishing the resource specification message? The short answer is: the API server.
The API server is named this way because, when you look at it through a certain lens, it is the place where Kubernetes concepts are exposed as HTTP endpoints, or APIs (in the modern web-centric sense of the term, not necessarily the programming-language sense). Hence it serves up APIs.
While this is strictly speaking true, it doesn’t necessarily add a whole lot of information to the conceptual landscape!
Let’s look at it a different way. At a slightly higher level, it is also a message broker. kubectl POSTs or PUTs or PATCHes or DELETEs a resource specification, and the API server makes that specification available at other endpoints it serves up, effectively “broadcasting” the “message” to any “listeners” who want to react to it. To “listen” for a “specification message”, you make use of Kubernetes watches.
(Somewhat surprisingly, the “watch” feature of the Kubernetes API is not documented on its own in the reference documentation. For example, you’ll find documentation about how to “do” a watch in, say, the CronJob reference documentation (which actually also incorrectly recommends a deprecated way of doing it!), but you won’t find a section in the reference documentation telling you that you can “do” watches on every (plural) resource in the Kubernetes ecosystem. Well, you can; you Just Have To Know™ that it’s documented correctly in the API conventions.)
When you do, you will get back a stream of WatchEvents describing events happening to resources of the appropriate type. Or, if you like, a message channel that filters messages by kind! Kubernetes watches are built on top of etcd watches and inherit their conceptual properties. (For more on this feature, you may wish to see my Kubernetes Events Can Be Complicated post.)
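Here's what "listening" looks like from a Go program, as a sketch: I'm assuming a client-go clientset has already been constructed, and the watchPods helper is a hypothetical name of mine, not part of any library. (Newer versions of client-go take a context in the Watch call; older ones don't.)

package example

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// watchPods "listens" for specification messages concerning Pods in the given
// namespace and prints each event as it arrives.
func watchPods(ctx context.Context, clientset kubernetes.Interface, namespace string) error {
    w, err := clientset.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    defer w.Stop()
    for event := range w.ResultChan() {
        // event.Type is ADDED, MODIFIED or DELETED (or ERROR/BOOKMARK);
        // event.Object is the resource the event concerns.
        fmt.Printf("%s %T\n", event.Type, event.Object)
    }
    return nil
}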
So to write a message listener that reacts to incoming specification messages concerning resources of a particular kind, all we have to do is implement an appropriate Kubernetes watch, right? Not quite. While watches are lighter than they have been in the past, they aren’t exactly lightweight, and, more critically, if you’re truly going to react to all specification changes over time, you may be jumping into the middle of a logical event stream. So you’ll need to list what’s gone on first, then establish your watch from that point forward.
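The usual pattern looks roughly like this (again a sketch, with a preexisting clientset assumed and a hypothetical listThenWatchPods helper of my own): list first, remember the list's resourceVersion, then start the watch from exactly that version.

package example

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// listThenWatchPods lists the current Pods (the "what's gone on so far" part)
// and then starts a watch from the list's resourceVersion so that subsequent
// events pick up exactly where the list left off.
func listThenWatchPods(ctx context.Context, clientset kubernetes.Interface, namespace string) error {
    list, err := clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, pod := range list.Items {
        fmt.Println("existing:", pod.Namespace, pod.Name)
    }
    w, err := clientset.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{
        ResourceVersion: list.ResourceVersion, // start the watch "from now"
    })
    if err != nil {
        return err
    }
    defer w.Stop()
    for event := range w.ResultChan() {
        fmt.Println("event:", event.Type)
    }
    return nil
}

In real life the resourceVersion you hand to the watch can expire (the server answers with a "410 Gone"), at which point you have to re-list and start over; dealing with that kind of thing is exactly what the machinery we're about to dig into handles for you.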
That’s a good jumping-off point for (finally) digging into the tools/cache package, the Go-language-specific framework for writing Kubernetes controllers. Somewhere in this package something has to be listing all the resources of a given type—Pods, Deployments, whatever—and then setting up watches on them: the combination will give you a logical collection of objects and modifications to them.

Sure enough, if you dig around, you eventually end up looking at the listwatch.go file. In there, you’ll find the ListerWatcher type, declared like so:
// ListerWatcher is any object that knows how to perform an initial list and start a watch on a resource.
type ListerWatcher interface {
    // List should return a list type object; the Items field will be extracted, and the
    // ResourceVersion field will be used to start the watch in the right place.
    List(options metav1.ListOptions) (runtime.Object, error)
    // Watch should begin a watch at the specified version.
    Watch(options metav1.ListOptions) (watch.Interface, error)
}
We’d like to find an implementation of this type that is backed by a Kubernetes client or an HTTP client or something, so that the List() function is actually attached to the Kubernetes API server. Oh, look, there is one:
// NewListWatchFromClient creates a new ListWatch from the specified client, resource, namespace and field selector.
func NewListWatchFromClient(c Getter, resource string, namespace string, fieldSelector fields.Selector) *ListWatch {
    listFunc := func(options metav1.ListOptions) (runtime.Object, error) {
        options.FieldSelector = fieldSelector.String()
        return c.Get().
            Namespace(namespace).
            Resource(resource).
            VersionedParams(&options, metav1.ParameterCodec).
            Do().
            Get()
    }
    watchFunc := func(options metav1.ListOptions) (watch.Interface, error) {
        options.Watch = true
        options.FieldSelector = fieldSelector.String()
        return c.Get().
            Namespace(namespace).
            Resource(resource).
            VersionedParams(&options, metav1.ParameterCodec).
            Watch()
    }
    return &ListWatch{ListFunc: listFunc, WatchFunc: watchFunc}
}
(You can dig further in the source code to discover that references to c are basically references to a Kubernetes client.)
So we have a little component that can list and watch things of a particular type. Note as well the fieldSelector reference, which in messaging system terms allows us to filter the message stream by various criteria. Field selectors are quite primitive and mostly undocumented, unfortunately, but they’re better than nothing.
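Here's a sketch of putting NewListWatchFromClient to work against a clientset's REST client, filtering the "message stream" down to a single Pod by name (the clientset is assumed to already exist; the newPodListWatch function name is just mine):

package example

import (
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

// newPodListWatch builds a ListerWatcher implementation (a *cache.ListWatch)
// that lists and watches Pods in the given namespace, filtered by name via a
// field selector.
func newPodListWatch(clientset kubernetes.Interface, namespace, podName string) cache.ListerWatcher {
    return cache.NewListWatchFromClient(
        clientset.CoreV1().RESTClient(),                        // the Getter: a Kubernetes client
        "pods",                                                 // the (plural, lower-case) resource
        namespace,                                              // or metav1.NamespaceAll for all namespaces
        fields.OneTermEqualSelector("metadata.name", podName),  // the field selector
    )
}

For most resource types only a handful of fields (essentially metadata.name and metadata.namespace) are selectable, which is a big part of why I call field selectors primitive.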
Assuming we have a ListerWatcher backed by a Kubernetes client, we now have a logical collection of Kubernetes resources of a particular kind and a certain number of events concerning those resources. It would be handy to put these things somewhere. How about in a Store?
type Store interface {
    Add(obj interface{}) error
    Update(obj interface{}) error
    Delete(obj interface{}) error
    List() []interface{}
    ListKeys() []string
    Get(obj interface{}) (item interface{}, exists bool, err error)
    GetByKey(key string) (item interface{}, exists bool, err error)

    // Replace will delete the contents of the store, using instead the
    // given list. Store takes ownership of the list, you should not reference
    // it after calling this function.
    Replace([]interface{}, string) error
    Resync() error
}
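The tools/cache package ships a basic implementation that you can get from its NewStore function by supplying a function that computes a key for an object. A quick sketch (error handling elided; the storeExample function name is mine):

package example

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/cache"
)

func storeExample() {
    // MetaNamespaceKeyFunc keys objects by "namespace/name".
    store := cache.NewStore(cache.MetaNamespaceKeyFunc)

    pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "my-pod", Namespace: "default"}}
    _ = store.Add(pod)

    if item, exists, _ := store.GetByKey("default/my-pod"); exists {
        fmt.Println("cached:", item.(*corev1.Pod).Name)
    }
}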
Store seems nice and generic (no parameter or return types other than empty interfaces, strings, booleans and errors; nothing Kubernetes-specific). It would be nice to extend our ListerWatcher to take in a Store where it could dump its stuff, or to otherwise wire the two together. For various reasons, the tools/cache package decided instead to create another concept called a Reflector, which is simply a ListerWatcher grouped together with a Store, plus the necessary plumbing to route the return value of the ListerWatcher’s List() function “into” the Store and to turn incoming WatchEvents into additions, updates and removals of items in the Store.
Rising up above the mud for a moment, think about why this concept is called a reflector. Looked at one way, it reflects the contents of a Kubernetes message channel into a cache. Then if you want to do something in your Kubernetes controller program with resource specifications and statuses of a particular kind, you can look to the cache rather than to the API server itself.
One important thing to notice is that this subsystem is standalone and self-contained: if you have a ListerWatcher backed by a Kubernetes client, a Store and a Reflector that bundles them together, you have a component that can be useful in a number of scenarios down the line. More importantly, conceptually we can, when necessary, clear some mental space by simply referring to the Reflector, understanding that effectively it is a cache of events we want to react to, and that the other machinery we’ve delved into above amounts to its implementation details.
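Putting the pieces together looks roughly like this. It's a sketch assuming a preexisting clientset, and the runPodReflector function is a name I made up; the cache.NewReflector and Run signatures shown are those of relatively recent client-go and may differ slightly between versions.

package example

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

// runPodReflector bundles a ListerWatcher and a Store into a Reflector and
// runs it until the stop channel is closed; while it runs, the Store holds a
// cached copy of the Pods in the namespace.
func runPodReflector(clientset kubernetes.Interface, namespace string, stop <-chan struct{}) cache.Store {
    lw := cache.NewListWatchFromClient(
        clientset.CoreV1().RESTClient(), "pods", namespace, fields.Everything())
    store := cache.NewStore(cache.MetaNamespaceKeyFunc)

    // &corev1.Pod{} tells the Reflector what type to expect; 0 means no
    // periodic resync.
    reflector := cache.NewReflector(lw, &corev1.Pod{}, store, 0)
    go reflector.Run(stop)

    return store
}

Once the Reflector is running, calls like store.List() and store.GetByKey("some-namespace/some-pod") answer questions about Pods from the cache rather than requiring another round trip to the API server.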
If we take advantage of concepts from object-oriented programming and a little bit of Java (after all, this series is about deriving a Java implementation of a Kubernetes controller framework), we can dispense with the ListerWatcher concept and fold it into the Reflector concept as an implementation detail. If we do this, our visual model of all of this might start to look like this:
Here, a KubernetesResourceReflector is a (hypothetical) concrete implementation of Reflector, set up for resources of a particular kind as represented by the T parameter. Give it a KubernetesClient and a Store implementation at construction time and run() it, and it will set about causing the Store to contain a cached copy of Kubernetes resources of that kind. (You and I know that under the covers some sort of mechanism resembling a ListerWatcher will be used, but the mechanism by which the actual listing and watching occurs is not of interest to the end user; all the end user cares about is that the cache gets populated.) We’ll greatly refine and refactor this model over time.
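For the sake of having something concrete on the page, here is a purely hypothetical sketch of that folded-together shape, written in Go for continuity with the rest of this post. The KubernetesResourceReflector type, its fields and its Run method are inventions of mine, not anything in tools/cache; the Java version this series is driving toward would be analogous, with the resource type expressed as a T type parameter.

package example

import (
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
)

// KubernetesResourceReflector is hypothetical: a Reflector-like component set
// up for one kind of resource, constructed from a Kubernetes client and a
// Store. The ListerWatcher it needs is hidden inside as an implementation
// detail.
type KubernetesResourceReflector struct {
    client       rest.Interface // stands in for "a Kubernetes client"
    store        cache.Store
    resource     string         // e.g. "pods"; the Java model would use a T type parameter instead
    namespace    string
    expectedType runtime.Object // e.g. &corev1.Pod{}
}

// Run populates (and keeps populating) the Store with a cached copy of
// resources of the configured kind until stop is closed.
func (r *KubernetesResourceReflector) Run(stop <-chan struct{}) {
    lw := cache.NewListWatchFromClient(r.client, r.resource, r.namespace, fields.Everything())
    cache.NewReflector(lw, r.expectedType, r.store, 0).Run(stop)
}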
In the next post, we’ll look at some more concepts from the tools/cache package. This is just the tip of the iceberg!