A CDI Primer: Part 1

In the previous article, we briefly scratched the surface of the iceberg that is the CDI machine for making @Inject work, as a way of answering some basic questions about what, exactly, CDI does and why you might want to use it.

We also, much more importantly, looked into the dependency injection mindset.  We talked about abstract notions of producers and consumers and wiring them together.  We introduced the idea that if you look through the right lenses, fields, methods and constructors can all be producers housed in particular producer classes.  And finally we pointed out that producers can be consumers and vice versa.

Now let’s talk about lifecycles.

Object Lifecycles and “Producer Proxies”

When a producer of any kind “wants” to make or supply or acquire an object, two decisions have to be made:

  1. how to make or supply or acquire the object
  2. when to make or supply or acquire the object regardless of how that happens

Some producers, as we saw in the previous article, are what I'll call singleton suppliers, a breed often found in the trenches of enterprise Java development.  They acquire the (mostly!) One True Instance™ of the object in question somehow—sometimes creating it once if necessary, sometimes by interrogating some ancient snarling hairball of a legacy system—and from that time forward, for the whole life of the application, that's the object you get if you ask them to supply it to you.

Other producers, like constructors, create a new object each time one is to be produced because they can’t do anything else, or they’re kind of uninterested in the ways that they might be called, so they shrug and say, hey, if you want a new object, call me; otherwise, don’t; I don’t store or cache nothin’.

What’s interesting about producers of any kind, in isolation, without any other technologies such as CDI or its subsystems in play, is: producers in isolation combine how an object is produced with when it is produced.

For example, what I’ll call singleton suppliers—those all-too-common “enterprisey” methods that somehow acquire a Singleton From Elsewhere™ and then return it when asked—are combining the mechanics of how the singleton is produced (maybe it’s looked up from some other system) with when it is produced (maybe this system lookup only happens once and then the result is stored as a static singleton).

Or constructors: if you acquire an object from a constructor—a kind of producer, remember—then no matter what you do and no matter what it does you’ll get a new instance of the produced object each time because that’s what constructors do.

What would be nice is to let producers do what they do—acquire or make things when called for—and have there be some other subsystem that controls when a produced object is handed to a consumer.  Something that can go “back to the well”—the producer “well”—when needed, but not when not needed, and is in full control of when that happens, but does not itself know how the manufactured items it is storing or caching are made.  Something that is kind of like a proxy for producers: something that can stand in front of them and hand out their results when appropriate, regardless of how they were made or from what system they were acquired, according to its own notions of lifecycle.

[Diagram: ProducerConsumerUseCase]

In such a situation a producer can focus on how to acquire an object but not on how to cache it; a producer proxy can focus on when to cache an object, if at all, and when to clear the cache; a consumer, in grand dependency injection mindset fashion, can remain blissfully ignorant of both of these things.

Also in grand dependency injection mindset fashion, we want this producer proxy (my term, incidentally, not CDI’s) to not look up or acquire or otherwise scrounge around for its relevant producer(s).  We want it to simply declare somehow that it needs one, and then punt the problem of getting one to its caller (we’ll talk about who that might be in a moment).

CDI implements this producer proxy concept with a construct called a Context and it is the first CDI construct we’ll look at in depth.

Contexts…

The first thing to know about a Context is that as a CDI end user you’ll interact with it many, many times—and as a consumer you’ll never see it or know that’s what you’re doing.

A Context, in other words, is part of the internal CDI plumbing by which automatic wiring between producers and consumers is implemented.

A Context is the nexus where a consumer needing an object of a certain kind is wired in an abstract fashion to a producer that is capable of producing objects of that kind, and where the lifecycle of such a produced object is managed.

Inside the depths of CDI, when you mark a field or a parameter with @Inject, CDI asks a particular Context for the kind of object you want.  That Context, in turn, ends up asking a producer, when necessary, to make or acquire the kind of object that should go in your @Inject-annotated slot.

From the standpoint of a Context, a producer is represented by something kind of odd called a Contextual.  A Contextual is simply a producer that can also destroy the things it makes.  A Contextual can make any number of different things of a given type (so, for example, a Contextual whose type parameter is Object could make Gorp, Chocolate, PeanutButter or whatever).  Most of the time, though, a given Contextual makes one kind of thing.

Finally, a Contextual should not have, as its core concern, or any concern if at all possible, how long an object should live—it is a “pure” producer: it just makes ’em, ma’am, it doesn’t hang onto ’em.

Here is what the contents of the Contextual interface look like, in their entirety, and for now we can ignore the second of these two methods:

public interface Contextual<T> {

  T create(CreationalContext<T> creationalContext);

  void destroy(T instance, CreationalContext<T> creationalContext);

}

I hope you can see how simple that is.

Contextual is another one of those interfaces that is in the internals of CDI.  You rarely, if ever, implement it directly.  But you could.  And you do in some cases, usually indirectly, as we’ll see much later.

For example, ignoring CreationalContext—a subject for a later post—you could see that you might implement the create method in such a way that it wraps a constructor invocation (you’d be implementing a producer that is constructor-based).  Or you could see that you could implement the create method in such a way that it wraps a method invocation (you’d be implementing a producer that is method-based), though consuming other dependencies in this case might be a little trickier.  Or a field access.  And in many cases your destroy implementation might not have to do anything.
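For example, here is a minimal sketch (mine, not part of CDI) of a constructor-based Contextual; it assumes the produced type has a public zero-argument constructor:

import javax.enterprise.context.spi.Contextual;
import javax.enterprise.context.spi.CreationalContext;

// A "pure" producer: it wraps a constructor invocation and nothing else.
public class ConstructorContextual<T> implements Contextual<T> {

  private final Class<T> type;

  public ConstructorContextual(final Class<T> type) {
    this.type = type;
  }

  @Override
  public T create(final CreationalContext<T> creationalContext) {
    try {
      return this.type.getDeclaredConstructor().newInstance(); // "how", and only "how"
    } catch (final ReflectiveOperationException e) {
      throw new IllegalStateException(e);
    }
  }

  @Override
  public void destroy(final T instance, final CreationalContext<T> creationalContext) {
    // Nothing to do; a pure producer doesn't hang onto what it makes.
  }

}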

The takeaway here is that you can represent all producers as Contextuals of a particular kind.

Context, then, uses Contextuals to produce the objects it will then manage the lifecycle of.  Here are the (relevant at the moment) contents of this interface, and trust me when I tell you that for now we can ignore the second method:

public <T> T get(Contextual<T> contextual, CreationalContext<T> creationalContext);
public <T> T get(Contextual<T> contextual);


Again, very simple.

The first method is the real workhorse.  It takes in a Contextual and a CreationalContext, as you can see.  Once again, we’ll ignore the CreationalContext.  (The second method is for certain cases where CDI wants to ensure that no creation at all happens: it just wants the Context to supply a cached instance if one exists; it doesn’t want the Context to make a new one.  We’ll ignore it for the sake of mental clarity.)

The first method’s contract is to acquire and return a T.  You’ll note that Context itself is not a parameterized type (it doesn’t have any angle brackets or type parameters in its name).  So you can see here that a Context can get a whole variety of different kinds of objects.

You can also probably squint and see that if I tell you (as I have) that a Context‘s responsibility is to manage the lifecycle of objects, not actually make them, then when it decides that it needs a new one it can (and should) just call the create method on the Contextual that has just been handed to it.  The Context knows “when”; the Contextual knows “how”.

If it already has an existing instance and has determined that this incoming demand can be satisfied by the existing instance, well, then it can just hand it back.

That’s all very convenient.  So a Context as a consumer remains blissfully ignorant of what kind of Contextual (producer) has been handed to it, and can use this raw material if it wants in order to fulfill its contract of supplying T instances.  And the Contextual can remain blissfully ignorant of lifecycle and caching concerns and can just return Its Thing™, whatever that might be, when asked to make it.
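To make the division of labor concrete, here is a minimal sketch (mine, not from any CDI implementation) of a Context that caches one instance per Contextual, application-scoped-style; getScope() and isActive() are two further Context methods not shown above:

import java.lang.annotation.Annotation;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.spi.Context;
import javax.enterprise.context.spi.Contextual;
import javax.enterprise.context.spi.CreationalContext;

public class CachingContext implements Context {

  // One cached instance per producer; the Context decides "when".
  private final Map<Contextual<?>, Object> cache = new ConcurrentHashMap<>();

  @Override
  public Class<? extends Annotation> getScope() {
    return ApplicationScoped.class;
  }

  @Override
  @SuppressWarnings("unchecked")
  public <T> T get(final Contextual<T> contextual, final CreationalContext<T> creationalContext) {
    // Go back to the "well" only when nothing is cached; the Contextual
    // knows "how", so we just call its create method.
    return (T) this.cache.computeIfAbsent(contextual, c -> contextual.create(creationalContext));
  }

  @Override
  @SuppressWarnings("unchecked")
  public <T> T get(final Contextual<T> contextual) {
    // Supply a cached instance if one exists; never create.
    return (T) this.cache.get(contextual);
  }

  @Override
  public boolean isActive() {
    return true;
  }

}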

…and Dependency Injection

CDI stands for Contexts and Dependency Injection.  And now with the introduction of Contexts we have an incomplete and blurry view—but a view nonetheless—of how the major parts work together, and why “Contexts” is important enough to be listed in the title of the specification:

  • A consumer of some kind (your business-critical class) gets wired in some currently opaque-to-us way (involving @Inject) to a Context implementation that supplies it with the dependency it needs (via the Context#get(Contextual, CreationalContext) method).
  • The Context implementation gets wired in some opaque way to a Contextual implementation that knows how, but not necessarily when, to make those kinds of things.
  • A Contextual is a kind of producer, but nobody other than the Contextual implementation itself knows what kind.

The result is that CDI is now conceptually capable of injecting dependencies, taking into account desired lifecycles, and letting producers and consumers and producer proxies all focus on what they do best, and on nothing else.

Contextual Instances

I’ve spoken very loosely about producers making things of a particular kind.  And I’ve spoken equally loosely about consumers needing things of a particular kind.  And I’ve spoken just as loosely about the wiring process where a producer of a thing of a particular kind gets wired in some fashion to a consumer that needs those things, mediated by a producer proxy—a neutral term I invented to describe the lifecycle/lifespan-managing component that decouples when something is handed out from how it is made or acquired.

Now we’ve also learned that producers in CDI are represented by Contextuals and are effectively “fronted” by Contexts—our producer proxies—which are therefore the sources of the instances that Contextuals make.  Consequently, when a Context‘s get method hands you (indirectly) an object, that object is known throughout CDI’s literature as a contextual instance.  It is an instance of something that a given Context manages.

Still speaking loosely, when you ask for a Gorp to be supplied to you (injected) by CDI by using the @Inject annotation, you’re going to get a contextual instance of Gorp in the annotated “slot”.

Or, equivalently: if an object “comes out of” a Context, then it is a contextual instance, and its existence must be owed to the fact that CDI invoked get on a Context and the Context ultimately invoked the create method of a Contextual that produced it.  CDI “knows about” all contextual instances.  CDI does not know about (for the most part) objects that are not contextual instances.

In the next post, we’ll look at the mechanics of how CDI wires Contextual implementations that can make certain contextual instances of particular kinds to Context implementations that implement a lifecycle, and how CDI wires consumers of contextual instances to Context instances that can provide them.

A CDI Primer: Part 0

So many tutorials about CDI have been written I’m a little nervous putting down my own.  But here we are.

I believe that most of the CDI articles and posts I’ve read over my many, many years of programming don’t follow the right road for developers just starting out with CDI.  As a result, people think of CDI as magic.  Because its terminology is also unusual, misconceptions get built upon at the earliest stages of learning and you end up with developers sticking beans.xml files everywhere in a prayer to the gods to get their stuff to somehow work.

I also want to try to get the high level concepts in place first before diving deeper.  As a result, I’m going to speak a bit fast and loose here to start with.  My hope is that over time these foundations will demystify the great CDI machine.

Lastly, I’m primarily focusing on CDI alone, i.e. not on any aspect of its integrations with technologies like Java EE.

Let’s dive in.  We’ll start with a section that deals with CDI and @Inject, since that’s how most people first encounter CDI, but then we’ll move rapidly on to the dependency injection mindset, which is much more important.

CDI Makes @Inject Work

For anyone wondering “what is CDI? Why would I use it?” the quickest answer that makes immediate sense is:

CDI is the magic machine that lets you use @javax.inject.Inject.

OK, so why would I use @Inject?  What does it do for me?

Field Injection

Let’s say you have an instance field in your class, and you want it magically set to something, regardless of whether it’s public, protected, package-protected or private.

Do this:

@Inject // magic!
private Gorp myGorp;

CDI makes that work.

That is, some Gorp instance (we’ll talk about which one, and where it came from, in a bit) will get put there by CDI.  Cool!

Think a bit about this: you didn’t have to go hunting around for a Gorp instance yourself.  You didn’t have to look up what kind of Gorp to make from a properties file or JNDI or ServiceLocator or anything like that.  You asked for a Gorp, and a Gorp was delivered unto you from outer space.  Nice.  Simple.  Less of your code; therefore fewer bugs that will be assigned to you.  A smaller mental model: fewer second- or third-order concepts in your head means more clarity in programming the important stuff you need to do.

Injection is the overall term for this kind of magic: something comes along (CDI in this case) and injects a Gorp into, in this case, an instance field of your (business-critical, of course) class.  The thing that is being injected is a dependency: it’s something you need for some reason, so therefore whether you like it or not you depend on it.

So when you get a dependency injected into something, you have dependency injection.

We’ll call this particular case of injection field injection.

Where else can you use @Inject?

Method Parameter Injection

When CDI is in the world, you can also use @Inject on methods.  It’s the same kind of thing.  If you do this:

@Inject
private void consume(final Gorp gorp) {
  // nom nom
}

…the gorp “slot” will be filled with a Gorp instance by CDI (as before, we’ll talk about which one, where it came from, and when in a bit).  This is often referred to as setter injection, setter method injection, or parameter injection.  You might think your method therefore has to be named something like setGorp, but it can be named anything: the important part is that @Inject “goes on” the method.

Methods with @Inject “on” them can have as many parameters as you want.  CDI will try to “fill” all of them.  So if you do this:

@Inject
private void consume(final Gorp gorp, final Cheese cheese) {
  // nom nom nom nom
}

…CDI will “fill” the gorp “slot” with a Gorp instance, and the cheese “slot” with a Cheese instance.

Constructor Parameter Injection

CDI also makes @Inject on constructors work.  Constructors can be public, protected, package-private or private.  Just as with methods, @Inject-annotated constructors’ parameter “slots” will be “filled” by CDI.  So if you do this:

@Inject
private Chocolate(final PeanutButter peanutButter) {
  super();
}

…the peanutButter “slot” will be filled by CDI with a PeanutButter instance.

Making @Inject Work Is a Kind of Dependency Injection

This style of programming is lumped under the term dependency injection.  The idea is that you, the programmer, never look up or make or seek or acquire what you need (your dependencies).  Instead you “take in” what you need.

You force your caller to hand you your raw materials instead of haring off and scavenging them yourself.  Always be lazy!

So if you’re writing a Chocolate, and you need a PeanutButter, you do not do this:

// Here is an example of a class that is *not* using
// dependency injection.  It is brittle and hard to test.
public class Chocolate {

  private final PeanutButter peanutButter;

  // Note that the constructor doesn't "take in" anything.
  // Yet the peanutButter field still needs to be "filled".
  // I wonder how that will happen?
  public Chocolate() {
    super();
    // You've seen this kind of thing in your job before.
    // There's the magic "look up the thing" method.
    // My experience has been that LDAP is usually involved. 😃
    this.peanutButter = lookupPeanutButter();
  }

  // Here is the magic "look up the thing" method.
  private PeanutButter lookupPeanutButter() {
    // Almost always there's LDAP or a singleton being used. 😞
    final GroceryStore singleton = GroceryStore.instance();
    // When this lookup breaks, was it because the GroceryStore
    // couldn't be found? or the PeanutButter? You will see
    // anti-patterns like this throughout enterprise Java
    // programming.
    return (PeanutButter) singleton.get("Peanut Butter");
  }

}

In the class above, really what you need is a PeanutButter.  It doesn’t really matter where it came from, just that you “take it in”.  So whether or not you’re ever going to use CDI or Spring or anything else, please write your class like this instead:

public class Chocolate {

  private final PeanutButter peanutButter;

  public Chocolate(final PeanutButter peanutButter) {
    super();
    this.peanutButter = peanutButter;
  }

}

What’s nice is that your class has now punted the problem of how to acquire the PeanutButter to the caller.  Now the caller has to figure that out.  Your Chocolate class is much simpler and easier to deal with.  As someone experiencing the glories of your world-changing code for the first time, I can understand it better and marvel more at your brilliance.

If, on top of this well-designed class, you now put @Inject in the right place, then CDI’s magic can supply—inject—the PeanutButter.  That is, CDI becomes the caller:

public class Chocolate {

  private final PeanutButter peanutButter;

  // Let's use constructor parameter injection
  @Inject
  public Chocolate(final PeanutButter peanutButter) {
    super();
    this.peanutButter = peanutButter;
  }

}

The Dependency Injection Mindset

Now kindly forget CDI for a moment (and Spring if you’re coming from that background).

What is really important here is the mindset.

That mindset is: in any class you’re writing, take in only what you need to get your job done.  Pare it down more and more and more until you truly have the object you need.  Always be lazy; make your caller do the work of finding your dependencies!

Try if you can to pass them in in the constructor of your class and set them in private final instance fields.  Immutability is good!

The more you relentlessly and recursively pursue this mindset, the simpler your classes will be and the more clear the dependencies between them will be.

The fact that some of them might be “injected” by some kind of magic machine like CDI is utterly immaterial to your class design.  The dependency injection mindset is much more about the “dependency” part and much less about the “injection” part.

If there’s one thing you take away from this series of articles, let it be that programming using the dependency injection mindset—even if you never use a dependency injection framework like CDI (or Spring, or Jersey’s HK2)—is its own goodness.

Consumption

Within the dependency injection mindset, there are consumers and producers.

Consumers are like the Chocolate class above: they “take in” stuff, do their work, and are done.  A good consumer declares dependencies on exactly what it needs and doesn’t worry about how that stuff gets made or supplied or looked up or acquired.  In CDI, using @Inject at some location indicates that you’re doing consumer work there.

Consumption is the heart of the dependency injection mindset.

Production

Up to this point we’ve been focused on not worrying about where, for example, instances of PeanutButter come from or who makes them or how many little subassemblies go into ultimately manufacturing one.  All of these concerns are part of production.

For this article, a producer is something whose primary job is “handing out” an instance of something.  Sometimes “handing out” means “creating”, and sometimes it means acquiring or looking up.

Sometimes a producer “knows” it should hand out the same thing over and over again.  If you’ve programmed in Java for any length of time, you’ve run into someone somewhere who uses singletons and methods to hand them out.  These methods that return singletons are doing production work—they’re producers.

If you squint right, and look at things a certain way, another kind of producer that hands out the same thing over and over again is a field!  So looked at under the right lights, a field can be a producer.

Other times a producer “knows” it should hand out a new thing whenever called for.  Again, if you’ve been in the Java enterprise world for a while you’ve seen various Factory-suffixed classes.  Typically they have methods that start with create or make or get, and they create or make something (oftentimes a thing whose class is named the same as the factory class, minus the Factory suffix) and return it whenever they’re invoked.  These methods too are producers.

These kinds of method-based producers obviously live “inside” classes (they’re methods, after all).  The things they make may or may not be instances of the classes the producers live inside.  For example, a PeanutButterFactory class may contain an acquirePeanutButter method that returns a PeanutButter instance.  Note that the class that houses the PeanutButter producer—PeanutButterFactory—in this case is not the same as the class of the return type of the method—PeanutButter.

Just so we can talk about things later let’s call the “housing” of a producer (whether it’s a field or a method) a producer class.  As we’ve seen above, in general—but with one notable exception described immediately below—the class of the thing being made (the field’s type, or the method’s return type) need not be the same as the producer class housing the producer (the field or the method) that makes the thing.

Finally, there are producers all over the place in plain sight that you may not think of.  They create new things every time when called for.  They’re called constructors!  Constructors have an interesting property, which is that they must be housed in the class that is the class of the instances they make.  So, for example, a PeanutButter constructor makes PeanutButter instances and can’t make anything else.

Another way to put this is that in the case of constructors, the producer class’ producer (the constructor) makes instances of the producer class itself.

So fields, methods and constructors are all producers if you look at them as sources of objects that someone might need.

Production and Consumption

Oftentimes, a method or constructor that is making or getting things to hand to a caller will need other raw materials to get the job done.  Maybe, as in our examples earlier, a producer of PeanutButter instances needs to get them from a GroceryStore.

Following the dependency injection mindset, there’s nothing to prevent a producer from also being a consumer!  That is, a method that for various business-related reasons returns or creates a PeanutButter instance from a GroceryStore shouldn’t look up a GroceryStore instance, or set about some other means of creating one; it should simply declare that it needs one.

Here’s what a PeanutButter-returning producer (method) might look like when it’s designed from within the dependency injection mindset.  Note that there’s no GroceryStore acquisition going on (no singletons, no LDAP, no database servers, no configuration subsystem):

public PeanutButter acquirePeanutButter(final GroceryStore store) {
  return store.get("Peanut butter");
}

Note that this method is a producer when we’re looking at it as a source of PeanutButter instances, and a consumer when we’re looking at it as something that needs a GroceryStore to do its job.

Wiring It Up

Suppose now we’ve written several classes using our dependency injection mindset.

So in our left hand we have consumers that need things to do their job.  In our right hand we have producers that make things should they ever be needed by someone (and may also consume things as part of that production process).

We can tell just by surveying the landscape that that method over there that returns PeanutButter instances—a PeanutButter producer—”goes with” this consumer over here, Chocolate, whose constructor needs PeanutButter instances.

Again, just by looking at things, we can see that if we wanted to ever instantiate a Chocolate for any reason, we’re first going to need a PeanutButter.

To get a PeanutButter, we’re going to have to call that PeanutButter-returning method.

To call that PeanutButter-returning method, we’re going to have to create an instance of the class it “lives” in, then…oops, we’re going to need a GroceryStore because—remember? see the examples above—the PeanutButter-returning method needs a GroceryStore to do its job.  So we’ll need to recursively go through this effort with GroceryStore—maybe it in turn is produced by a producer, or maybe we can just call its constructor.

This process of figuring out these relationships and instantiating the right things in the right order in order to come up with other things is known generically and colloquially as wiring.  You can do it by hand.  It’s not magic.

Wiring By Hand

For example, at the initialization of our program somewhere we could do something like this (this example deliberately has a few problems):

final GroceryStore groceryStore = new GroceryStore();
groceryStore.put("Peanut butter", new AdamsChunky());
final PeanutButterFactory factory = new PeanutButterFactory();
final PeanutButter peanutButter = factory.acquirePeanutButter(groceryStore);
final Chocolate c = new Chocolate(peanutButter);

This is an example of (deliberately slightly bad, but not awful) wiring by hand, but it shows wiring nonetheless.

We’ve made many choices here.  Some are obvious; some, when generalized into your enterprise project of choice beyond this stupid example—swap in your favorite system you love to hate in place of GroceryStore, for example—are perhaps not so easy to see:

  • We’ve chosen the kind of GroceryStore.
  • We’ve chosen the kind of PeanutButter.
  • We’ve explicitly said that our PeanutButter instance, regardless of what choice we made a line above, is going to come out of a GroceryStore.
  • We’ve also said that PeanutButter instances can be acquired from a factory method.  (Hmm; can that method be subclassed?  Are there now two “sources of truth” or more for PeanutButter instances?)
  • We’ve implied that PeanutButter instances are (effectively) singletons.  Maybe we didn’t mean to do this.  Maybe this matters; maybe not.

So we’ve deferred certain choices—a Chocolate takes in a PeanutButter, but isn’t choosy about what kind, or where it came from; a GroceryStore presumably allows you to put any kind of PeanutButter you like; we’ve abstracted the production of PeanutButter behind a black-box method (acquirePeanutButter)—which is nice.  But it is important to note that we’ve wired in many other hardcoded choices above.

Many enterprise projects will have something like this, and a well-meaning developer will say, ah, hardcoding is bad; let’s allow someone to configure the kind of PeanutButter to use so it isn’t always AdamsChunky.  So they introduce a configuration file or mechanism that looks up the kind of PeanutButter to use—and now we’re out of the dependency injection mindset.  Oops!

That is: there will now be code in this initialization sequence that requires a certain configuration mechanism in order to look up the precise type of PeanutButter required.  Then someone will come along and try to abstract that configuration mechanism.  These are slightly buried service locator patterns, applied intentionally and unintentionally in initialization code—and then suddenly, for reasons no one is entirely sure of, it turns out you have to have an LDAP server or database running in order to test a Chocolate instance.  Ugh!  What happened to our “just declare the thing you need”?
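The drift tends to look something like this (a sketch; ConfigSystem and the property name are hypothetical):

// A hypothetical "configurable" wiring that has quietly become a service
// locator: Chocolate can no longer be tested without the config mechanism.
public static Chocolate wireUp() throws ReflectiveOperationException {
  // Hidden dependency #1: the configuration mechanism itself.
  final String kind = ConfigSystem.get("peanutButter.kind");
  // Hidden dependency #2: whatever that mechanism needs to be running.
  final PeanutButter peanutButter =
    (PeanutButter) Class.forName(kind).getDeclaredConstructor().newInstance();
  return new Chocolate(peanutButter);
}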

And where does CDI come in?

Automatic Wiring

CDI is that well-meaning developer and the author of the initialization code above, but way better: it does the wiring automatically and correctly, without interfering with the dependency injection mindset.  As you can probably see by now, that initialization code is in the business of matching producers with consumers, and that is exactly what (this area of) CDI does.  It has been doing this work correctly and well for many, many years.
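For a taste of what that looks like in CDI 2.0’s Java SE environment, here is a sketch using the standard SeContainerInitializer bootstrap, assuming the Chocolate and PeanutButter producers from the examples above are discoverable:

import javax.enterprise.inject.se.SeContainer;
import javax.enterprise.inject.se.SeContainerInitializer;

public class Main {

  public static void main(final String[] args) {
    // CDI discovers producers and consumers and wires them together; none
    // of the hand-wiring from the earlier example appears here.
    try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
      final Chocolate chocolate = container.select(Chocolate.class).get();
    }
  }

}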

Armed with this foundation (the dependency injection mindset, seeing the world in terms of producers and consumers and a middle-player that wires them together properly and efficiently), we can move on in the next post to how CDI does this.

Maven Specifications and Environments: Part 2

In the previous post, we looked at the concept of environments—implementations of specifications (a concept discussed two posts ago).

When you have a Maven project that you want to be runtime-independent, but coded to a particular JCP specification, things get a little hairy. Your code must compile against an API jar (or API jars, or, if you’ve read this series of posts, a specification), but you want it to run at test time against a particular environment, and you want that environment to dictate which API jars should be used.

This sounds like a contradiction until we realize that Maven respects the order in which <dependency> elements are listed.

The TL;DR here is: when you are putting together your <dependency> elements, list them in your pom.xml in what may feel like reverse order: list test-scoped dependencies first, then runtime-scoped dependencies, then provided-scoped dependencies and finally compile-scoped dependencies.

Why?

When you compile, none of the test-scoped dependencies will be “in scope”, so they will simply be ignored. Next on the classpath will be your provided-scoped dependencies followed by your compile-scoped dependencies. At compile time, none of this really matters: where your provided-scoped dependencies appear on your classpath is typically irrelevant.

But when you test, the order is suddenly quite important. If your test-scoped dependencies come first, then any transitive API jars that they pull in (for example) will be the ones that are actually used on your classpath—they’ll “come before” any provided-scoped API jars you have in order to support runtime environment-independent compilation. Those provided-scoped dependencies will still be on your classpath at test time, but they’ll never be referenced: the runtime environment you’re testing against should (must!) dictate its own API jars to be used, so those are the ones you’ll end up using.

To make this slightly more concrete, let’s say you have a CDI project and you want to be able to run it on either Weld or OpenWebBeans. Let’s say further that you’re going to run unit tests using Weld. Now recall that Weld uses the JBoss-authored version of javax.interceptor packages reified in the org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.2_spec:jar artifact, not the javax.interceptor:javax.interceptor-api:jar artifact. To preserve runtime environment independence, you want to compile (let’s say) against the javax.interceptor:javax.interceptor-api:jar artifact, but when you test with Weld you want Weld to use whatever API jars it wants, not the runtime-environment-independent ones you happen to have selected and compiled against.

To do this, you’ll add Weld first in test scope, then javax.interceptor:javax.interceptor-api:jar in provided scope. When you compile, you’ll use javax.interceptor:javax.interceptor-api:jar; when you test, you’ll end up transitively using Weld’s org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.2_spec:jar instead.
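Concretely, the relevant fragment of such a pom.xml might look like this (versions are illustrative):

<dependencies>
  <!-- Test-scoped dependencies first: Weld's transitive API jars "come
       before" anything below on the test classpath. -->
  <dependency>
    <groupId>org.jboss.weld.se</groupId>
    <artifactId>weld-se-core</artifactId>
    <version>3.0.5.Final</version>
    <scope>test</scope>
  </dependency>
  <!-- Provided-scoped API jar last: used at compile time; present but
       effectively unreferenced at test time. -->
  <dependency>
    <groupId>javax.interceptor</groupId>
    <artifactId>javax.interceptor-api</artifactId>
    <version>1.2</version>
    <scope>provided</scope>
  </dependency>
</dependencies>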

When you expand this strategy to include environments and specifications as defined earlier in this series of posts, component-oriented development becomes much simpler.

Maven Specifications and Environments: Part 1

In the previous post, we took a look at specifications: Maven pom.xml files that represent pom-typed artifacts that house dependencies in compile scope, but are themselves depended upon in provided scope.

Specifications give you the artifacts you need to compile your code. But (if built properly) they don’t come with implementations. This lets your code be coupled to, say, the CDI specification without being coupled to, say, Weld (instead of, say, OpenWebBeans).

I’ve settled on the term environment to describe another Maven pattern I use to back a particular specification with the jars needed to implement it. The challenge here is that a given environment is—like the specification it implements—comprised of many jars. To reduce boilerplate, we want to find a good way to declare our runtime dependence on various jars that implement the classes and interfaces defined by the API jars our code relies upon during compilation.

Another challenge is that, strictly speaking, an environment often correctly and legally brings in its own implementations of API jars. For example, Weld does not depend on javax.interceptor:javax.interceptor-api:jar (which houses the javax.interceptor packages that reify the interceptors specification), but on org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.2_spec:jar. This jar file also houses the javax.interceptor packages that reify the interceptors specification. This plurality of API jars reifying the same (English) specification is entirely legal, but you should have only one API jar at runtime, and the runtime (Weld, OpenWebBeans) should pick which one (in Weld’s case, org.jboss.spec.javax.interceptor:jboss-interceptors-api_1.2_spec:jar rather than javax.interceptor:javax.interceptor-api:jar). That is, at runtime you want the runtime environment to dictate which API jars are actually on the classpath, not your own Maven pom.xml file.

The idea behind an environment as represented by a pom.xml file is basically the same as that of a specification as represented by a pom.xml file. You define your pom.xml to have a packaging type of pom, and you list your <dependency> elements, but this time you put them in runtime scope. Then you depend on this new environment by depending on it in runtime scope. The net effect is that its runtime-scoped dependencies become your project’s transitive dependencies in runtime scope.

Note that an environment defined like this doesn’t (necessarily) depend on a specification as we defined it in the previous post. An environment always supplies what it needs, and code designed to run in any environment of a particular kind is compiled against a specification as defined in the prior post.

Environments can be composed, and can be abstract or concrete. Here’s an example of one of the environments I use in my microBean projects. It is abstract, in the sense that it sketches out a modular runtime but lacks a CDI implementation. It does, however, specify a javax.validation implementation (Hibernate) and a Java Expression Language implementation (Glassfish):

<groupId>org.microbean</groupId>
<artifactId>microbean-abstract-environment</artifactId>
<version>0.5.3-SNAPSHOT</version>
<packaging>pom</packaging>
<dependencyManagement>
  <dependencies>
    <!-- Normal dependencies. -->
    <dependency>
      <groupId>${project.groupId}</groupId>
      <artifactId>${project.artifactId}</artifactId>
      <version>${project.version}</version>
      <type>pom</type>
    </dependency>
    <dependency>
      <groupId>com.fasterxml</groupId>
      <artifactId>classmate</artifactId>
      <version>1.4.0</version>
    </dependency>
    <dependency>
      <groupId>org.glassfish</groupId>
      <artifactId>javax.el</artifactId>
      <type>jar</type>
      <version>3.0.1-b10</version>
    </dependency>
    <dependency>
      <groupId>org.hibernate.validator</groupId>
      <artifactId>hibernate-validator-cdi</artifactId>
      <version>6.0.13.Final</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>org.jboss.logging</groupId>
      <artifactId>jboss-logging</artifactId>
      <version>3.3.2.Final</version>
    </dependency>
    <dependency>
      <groupId>org.microbean</groupId>
      <artifactId>microbean-configuration</artifactId>
      <version>0.4.4</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>org.microbean</groupId>
      <artifactId>microbean-configuration-api</artifactId>
      <version>0.4.4</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>org.microbean</groupId>
      <artifactId>microbean-configuration-cdi</artifactId>
      <version>0.4.5</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>org.microbean</groupId>
      <artifactId>microbean-main</artifactId>
      <version>7</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.8.0-beta2</version>
      <type>jar</type>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <!-- Runtime-scoped dependencies. -->
  <dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>javax.el</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.hibernate.validator</groupId>
    <artifactId>hibernate-validator-cdi</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.microbean</groupId>
    <artifactId>microbean-configuration</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.microbean</groupId>
    <artifactId>microbean-configuration-api</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.microbean</groupId>
    <artifactId>microbean-configuration-cdi</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.microbean</groupId>
    <artifactId>microbean-main</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
  </dependency>
</dependencies>

Here is another example of an environment that I use in my microBean projects that uses the abstract environment above:

<groupId>org.microbean</groupId>
<artifactId>microbean-weld-se-environment</artifactId>
<version>0.5.4-SNAPSHOT</version>
<packaging>pom</packaging>
<dependencyManagement>
  <dependencies>
    <!-- Imports. -->
    <dependency>
      <groupId>org.microbean</groupId>
      <artifactId>microbean-abstract-environment</artifactId>
      <version>0.5.3-SNAPSHOT</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>org.jboss.weld</groupId>
      <artifactId>weld-core-bom</artifactId>
      <version>3.0.5.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <!-- Normal dependencies. -->
    <dependency>
      <groupId>${project.groupId}</groupId>
      <artifactId>${project.artifactId}</artifactId>
      <version>${project.version}</version>
      <type>pom</type>
    </dependency>
    <dependency>
      <groupId>org.jboss</groupId>
      <artifactId>jandex</artifactId>
      <version>2.0.5.Final</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-jdk14</artifactId>
      <version>1.8.0-beta2</version>
      <type>jar</type>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <!-- Runtime-scoped dependencies. -->
  <dependency>
    <groupId>org.jboss</groupId>
    <artifactId>jandex</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.jboss.weld.se</groupId>
    <artifactId>weld-se-core</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
    <exclusions>
      <exclusion>
        <groupId>org.jboss.spec.javax.el</groupId>
        <artifactId>jboss-el-api_3.0_spec</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>org.microbean</groupId>
    <artifactId>microbean-abstract-environment</artifactId>
    <type>pom</type>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <type>jar</type>
    <scope>runtime</scope>
  </dependency>
</dependencies>

The runtime-scoped dependency on microbean-abstract-environment is where we pull in the abstract environment. Because it is in runtime scope, we pull in everything that environment defines, as well as whatever else is listed here. The end result is the contents of that abstract environment, plus the Jandex runtime, an SLF4J logging binding and Weld itself. The exclusion on weld-se-core is there because the abstract environment has already pulled in a runtime implementation of the Java Expression Language, and so by definition has supplied its own EL API jar; to make sure that is the API jar in use for that specification, we exclude JBoss’ EL API jar here.

Once this environment has been defined, then we just have to use it. Let’s say we want to build a program that will run in this environment. In our project’s pom.xml, we would simply do this:

<dependency>
  <groupId>org.microbean</groupId>
  <artifactId>microbean-weld-se-environment</artifactId>
  <version>0.5.4-SNAPSHOT</version>
  <type>pom</type>
  <scope>runtime</scope> <!-- or test -->
</dependency>

The last remaining hurdle is testing. Even when compiling code against a specification, you often want to test it in one of possibly many environments. When you do this, you want to make sure you’re testing against the environment and its dependencies, and not the provided-scoped specification that you’re compiling against. (Often the differences are merely academic, but not always.) We’ll look at this in the next post.

Maven Specifications and Environments: Part 0

I’d like to blog about two concepts that I’ve finally figured out how to express in Maven.

The first is that of a specification.  By specification, I mean loosely a collection of versioned artifacts that you code to, but without relying on any particular underlying implementation of those versioned artifacts.

Consider the CDI specification—the online document.  It is expressed in code form in terms of Java packages like javax.enterprise.inject, javax.enterprise.event, and so on.  These packages, in turn, and their classes, are reified in so-called API jars such as javax.enterprise:cdi-api:2.0:jar.  While an API jar is not itself the specification (different vendors may supply their own definitionally functionally equivalent reifications of the specification document) nor an implementation of it, it is one of possibly several reifications of the APIs described by the specification, and so when we’re talking about specifications in terms of Java code, it’s useful to just treat the API jars (from any given vendor) as the specification itself.

Note as well that the CDI specification (specifically) is actually a directed acyclic graph of API jars.  For example, the javax.enterprise:cdi-api:2.0:jar artifact depends on javax.el:javax.el-api:3.0.0:jar, javax.inject:javax.inject:1:jar and javax.interceptor:javax.interceptor-api:1.2:jar.  (It also has to depend on a version of the javax.annotation:javax.annotation-api:jar artifact, but omits this requirement for some reason, as does the specification document.)

So broadly speaking these jars comprise one possible reification of the CDI specification.  (They are not its implementation.)

These jars are also, of course, on their own useful only to prevent compilation errors.  Without a backing implementation, such as Weld, they’re otherwise useless.

Now, if you are a Maven user and you depend on javax.enterprise:cdi-api:2.0:jar in compile scope, then you will also depend on its dependencies in compile scope.  That also means perhaps less obviously that you’ll drag these jars with you into any runtime environment you might find your code in, even if that runtime environment happens to already have those jars.  In fact, in many cases, these jars—or functionally definitionally equivalent ones supplied by another vendor—will already be present, because the runtime that implements CDI 2.0 will include them.  So (as you probably know) you want to depend on javax.enterprise:cdi-api:2.0:jar in provided scope.  This is a means of instructing Maven that javax.enterprise:cdi-api:2.0:jar is provided by some implementation of the specification it represents.

But a dependency in provided scope does not also drag in its transitive dependencies!  So if you do this, you’ll miss the rest of the jars that comprise the CDI specification.  So you’ll also want to add explicit dependencies on javax.annotation:javax.annotation-api:1.3:jar, javax.el:javax.el-api:3.0.1-b04:jar, javax.inject:javax.inject:1:jar and javax.interceptor:javax.interceptor-api:1.2:jar.

So this might look like this:

<dependencies>
  <dependency>
    <groupId>javax.annotation</groupId>
    <artifactId>javax.annotation-api</artifactId>
    <version>1.3</version>
    <type>jar</type>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>javax.el</groupId>
    <artifactId>javax.el-api</artifactId>
    <version>3.0.1-b04</version>
    <type>jar</type>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>javax.enterprise</groupId>
    <artifactId>cdi-api</artifactId>
    <version>2.0</version>
    <type>jar</type>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>javax.inject</groupId>
    <artifactId>javax.inject</artifactId>
    <version>1</version>
    <type>jar</type>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>javax.interceptor</groupId>
    <artifactId>javax.interceptor-api</artifactId>
    <version>1.2</version>
    <type>jar</type>
    <scope>provided</scope>
  </dependency>
</dependencies>


That is a lot of boilerplate to remember each time you want, in practical terms, to depend on the CDI specification without coupling yourself to, say, Weld (one of several possible CDI-compliant runtimes).  Can we reduce this boilerplate?

We can.  A neat little trick of Maven is that your pom.xml can depend on another pom.xml in whatever scope you like.  Let’s see how this helps us out.

First, though, a slight digression.  Many of you may be familiar with import scope.  That’s not what I’m talking about here.  Maven defines a weird scope called import that (a) isn’t really a scope, (b) can be used only from within a <dependencyManagement> stanza, and (c) is applicable only to artifacts of type pom.  Briefly, if you add a <dependency> element in your pom.xml‘s <dependencyManagement> stanza that references an artifact of type pom with a scope of import, then that pom.xml‘s <dependencyManagement> stanza’s contents are effectively copied by value into your pom.xml‘s <dependencyManagement> stanza in place of the import-scoped <dependency> itself.  So what you’re really “importing” is the <dependencyManagement> stanza of the target pom.xml and nothing else.  You can read more about import scope in the official Maven documentation.  But remember the shorthand takeaway: it’s basically a textual templating mechanism, not an actual scope.
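For illustration, here is what such an import looks like (using Weld’s weld-core-bom as the imported pom):

<dependencyManagement>
  <dependencies>
    <!-- The "import": weld-core-bom's dependencyManagement contents are
         copied, by value, in place of this element. -->
    <dependency>
      <groupId>org.jboss.weld</groupId>
      <artifactId>weld-core-bom</artifactId>
      <version>3.0.5.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>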

Whether or not you use import scope in your  stanza, you can depend on artifacts not just of type jar or war or zip and so on, but also on artifacts of type pom.  When you do this, the artifact of type pom in question becomes a direct dependency of your project, and any dependencies it declares become transitive dependencies of your project.  In terms of dependency graphs, this is no different from what happens when you depend on, say, JUnit (a jar artifact) and it drags in Hamcrest (a jar artifact): Hamcrest becomes a transitive dependency of your project.

This is a powerful tool for constructing specifications.  If we create a pom.xml whose <dependencies> stanza contains API jars and whose <packaging> element is of type pom, then if we depend on this newly created artifact of type pom, we’ll get all those API jars as transitive dependencies. Here’s an example, showing only the important bits of such a pom.xml:

<groupId>com.foobar</groupId>
<artifactId>cdi-specification-pom</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>pom</packaging>
<dependencies>
  <dependency>
    <groupId>javax.annotation</groupId>
    <artifactId>javax.annotation-api</artifactId>
    <version>1.3</version>
    <type>jar</type>
    <scope>compile</scope>
  </dependency>
  <dependency>
    <groupId>javax.el</groupId>
    <artifactId>javax.el-api</artifactId>
    <version>3.0.1-b04</version>
    <type>jar</type>
    <scope>compile</scope>
  </dependency>
  <dependency>
    <groupId>javax.enterprise</groupId>
    <artifactId>cdi-api</artifactId>
    <version>2.0</version>
    <type>jar</type>
    <scope>compile</scope>
  </dependency>
  <dependency>
    <groupId>javax.inject</groupId>
    <artifactId>javax.inject</artifactId>
    <version>1</version>
    <type>jar</type>
    <scope>compile</scope>
  </dependency>
  <dependency>
    <groupId>javax.interceptor</groupId>
    <artifactId>javax.interceptor-api</artifactId>
    <version>1.2</version>
    <type>jar</type>
    <scope>compile</scope>
  </dependency>
</dependencies>


One thing you’ll notice is that each <dependency> is declared to be in compile scope. I could have omitted this (compile is the default scope) but I wanted to make it explicit. If I had used provided scope, then if you depended on this pom-type artifact its dependencies—the very jar files you’re interested in—would not appear in your project, since provided-scoped dependencies are not transitive!

Instead, we declare them as compile-scoped dependencies, and then you can depend on this pom-type artifact in provided scope! The net result is that all of its dependencies—the jars you’re interested in—will show up in your project as transitive provided-scoped dependencies: exactly what we want! We replace twenty-odd lines of boilerplate with about four.

So let’s refine our definition of what a specification is (for the purposes of this discussion): it’s a pom.xml:

  • whose packaging type is pom
  • whose dependency elements are in compile scope
  • whose dependencies are “API jars” as sketchily defined above and their dependencies

Here’s a partial example showing what I mean. Suppose the com.foobar:cdi-specification-pom:pom artifact is essentially the pom.xml listed above. Then if you wanted to use it as a specification, you could simply do:

<dependencies>
  <dependency>
    <groupId>com.foobar</groupId>
    <artifactId>cdi-specification-pom</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <type>pom</type>
    <scope>provided</scope>
  </dependency>
</dependencies>


Now you’ll get javax.annotation:javax.annotation-api:1.3:jar, javax.el:javax.el-api:3.0.1-b04:jar, javax.inject:javax.inject:1:jar and javax.interceptor:javax.interceptor-api:1.2:jar as transitive provided-scoped dependencies.

In the next post, I’ll cover the idea of environments—runtime implementations of these specifications also expressed as pom.xml files.

Understanding Kubernetes’ tools/cache package: part 11—Towards a Kubernetes controller CDI 2.0 portable extension

In part 10, we laid out what we need to do to make a simple programming model available to an end user who wants to be notified of Kubernetes events in the same style as Go programmers using the tools/cache package.  This series starts with part 0.

Here we’ll look at implementing a CDI 2.0 portable extension that enables this lean programming model.

From part 10, we sketched out the idea of pairing an observer method with a producer method, using the same qualifiers on both the producer method’s return type and on the observer method’s event type.  Linked in this way, you now have the ability to build boilerplate machinery in between the Listable-and-VersionWatchable made by the producer method and the event observed by the observer method, and the end user doesn’t have to know about or invoke any of it: her events “just work” assuming this extension is present, opaquely, at runtime.

This post will make use of the microbean-kubernetes-controller framework, the Java Kubernetes controller framework built up over the course of this series (you can start at the beginning if you like).  This framework is CDI-independent.  We’ll adapt it to CDI 2.0 so end users don’t have to deal with any boilerplate.

As discussed, our portable extension will have to:

  • find beans that have Listable and VersionWatchable among their bean types
  • find observer methods observing Event objects that contain Kubernetes resources in them
  • ensure that for any given observer method whose event parameter is qualified with a set of qualifier annotations, there exists a producer method (or any other kind of CDI bean, actually) with the right kind of type that is also qualified by those same qualifier annotations (or no event delivery will take place)
  • have some way of recognizing that arbitrary qualifier annotations defined by the user are those that should be used to identify Kubernetes events—i.e. some way of meta-qualifying a qualifier annotation that appears on relevant producer methods and relevant event parameters

We can do all these things thanks to the well-defined lifecycle mandated by CDI.

First, we’ll sketch out a meta-qualifier annotation named KubernetesEventSelector.  The intent of this meta-annotation will be that if you put it on a qualifier annotation that you create, then your qualifier annotation will be treated as a qualifier that is trying to say something specifically about sets of Kubernetes events.

Here’s what that might look like:

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Documented
@Retention(value = RetentionPolicy.RUNTIME)
@Target({ ElementType.ANNOTATION_TYPE })
public @interface KubernetesEventSelector {
}


Now as an end user if I define my own qualifier annotation and annotate it with KubernetesEventSelector:

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.inject.Qualifier;

@Documented
@KubernetesEventSelector
@Qualifier
@Retention(value = RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD, ElementType.PARAMETER })
public @interface AllConfigMapEvents {
}


…I will have expressed my intent that when we “see” this annotation:

@AllConfigMapEvents

…we are talking about choosing Kubernetes events in some handwavy manner, and not something else.

So now our as-yet-to-be-written portable extension has at least the theoretical means to find event parameters and beans and see if they’re qualified with qualifiers that, in turn, are annotated with @KubernetesEventSelector.  Those will be the only things of interest to this extension.

Next, we’ll take advantage of the CDI lifecycle and the prescribed order of things and recognize that bean processing happens before observer method processing.  So let’s start with looking for those beans that have Listable and VersionWatchable among their bean types.

We’re going to run into a problem right away.  We would like to look for beans whose bean types are of type X where X is some type that extends Listable and VersionWatchable, but otherwise we don’t really care what X actually is.  But producer methods must return concrete types without type variables in them.

What we’ll do instead is look for the most general type in the fabric8 Kubernetes model hierarchy that implements or extends both Listable and VersionWatchable and deal with that.  That type happens to be Operation.  We’ll keep track during bean processing of all Beans that match these types.  That means it doesn’t matter if the bean in question is a producer method, a producer field, a managed bean, a synthetic bean or any of the other kinds of beans you can cause to come into existence in a CDI world: they’ll all be accounted for.

Now, obviously we’re not actually interested in all beans whose bean types include Operation.  We’re interested in those that have been explicitly “tagged” with the KubernetesEventSelector annotation.

It would be nice if there were a convenience method of sorts in CDI that would let you ask a bean for its qualifiers but would restrict the returned set to only those qualifiers that, in turn, are meta-qualified with something else (like KubernetesEventSelector).  Unfortunately, there is not, so we’ll have to do this ourselves.
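Here is roughly the helper we’ll need (a sketch; the method name is mine).  Bean#getQualifiers() hands us all of a bean’s qualifiers; we keep only those whose annotation types are themselves annotated with KubernetesEventSelector:

import java.lang.annotation.Annotation;
import java.util.HashSet;
import java.util.Set;
import javax.enterprise.inject.spi.Bean;

static Set<Annotation> kubernetesEventSelectors(final Bean<?> bean) {
  final Set<Annotation> returnValue = new HashSet<>();
  for (final Annotation qualifier : bean.getQualifiers()) {
    // Keep only qualifiers meta-qualified with @KubernetesEventSelector.
    if (qualifier.annotationType().isAnnotationPresent(KubernetesEventSelector.class)) {
      returnValue.add(qualifier);
    }
  }
  return returnValue;
}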

So, refined: we’ll keep track of Beans with Operation (and hence by definition Listable and VersionWatchable) among their bean types, and with KubernetesEventSelector-qualified qualifiers.  We’ll keep track of those annotations, too.

Once we do this, we’ll have some machinery that can now pick out sources for raw materials that we’ll need to feed to a Reflector.

But obviously we don’t really want to do any of this if there aren’t any observer methods around.  The ordering here is fortunate (by the time observer methods are processed, we already know about all the relevant beans) but also unfortunate (we have to do the bean-tracking work up front, and then discard it if no relevant observer methods turn up).

What’s a relevant observer method?  In this case it’s an observer method whose (a) AbstractEvent-typed event parameter is (b) qualified by one or more qualifier annotations that are themselves (c) meta-qualified by KubernetesEventSelector, and that (d) “matches” an equivalent set of qualifiers on an Operation-typed bean type.  This means that indirectly the observer method will “watch” for what the producer method is indirectly providing access to.

Here’s an example observer method:

import javax.enterprise.event.Observes;

import io.fabric8.kubernetes.api.model.HasMetadata;

import org.microbean.kubernetes.controller.Event;

private final void onConfigMapEvent(@Observes @AllConfigMapEvents final Event<? extends HasMetadata> event) {
  // React here to an addition, modification or deletion of a ConfigMap.
}

Note the usage of the @AllConfigMapEvents qualifier we defined earlier in this blog post, and note that the event parameter so qualified is an Event.  Let’s say we found a “matching” producer method:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

import io.fabric8.kubernetes.api.model.ConfigMap;
import io.fabric8.kubernetes.api.model.ConfigMapList;
import io.fabric8.kubernetes.api.model.DoneableConfigMap;

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.Operation;
import io.fabric8.kubernetes.client.dsl.Resource;

@Produces
@ApplicationScoped
@AllConfigMapEvents
private static final Operation<ConfigMap, ConfigMapList, DoneableConfigMap, Resource<ConfigMap, DoneableConfigMap>> selectAllConfigMaps(final KubernetesClient client) {
  return client.configMaps();
}

Note the Operation return type and the matching qualifier-meta-qualified-with-KubernetesEventSelector that decorates it (@AllConfigMapEvents).  Do you see how it “matches” the observer method’s event parameter qualifier (also @AllConfigMapEvents)?  This is the linkage that connects this producer method with the observer method that observes events that will result from the producer method’s existence.

So once we have a mapping from a Bean to a set of relevant qualifier annotations, then we can see if, during observer method processing, we’re talking about the first observer method that should cause our boilerplate machinery to get invoked.  If there are many observer methods that “match” a relevant producer method, that’s fine; that just indicates that the very same Kubernetes event might be processed by many observers.  The point is, before we jump through all the hoops to set up lists and watches and so on, we should make sure that someone is actually going to take advantage of our work.

So once we “get a match”, we can build the boilerplate for all matches of the same type, once.

For each distinct match, we’ll need a new Controller.  The controller will need an instance of whatever the producer method is making (some kind of Operation, which is by definition both a Listable and a VersionWatchable).

The Controller, in turn, will need some kind of Map representing the Kubernetes objects we “know about”.  That Map will start out empty, since we don’t know about any Kubernetes objects yet.  But if, down the road, a deletion gets skipped (maybe because a watch timed out or disconnected), we’ll need to fire a deletion event retroactively (say, when the Kubernetes list of relevant objects turns out to be “smaller than” our set of known objects), and we’ll need to know which object was actually deleted.

The Controller will also need some kind of Consumer (a siphon) that will slurp up EventQueues and dispatch their events in order without blocking threads.  That Consumer will also need to update that very same Map of “known objects” as it does so.

Finally, that Consumer of EventQueues will also need a mechanism to link it to the native CDI event broadcast mechanism which will take care of actually dispatching events to observer methods.

Whew!  Let’s do the easy boring stuff first.

The first lifecycle event that looks good is ProcessBean.  If we could process all beans whose types are Operation here, in one place, regardless of whether the bean in question is a producer method, managed bean, etc., that would be great.  As it turns out, for various obscure reasons, we can’t.  So we’ll implement a lifecycle event observer method for each type of bean, and route all their stuff to a common private method for processing:

private final <X extends Listable<? extends KubernetesResourceList> & VersionWatchable<? extends Closeable, Watcher<? extends HasMetadata>>> void processProducerMethod(@Observes final ProcessProducerMethod<X, ?> event, final BeanManager beanManager) {
  if (event != null) {
    // (We'll define this in a second.)
    this.processPotentialEventSelectorBean(event.getBean(), beanManager);
  }
}

private final <X extends Listable<? extends KubernetesResourceList> & VersionWatchable<? extends Closeable, Watcher<? extends HasMetadata>>> void processProducerField(@Observes final ProcessProducerField<X, ?> event, final BeanManager beanManager) {
  if (event != null) {
    this.processPotentialEventSelectorBean(event.getBean(), beanManager);
  }
}

private final <X extends Listable<? extends KubernetesResourceList> & VersionWatchable<? extends Closeable, Watcher<? extends HasMetadata>>> void processManagedBean(@Observes final ProcessManagedBean<X> event, final BeanManager beanManager) {
  if (event != null) {
    this.processPotentialEventSelectorBean(event.getBean(), beanManager);
  }
}

private final <X extends Listable<? extends KubernetesResourceList> & VersionWatchable<? extends Closeable, Watcher<? extends HasMetadata>>> void processSyntheticBean(@Observes final ProcessSyntheticBean<X> event, final BeanManager beanManager) {
  if (event != null) {
    this.processPotentialEventSelectorBean(event.getBean(), beanManager);
  }
}

And the processPotentialEventSelectorBean method that these all effectively delegate to might look like this:

private final void processPotentialEventSelectorBean(final Bean<?> bean, final BeanManager beanManager) {
  if (bean != null) {
    final Type operationType = getOperationType(bean);
    if (operationType != null) {
      final Set<Annotation> kubernetesEventSelectors = Annotations.retainAnnotationsQualifiedWith(bean.getQualifiers(), KubernetesEventSelector.class, beanManager);
      if (kubernetesEventSelectors != null && !kubernetesEventSelectors.isEmpty()) {
        this.eventSelectorBeans.put(kubernetesEventSelectors, bean);
      }
    }
  }
}

…where the Annotations utility method is the helper we sketched earlier in this post, and where the getOperationType() method might look like this:

private static final Type getOperationType(final Bean<?> bean) {
  final Type returnValue;
  if (bean == null) {
    returnValue = null;
  } else {
    final Set<Type> beanTypes = bean.getTypes();
    assert beanTypes != null;
    assert !beanTypes.isEmpty();
    Type candidate = null;
    for (final Type beanType : beanTypes) {
      if (beanType instanceof ParameterizedType) {
        final Type rawType = ((ParameterizedType)beanType).getRawType();
        if (rawType instanceof Class && Operation.class.equals((Class<?>)rawType)) {
          candidate = beanType;
          break;
        }
      }
    }
    returnValue = candidate;
  }
  return returnValue;
}

So at the end of all this we’ll have a Map, eventSelectorBeans, that has as its keys Sets of Annotation instances that have KubernetesEventSelector somewhere “on” them, and has as its values Beans that are qualified with those annotations.
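For orientation, the instance variables that these and the following snippets read and write might be declared in the extension class like this.  The names come straight from the snippets themselves; the concrete collection implementations are illustrative assumptions:

// Keys: sets of KubernetesEventSelector-meta-qualified qualifier annotations;
// values: the Beans qualified with exactly those annotations.
private final Map<Set<Annotation>, Bean<?>> eventSelectorBeans = new HashMap<>();

// Beans that turn out to have at least one interested observer method.
private final Set<Bean<?>> beans = new HashSet<>();

// Controllers we start, retained so we can close them at shutdown.
private final Collection<Controller<?>> controllers = new ArrayList<>();

// Whether any interested observer method is synchronous or asynchronous.
private boolean syncNeeded;

private boolean asyncNeeded;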

Now we’ll need to route all observer method processing to something similar (fortunately, this is a great deal simpler).  Recall that we are guaranteed by contract that observer method processing happens after discovering beans, so by this point we already have knowledge of any producer methods or managed beans “making” Operation instances of various kinds:

private final void processObserverMethod(@Observes final ProcessObserverMethod<? extends org.microbean.kubernetes.controller.Event<? extends HasMetadata>, ?> event, final BeanManager beanManager) {
  if (event != null) {
    this.processPotentialEventSelectorObserverMethod(event.getObserverMethod(), beanManager);
  }
}

private final void processSyntheticObserverMethod(@Observes final ProcessSyntheticObserverMethod<? extends org.microbean.kubernetes.controller.Event<? extends HasMetadata>, ?> event, final BeanManager beanManager) {
  if (event != null) {
    this.processPotentialEventSelectorObserverMethod(event.getObserverMethod(), beanManager);
  }
}

private final void processPotentialEventSelectorObserverMethod(final ObserverMethod<? extends org.microbean.kubernetes.controller.Event<? extends HasMetadata>> observerMethod, final BeanManager beanManager) {
  if (observerMethod != null) {
    final Set<Annotation> kubernetesEventSelectors = Annotations.retainAnnotationsQualifiedWith(observerMethod.getObservedQualifiers(), KubernetesEventSelector.class, beanManager);
    if (kubernetesEventSelectors != null && !kubernetesEventSelectors.isEmpty()) {
      if (observerMethod.isAsync()) {
        this.asyncNeeded = true;
      } else {
        this.syncNeeded = true;
      }
      // Guard against null: there may be no bean matching this observer
      // method's qualifiers.
      final Bean<?> bean = this.eventSelectorBeans.remove(kubernetesEventSelectors);
      if (bean != null) {
        this.beans.add(bean);
      }
    }
  }
}

(Above, we also keep track of whether the observer method in question is synchronous or asynchronous.)

Finally, we can clean up a little bit after bean discovery is all done:

private final void processAfterBeanDiscovery(@Observes final AfterBeanDiscovery event) {
  if (event != null) {
    this.eventSelectorBeans.clear();
  }
}

At the end of all of this bean discovery, we have a Set of Bean instances stored in a beans instance variable that (a) “make” Operation instances, (b) are appropriately meta-qualified to declare their interest in taking part in Kubernetes event production, and (c) have at least one observer method interested in their effects.

Let’s build some controllers and start them when the user’s program comes up!

We’ll begin by observing the initialization of the application scope, which coincides exactly with the start of a CDI application.

When this happens, our beans instance variable will contain some of the materials to build a Controller.  For each Bean in there, we’ll “instantiate” it (by way of a beanManager.getReference() call), create a new Controller, hook that Controller up to the right kind of EventDistributor, start the Controller, and keep track of it for later cleanup:

private final <T extends HasMetadata, X extends Listable<? extends KubernetesResourceList> & VersionWatchable<? extends Closeable, Watcher<T>>> void startControllers(@Observes @Initialized(ApplicationScoped.class) @Priority(LIBRARY_AFTER) final Object ignored, final BeanManager beanManager) {
  if (beanManager != null) {
    for (final Bean<?> bean : this.beans) {
      assert bean != null;
      final Set<Annotation> qualifiers = bean.getQualifiers();
      final Map<Object, T> knownObjects = new HashMap<>();
      final EventDistributor<T> eventDistributor = new EventDistributor<>(knownObjects);
      eventDistributor.addConsumer(new CDIEventDistributor<T>(qualifiers, null /* no NotificationOptions; TODO */, this.syncNeeded, this.asyncNeeded));
      @SuppressWarnings("unchecked") // we know an Operation is of type X
      final X contextualReference = (X)beanManager.getReference(bean, getOperationType(bean), beanManager.createCreationalContext(bean));
      final Controller<T> controller = new Controller<>(contextualReference, knownObjects, eventDistributor);
      controller.start();
      this.controllers.add(controller);
    }
  }
}

In the above code example, you’ll see a reference to a CDIEventDistributor.  That private class is a Consumer of events that simply fires them using the existing CDI broadcasting mechanism:

private static final class CDIEventDistributor<T extends HasMetadata> implements Consumer<AbstractEvent<? extends T>> {

  private static final Annotation[] EMPTY_ANNOTATION_ARRAY = new Annotation[0];

  private final Annotation[] qualifiers;
  private final NotificationOptions notificationOptions;
  private final boolean syncNeeded;
  private final boolean asyncNeeded;

  private CDIEventDistributor(final Set<Annotation> qualifiers, final NotificationOptions notificationOptions, final boolean syncNeeded, final boolean asyncNeeded) {
    super();
    if (qualifiers == null) {
      this.qualifiers = EMPTY_ANNOTATION_ARRAY;
    } else {
      this.qualifiers = qualifiers.toArray(new Annotation[qualifiers.size()]);
    }
    this.notificationOptions = notificationOptions;
    this.syncNeeded = syncNeeded;
    this.asyncNeeded = asyncNeeded;
  }

  @Override
  public final void accept(final AbstractEvent<? extends T> controllerEvent) {
    if (controllerEvent != null && (this.syncNeeded || this.asyncNeeded)) {
      final BeanManager beanManager = CDI.current().getBeanManager();
      assert beanManager != null;
      final javax.enterprise.event.Event<Object> cdiEventMachinery = beanManager.getEvent();
      assert cdiEventMachinery != null;
      final TypeLiteral<AbstractEvent<? extends T>> eventTypeLiteral = new TypeLiteral<AbstractEvent<? extends T>>() {
        private static final long serialVersionUID = 1L;
      };
      final javax.enterprise.event.Event<AbstractEvent<? extends T>> broadcaster = cdiEventMachinery.select(eventTypeLiteral, this.qualifiers);
      assert broadcaster != null;
      if (this.asyncNeeded) {
        if (this.notificationOptions == null) {
          broadcaster.fireAsync(controllerEvent);
        } else {
          broadcaster.fireAsync(controllerEvent, this.notificationOptions);
        }
      }
      if (this.syncNeeded) {
        broadcaster.fire(controllerEvent);
      }
    }
  }

}

Finally, when the user’s application stops, we should clean up after ourselves.  We observe the imminent destruction of the application scope and close everything we started up:

private final void stopControllers(@Observes @BeforeDestroyed(ApplicationScoped.class) @Priority(LIBRARY_BEFORE) final Object ignored) throws IOException {
  Exception exception = null;
  for (final Controller<?> controller : this.controllers) {
    assert controller != null;
    try {
      controller.close();
    } catch (final IOException | RuntimeException closeException) {
      if (exception == null) {
        exception = closeException;
      } else {
        exception.addSuppressed(closeException);
      }
    }
  }
  if (exception instanceof IOException) {
    throw (IOException)exception;
  } else if (exception instanceof RuntimeException) {
    throw (RuntimeException)exception;
  } else if (exception != null) {
    throw new IllegalStateException(exception.getMessage(), exception);
  }
}

The net effect is that if you put this portable extension and its dependencies on the classpath of your CDI 2.0 application and author a producer method and observer method pair (and nothing else), your CDI application will receive Kubernetes events and should behave semantically like any other kind of Kubernetes controller.

The fledgling project housing all this code is microbean-kubernetes-controller-cdi and is available on GitHub under the Apache 2.0 license.

Understanding Kubernetes’ tools/cache package: part 10—Designing Kubernetes controllers in CDI 2.0

In part 9 of this series (you may want to start at the beginning), we put together a sketch of what has become a Kubernetes controller framework in Java.  The framework represents most of the concerns of a subset of the Kubernetes tools/cache package, including the SharedIndexInformer, Reflector, Controller, Indexer, DeltaFIFO, Store and ListerWatcher concepts.

But if you want to write a controller, you don’t (I’m presuming) want to deal with any of that boilerplate.  You want to express that you:

  • are interested in receiving notifications of additions, modifications and deletions of certain Kubernetes resources
  • know what to do when you receive such notifications

The other stuff—all the reflectors and controllers and informers and caches and whatnot—is simply a means to an end, so we should attempt to make it disappear.

CDI 2.0 gives us the means to do this.  Let’s lay out what we’ll need to do.  We’ll work backwards.

First, we’ll make use of the standard observer method mechanism to know what to do when we receive event notifications.  We’ll sketch our as-of-now incomplete observer method like this:

private final void onConfigMapEvent(@Observes final SomeKindOfEventNotYetDefined configMapEvent) {
  // configMapEvent should tell us: was it an addition? a modification? a removal?
  // Which ConfigMap?  What did it look like before it was modified? etc.
}

Let’s pretend that something somewhere is firing these events.  Then we could receive one here, open it up, and see what happened.  Sounds good.

Missing from our sketch is some indication as to which ConfigMaps we’d like to see events for.  Maybe we want to see all of them.  Maybe we’re interested only in those meeting certain criteria.

Well, that’s what qualifiers are made for.  We’ll refine our hazy little sketch to look like this:

private final void onConfigMapEvent(@Observes @CertainConfigMaps final SomeKindOfEventNotYetDefined configMapEvent) {
  // configMapEvent should tell us: was it an addition? a modification? a removal?
  // Which ConfigMap?  What did it look like before it was modified? etc.
}

Note the addition of the @CertainConfigMaps qualifier annotation that I just made up.

OK, if we stare at that for a while as good CDI citizens, this looks nice and complete and idiomatic: this observer method looks for SomeKindOfEventNotYetDefined events, whatever they are, qualified with the @CertainConfigMaps qualifier and will be notified if ever they should be fired.  Presumably those events, whatever they are, will contain a ConfigMap or an old ConfigMap and a new one along with some indication of what happened to it or them.  Sounds good.  This is stuff the end user obviously has to write, and it’s stuff that she should write, and it’s probably about as concise as we’re going to get.  It’s nice and declarative and simple and straightforward.

So who’s going to fire them?  The person writing this method obviously shouldn’t have to care.

As we’ve seen in the previous parts of this series, if you look at things a certain way, Kubernetes itself is a distributed message bus.  So the short answer is: Kubernetes will fire them.

That’s glib, of course.  Kubernetes will fire them—and our Reflector will do what is necessary to reflect them speedily and robustly into an EventCache.  Then something else will have to harvest events from the cache.  That something had also better be able to distribute those events to interested listeners, like our observer method above.

Finally, all of this will have to have queues of various kinds in the mix to make sure that no given harvesting thread or distributing thread or caching thread is blocked while it goes about its business (or that if absolutely necessary it is blocked for the shortest possible time).  That, after all, is what the Kubernetes client-go tools/cache package is all about, so it’s what microbean-kubernetes-controller is all about.

OK, that’s all fine, but, if you notice, this subassembly—with Kubernetes and a Reflector at its “left” end and our observer method at its “right” end—starts with a specification of things to list and watch.  If we can provide just that part, thus defining the inputs for the “left” end, then all the other stuff in the middle is boilerplate, and all we have to do is attach our observer method to the “right” end with a little more boilerplate, and the end user might not ever even have to see the boilerplate!

So let’s look at the “left” end: we know we’re going to have to talk to Kubernetes using the fabric8 Kubernetes client, so that’s where we’ll start.

The good news about picking the fabric8 Kubernetes client is that it is built around a domain-specific language that makes expressing things relatively easy.  For example, you can tell a KubernetesClient to get you a Thing™ that can then get you a list of ConfigMap resources from Kubernetes like this:

client.configMaps();

That returns some sort of massively parameterized MixedOperation, but normally when you program against a model like this you worry very little about these intermediate types in the chain of builders and whatnot that result from this style of fluent programming.

However, in this case, do take note of the fact that a MixedOperation implements both Listable and VersionWatchable.  Note further that a MixedOperation is just a special kind of Operation.  Finally, recall that our Reflector needs some kind of thing that is both a Listable and a VersionWatchable to get started: in part 0, we learned about why this is.
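For instance, if you did capture the result in a local variable, the fully spelled-out type would look something like this (the type parameters match the producer method examples elsewhere on this page):

final MixedOperation<ConfigMap, ConfigMapList, DoneableConfigMap, Resource<ConfigMap, DoneableConfigMap>> configMapOperation = client.configMaps();
// MixedOperation is an Operation, and implements both Listable and
// VersionWatchable, which is exactly what a Reflector needs.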

So, in other words, you can create a new Reflector that will reflect Kubernetes events concerning ConfigMaps by supplying client.configMaps() as the first parameter of its constructor.

That’s our “left” end of this subassembly.  If you can get your hands on a Thing™ that is both a Listable and a VersionWatchable then you can create the whole rest of the subassembly and route reflected events to appropriate observer methods.

Now, in CDI, any time you need a Thing™, you don’t make it yourself.  You look around for it, or you declare that someone else needs to provide it.  We will treat, in other words, an Operation as a CDI bean.

An Operation implementation is extremely unlikely to be a managed bean—a plain-old-Java-object (POJO) just lying around somewhere with some bean-defining annotations on it.  Instead, as we’ve seen, you make an Operation by way of a method invocation on a KubernetesClient (like configMaps()).

CDI producer methods are one kind of CDI bean that fits this bill perfectly!  Suppose we have a producer method somewhere sketched out like this:

@Produces
@ApplicationScoped
@CertainConfigMaps
private static final Operation<ConfigMap, ConfigMapList, DoneableConfigMap, Resource<ConfigMap, DoneableConfigMap>> selectAllConfigMaps(final KubernetesClient client) {
  return client.configMaps();
}

Then so long as someone somewhere has made a KubernetesClient for us, it will be injected as this producer method’s sole parameter.  That means we can use it to return an Operation.  That means we have the “left” end of our subassembly.

(As it happens, there’s a handy little project that does expose a KubernetesClient as a CDI bean; it’s named microbean-kubernetes-client-cdi.  If that’s on your CDI 2.0 classpath, you can inject KubernetesClient wherever you want it.)

Note as well here that the @CertainConfigMaps annotation that was part of our (still unfinished) observer method sketch is qualifying the return type.

From here, you can see, I hope, that if we have a machine that can

  • gather up all the CDI beans that are qualified with a given qualifier (like @CertainConfigMaps) or set of qualifiers, regardless of how those beans are made or who makes them, and
  • can also gather up all the observer methods that observe events qualified with that same qualifier or those same qualifiers, then
  • we have the means to unite a producer of raw materials needed to pump Kubernetes events into the system, and a consumer of those events to get them out of the system!

We’ll build this machine in the next post.

 

Understanding Kubernetes’ tools/cache package: part 9—Kubernetes controllers in Java!

In part 8 of this series (you can start at the beginning if you want), we looked at some of the foundational principles we’re going to establish and the stakes we’re going to plant in the ground as part of our journey towards making an idiomatic Kubernetes controller framework in Java and CDI 2.0.

Let’s look at an idiomatic implementation of these concepts centered around the fabric8 Kubernetes client implementation of Kubernetes watch functionality, and (eventually) targeted at a CDI 2.0 environment.

A Reflector‘s main purpose is to reflect a certain portion of the state of the Kubernetes universe into a cache of some kind.  It needs to do this in a fault-tolerant and thread-safe way, taking care not to block the thread it’s using to talk to Kubernetes.  As we’ve seen, this involves starting by listing objects meeting certain criteria and then setting up a watch for those objects and reacting to the logical events that occur to them by offloading them quickly into some kind of cache and scampering back to Kubernetes for more.  Then periodically a synchronization of downstream consumers with known state is performed, making sure that downstream consumers never miss any events concerning Kubernetes resources they care about, even if the listing and watching machinery suffers a hiccup.

The Go code around all this is relatively complicated.  Fortunately, though, since we’re using the excellent fabric8 Kubernetes client, we can take advantage of the fact that it already has sophisticated watch functionality that, among other things, handles reconnections for us.  So we don’t have to write code around all that: we just need to set up the listing code and provide a Watcher to be notified of watch events.  That will take care of listing and watching behavior and reduce a lot of code we have to write.  Hooray for laziness.

We can also take advantage of Java’s generics to ensure that we strongly type what objects we support (Pods, Deployments, ConfigMaps, and so on).  We can ensure that type information makes it all the way into the internals of our implementation.  The Go code needs to check this at runtime since Go has no support for generics.  That should simplify our code a little bit.  Hooray for more laziness and type safety.

A Reflector, as we said, needs to update a cache in a very specific way.  The Go code models the cache as a store of objects.  If we dig in a little more, though, it turns out that even in the Go code the Reflector only needs the mutating operations to be exposed, not the getters and listers.  Specifically, a reflector will look to add, update and delete objects in the store, and will also look to resynchronize state and to do a full replace of objects in one atomic shot.  But that’s it.

We can reduce this surface area in our idiomatic Java Reflector implementation even more by recognizing that if we start out by modeling the cache as a cache of logical events rather than a cache of arbitrary objects we only need to support one addition method that takes an event as a payload.  Then we have to support a total of three operations:

  • // Make an event from the supplied raw materials; store it
    public void add(final Object source, final Event.Type eventType, final T resource)
  • // Do a full replace of our state
    public void replace(final Collection<? extends T> objects, final Object resourceVersion)
  • // When told to synchronize, look at our known objects and
    // fire synchronization events downstream for each of them
    public void synchronize()

So from the standpoint of what a Reflector needs to do its job, we know what an event cache must look like at a minimum.  In fact, it might look like this EventCache interface from my microbean-kubernetes-controller project on GitHub!😀
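A minimal sketch of such an interface, using nothing but the three operations listed above, might look like this (the generics here are assumptions; see the actual EventCache for the real details):

import java.util.Collection;

import io.fabric8.kubernetes.api.model.HasMetadata;

import org.microbean.kubernetes.controller.Event;

public interface EventCache<T extends HasMetadata> {

  // Make an event from the supplied raw materials; store it.
  public void add(final Object source, final Event.Type eventType, final T resource);

  // Do a full replace of our state.
  public void replace(final Collection<? extends T> objects, final Object resourceVersion);

  // When told to synchronize, look at our known objects and fire
  // synchronization events downstream for each of them.
  public void synchronize();

}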

Then our Reflector might look like my Reflector class in the same project.

We can see that it takes in something that is both Listable (and when it’s listed returns a KubernetesResourceList descendant) and VersionWatchable—which can be produced from a KubernetesClient.  Objects satisfying these requirements are roughly equivalent to the Go code’s ListerWatcher with, as we’ve noted, the added benefit that reconnections are automatically handled, and if you can express them properly, then filtering is already handled.  Hooray for even more laziness.

The Reflector constructor also takes in the writable “side” of a cache of Events.  This is where it will “mirror” the events it receives from watching the Kubernetes API server.  This corresponds roughly with the Go code’s notion of a Store, and more specifically with the notion of a DeltaFIFO, but exposes only the mutating methods actually needed by a writeable store of events, not any extras related to reading or listing.  Also, where a DeltaFIFO had to turn a generic “add an arbitrary object” function into an “add an event recording an addition” invocation, we start with the notion that everything being stored is an event, so it’s one fewer step we have to take and one fewer transformation we have to make.

So, implement an EventCache, call kubernetesClient.configMaps() or something equivalent, pass them in to a new Reflector, start it, and you can begin filling your cache with events!
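In code, that recipe might look something like this sketch, where MyEventCache is a hypothetical EventCache implementation and the two-argument Reflector constructor is an assumption based on the description above:

final EventCache<ConfigMap> eventCache = new MyEventCache<>();

// Supply the thing to list and watch, plus the writable side of the cache.
final Reflector<ConfigMap> reflector = new Reflector<>(client.configMaps(), eventCache);
reflector.start();

// Events concerning ConfigMaps now flow into eventCache.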

Of course, you’ll want to drain that cache, and also allow it to periodically inform its downstream customers of its state for synchronization purposes.  And a good implementation will sort incoming events into queues, where each queue is effectively the events that have occurred to a particular keyed Kubernetes resource over time.

Fortunately, there is a good implementation.  The stock implementation of this kind of EventCache is EventQueueCollection, which models all this and another part of the Go Store type.  EventQueueCollection lets you pass in a Consumer and start it siphoning off events in a careful, thread-safe manner, much like the Pop() function in the Go code.  Events are stored in queues per object, keyed by an overridable method that determines how keys for a given Kubernetes resource are composed.

The stock Consumer normally used here, an implementation of ResourceTrackingEventQueueConsumer, is a simple Consumer that takes a Map of objects indexed by keys at construction time.  These “known objects” are updated as events are siphoned off the upstream EventQueueCollection, and then those individual events are passed on to the abstract accept() method.  Once an event arrives here, it can be processed directly, or rebroadcast, or whatever; all housekeeping has been taken care of.  We’ll come back to this.
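A hedged sketch of what subclassing it might look like, assuming only the constructor argument and abstract accept() method described above (other details of the real class may differ):

final Map<Object, ConfigMap> knownObjects = new HashMap<>();

final ResourceTrackingEventQueueConsumer<ConfigMap> eventConsumer =
  new ResourceTrackingEventQueueConsumer<ConfigMap>(knownObjects) {
    @Override
    protected void accept(final AbstractEvent<? extends ConfigMap> event) {
      // knownObjects has already been updated; process, rebroadcast or
      // otherwise react to the event here.
    }
  };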

Back to those known objects.  EventQueueCollection takes in the same kind of Map, and this is no coincidence.  Your consumer of events and your EventQueueCollection should share this cache: when your EventQueueCollection gets instructed to synchronize(), it will send synchronization events to any downstream consumers…

…like the ResourceTrackingEventQueueConsumer we were just talking about, who will update that very same Map of known objects appropriately.

This is all tedious boilerplate to set up, so just like the Go code, we put all of this behind a Controller façade.  A Controller marries a Reflector and an EventQueueCollection together with a simple Consumer of events, and arranges for them all to dance together.  If you start the Controller, it will start the Consumer, then start the Reflector, and you will start seeing events flow through the pinball machine.  If you close the Controller, it will close the Reflector, then close the Consumer, and you will stop seeing events.  All overridable methods in Reflector and ResourceTrackingEventQueueConsumer are “forwarded” into the Controller class, so you can customize it in one place.

So: create a Controller, give it an expression of the “real” Kubernetes resources you want to watch and list, hand it a consumer of mirrored versions of those events, and a Map representing the known state of the world, and you can now process Kubernetes resources just like Kubernetes controllers written in Go.
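Concretely, and mirroring the constructor usage we saw in the CDI extension earlier on this page, that might look like this sketch (MyEventConsumer is a hypothetical Consumer of events):

final Map<Object, ConfigMap> knownObjects = new HashMap<>();

final Controller<ConfigMap> controller =
  new Controller<>(client.configMaps(), knownObjects, new MyEventConsumer<ConfigMap>());
controller.start();

// ...and when your application shuts down:
controller.close();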

This is, of course, still too much boilerplate, since in all of this we still haven’t talked about business logic.

Therefore, in a CDI world, this is all a candidate for parking behind a portable extension.  That will be the topic of the next post.  Till then, have fun looking at the microbean-kubernetes-controller project on GitHub as it develops further.