CDI and Linking

So: by now you may have heard that the brave new world of JDK 9 includes a nascent module system.

A tutorial on the Java 9 module system is way beyond the scope of this blog, so see the Project Jigsaw page or the already quite outdated State of the Module System document, and try not to accidentally read any more out-of-date documentation than you have to (it’s harder than you think).

Core to the module system is the concept of declaring what modules you (if you’re a module) require, what packages you export, what services you use and what service implementations you provide.

It struck me that CDI already has something analogous to such a module system.  This is not revolutionary, but it helped me see a few things clearly.  As always, this blog is mainly a stream of consciousness, so buyer beware.

First, a CDI “module” is, if you squint hard enough, a bean archive. (A bean archive is, by default, a classpath location with a META-INF/beans.xml resource in it. There are other ways to define bean archives.)

Next, a bean archive does not require other archives, but does require bean implementations, in the sense that if an injection point “inside” it isn’t satisfied the container won’t come up.  So if my archive contains a bean that injects a Frobnicator, then hopefully some archive somewhere offers up, or produces, a Frobnicator implementation to satisfy that injection point.

A bean archive doesn’t explicitly export anything, but it does produce certain bean implementations.  This is a granular form of exporting.

(Finally, all of this looks and smells like service usage and service furnishing/provisioning, but of course takes context and object lifecycles and decoration and so forth into account so it’s a heck of a lot more powerful.)

OK so fine, there’s a loose analog between these two systems. Just for fun, let’s define some terms that will make hazy sense to CDI developers and Java 9 module system fanatics alike:

  • There’s requires, let’s say. That encompasses a bean using either @Inject or, say, an interceptor binding annotation or some other form of indicating a dependency on a bean or a bean-like object (like an interceptor or decorator).  So my bean archive can require something of your bean archive.
  • There’s produces, let’s say. That encompasses a bean archive offering up an implementation capable of satisfying an injection point. This could take the form of packaging up a managed bean, or producing something from a producer method, etc. Presumably bean archives that produce one thing satisfy bean archives that require that thing. So your bean archive produces the thing my bean archive requires.

If we stopped there, we have enough for basic usage.  Imagine a world where I require a Frobnicator, and you produce a Frobnicator implementation.  If I knew that, I would grab your project, put it on the classpath, and then when the container came up my Frobnicator injection point would be satisfied.  Things would be great.

Now expand that a bit. Let’s say that it’s not just you who offers up a Frobnicator implementation. Let’s say that woman over there also has a Frobnicator implementation. How did I find her stuff—and yours for that matter? How did I decide to make my CDI application (a pile of bean archives, exactly one of which creates an SeContainer) use her stuff instead of yours?

(If you’re like most Java developers, the answer is something like a noxious cocktail of StackOverflow and search.maven.org and stuff you read on Twitter or Reddit the other day. That’s crummy.)

Let’s expand this even more.  Suppose your archive contains a JAX-RS javax.ws.rs.core.Application bean.  There is some sense in which (using my previous terminology) you produce this bean, but there’s another sense in which it is not really usable properly unless the thing that uses it actually deploys it and serves it up in a specification-compliant manner.  That sense has to do with deployment, so let’s say that a bean archive can declare that it can deploy certain bean types, and another bean archive can say that it publishes certain bean types.  A published bean type, let’s say, is one that needs to be notionally deployed in an environment that is subordinate to and encapsulated from the overarching CDI container environment (like an HttpServer launched inside a CDI container; see a prior post of mine on this subject).

What I regard as a strength of CDI (and what the Java module system authors, I suppose, regard as a weakness; if so, I strongly disagree) is that CDI’s modular concepts are loosely coupled.  My bean archive can inject interfaces that a second, regular jar file provides, and your bean archive can implement or produce them, and I don’t need to know about your archive at all until it comes time to choose which implementations I want satisfying my injection points.

What if we could make this kind of discovery and linkage a little easier?

The Java module system has a tool called jlink.  It takes in a (locally present) Java 9 module graph and produces a hairball that in some deliberately unspecified way (they call it a “runtime image”) encapsulates all the modules found in a compact executable format.

What if there were a cdiLink tool that were kind of like that?

What if this cdiLink tool could somehow read some indexed data somewhere about your business logic pieces, and about various (say) Maven Central artifacts and identify all of these things as bean archives producing and requiring and deploying and publishing various bean types?

If you go back and read some of my previous posts on composition-based programming with CDI 2.0, you can see that, thanks to the way CDI discovers bean archives that are locally present, you don’t even need to write, say, your own main() method.  Someone else can do that and make a bean archive containing it available on Maven Central (say).

If you read some of my previous posts on politely blocking the CDI container, you can see that you don’t even have to write or deploy your own container-or-server-reliant application.  Someone else can do that and make it available on Maven Central (say).

So now imagine if a bean archive could declare, in some easily indexable manner, that it requires certain bean types, and produces other bean types.  Imagine further that some repository could be queried for such things by a cdiLink tool, and a user could then select between various producers interactively.

Then you could take your (extraordinarily minimal!) application (maybe just a JAX-RS endpoint and nothing else!), and cdiLink it as part of the development process.

cdiLink might go through the following, one or more times, using my terminology above:

  • I noticed you have a JAX-RS Application class and its attendant root resource classes.  I know [who knows how, haven’t gotten that far yet] that this means you are publishing the Application bean type.
  • That means I have to find someone in the universe who deploys beans of type Application.
  • Ah! I have found a Jersey-and-Grizzly-based bean archive Out There In The World™ that claims that it deploys Application instances.  I have also found a Netty-and-RestEasy-based bean archive that claims it does the same.  Which would you like to use?  The Jersey-based one?  Very well, I’ll use that one.
  • Next, I have found two basic boilerplate implementations of the standard CDI container startup pattern.  One is called Fred, and the other is called Joe.  Would you like to use Fred, or Joe?  If you don’t care, I’ll pick Fred.  OK.  Fred it is.
  • Finally, of course, which CDI implementation do you want to use? Weld or OpenWebBeans? Weld? OK.  (Obviously I’ll include any transitive dependencies it has!)
  • Please hold while I assemble your executable hairball [which may just be a classpath; recall that one of the bean archives has a main class in it].
  • OK, the hairball is built.  Here it is.  Just run it.
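A toy sketch, in plain Java, of the matching step in that dialogue might look like the following. Everything here is hypothetical (the index, the archive names, the resolution policy); it only illustrates how a cdiLink-like tool could map a required or published bean type to the archives that claim to produce or deploy it:

```java
import java.util.List;
import java.util.Map;

public class CdiLinkSketch {

  // A hypothetical index mapping a bean type to the archives that claim
  // to produce (or deploy) it.  All of these coordinates are made up.
  static final Map<String, List<String>> INDEX = Map.of(
      "javax.ws.rs.core.Application",
          List.of("jersey-grizzly-deployer", "resteasy-netty-deployer"),
      "main-class-boilerplate",
          List.of("fred", "joe"));

  // Resolve a required bean type to one concrete archive, honoring the
  // user's interactive choice when there is one, and otherwise picking
  // the first candidate found (as the dialogue above does).
  static String resolve(final String requiredType, final String userChoice) {
    final List<String> candidates = INDEX.getOrDefault(requiredType, List.of());
    if (userChoice != null && candidates.contains(userChoice)) {
      return userChoice;
    }
    return candidates.isEmpty() ? null : candidates.get(0);
  }

  public static void main(final String[] args) {
    System.out.println(resolve("javax.ws.rs.core.Application", null)); // jersey-grizzly-deployer
    System.out.println(resolve("main-class-boilerplate", "joe"));      // joe
  }
}
```

The real tool would of course query a remote index and assemble a classpath (or hairball) rather than return a name, but the requires-to-produces matching is the essence of it.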

I think this could be quite powerful, especially when combined with the easy ability to add other bean archives to the classpath (perhaps with interceptors, decorators and alternatives that are more suitable for the final runtime environment the resulting application might find itself in).

Furthermore, the whole tool itself might be implementable as a portable extension.

OK, that’s enough sketchy thoughts for one evening.  Thanks for reading.

Blocking the CDI container politely

If, like me, you’re looking for a way to run other containers (like web servers) inside a CDI container, you may have been frustrated by the annoying propensity of the Java Virtual Machine to exit helpfully when no non-daemon Threads remain alive.

(If you’re not yet like me, you may want to start with my post series CDI 2.0 and read up to here.)

Specifically, if you try to launch, say, an HttpServer, you will discover that the simplest way of doing so will not prevent the CDI container from simply exiting right out from under you: the HttpServer spawns daemon Threads that you can’t get a handle on, and daemon threads don’t prevent the Java Virtual Machine from exiting.  So the only non-daemon thread in existence will be the container thread.  It will run to completion, your HttpServer daemon threads will all be killed, the Java Virtual Machine will exit, and you will be out of luck.

You could just (as I did, misguidedly, in an earlier proof of concept) join() on the main container thread.  But you don’t want to do this.  You don’t really want to mindlessly block the CDI container thread, since you don’t really know where in its lifecycle you’re doing the blocking.  Also, there may be other CDI beans that wish to be notified of context initialization and may want to do things that have nothing to do with your container. Finally, Weld developers become very concerned if you even suggest doing such a thing. 😀

And you don’t want to get in the way of any shutdown hooks that might be installed by certain industry-leading CDI implementations at particular moments in the portable extension lifecycle.

Lastly, you want to make sure that CTRL-C still works on any program you execute that uses CDI 2.0, and that it doesn’t prevent the normal CDI container lifecycle cleanup from running normally.

Here’s how I did it.

First, you want the blocking behavior to be such that if you were to unblock it in some way, the regular container shutdown semantics would still happen.  That is, @BeforeDestroyed(ApplicationScoped.class) events and BeforeShutdown portable extension events would still be fired normally.

And, as we said, you want the blocking behavior not to actually prevent the main CDI thread from doing its business in its ordinary way.  (Part of its business might very well be to install shutdown hooks, which will be the only way to unblock things if you wish later on.)

So at some level really you don’t want to block the main container thread at all.  But at another hazy level, you do.  Paradox!

To resolve this contradiction, we’re going to have to block some other non-daemon thread so that the JVM won’t exit, but the main container loop can still do its normal business.

But merely starting a non-daemon Thread and then blocking it won’t work.  True, the Java Virtual Machine won’t exit, but the container thread—now not being blocked—will simply very quickly run to completion, and now we’ll have a bunch of HttpServer daemon threads out there that were started from within a CDI container but now don’t have a CDI container in play, and a blocked thread, and an unresponsive Java Virtual Machine, and probably sixteen other kinds of horrible disasters just waiting to happen. So we are not going to do this. 😀

Instead, it would be really nice if the container could somehow manage or otherwise be aware of this other non-daemon thread so that blocking it would somehow happen only during the “open for business” portion of the container lifecycle, and in a legal way.  Then we would achieve our seemingly paradoxical goals of both blocking the CDI container and not blocking the CDI container’s main thread.

Fortunately, there is a way to do this: use asynchronous events.

The general approach will be to define a portable extension that does the following things:

  1. Creates a CountDownLatch with an initial count of 1 and stores it as an instance variable.  This will be our blocking mechanism.
  2. Installs a shutdown hook that, should it ever be called, will simply call countDown() on the latch.
  3. Starts an HttpServer (or HttpServers) when the application scope is initialized.  This will spawn a daemon thread that we don’t have any control over.  Then we’ll store the HttpServer so we can refer to it later.
  4. Fires an asynchronous event that can be received only by the portable extension that indicates, basically, that we’re done starting servers.
  5. In the asynchronous observer, by definition on a different thread, call await() on the CountDownLatch, thus blocking the observer thread, which is managed by the container.
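The latch machinery in steps 1, 2 and 5 is plain JDK concurrency, so it can be sketched in isolation.  In this stand-alone sketch (no CDI anywhere), the extra thread stands in for the CTRL-C shutdown hook, and the blocked thread stands in for the container-managed asynchronous observer thread:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchBlockDemo {

  // Blocks until some other thread counts the latch down, or until the
  // timeout elapses; returns true if we were unblocked in time.
  static boolean blockUntilUnblocked(final long timeoutInSeconds) {
    final CountDownLatch latch = new CountDownLatch(1);
    // Stand-in for the shutdown hook: another thread unblocks us.
    new Thread(latch::countDown).start();
    try {
      // Stand-in for the asynchronous observer: block right here.
      return latch.await(timeoutInSeconds, TimeUnit.SECONDS);
    } catch (final InterruptedException interruptedException) {
      Thread.currentThread().interrupt();
      return false;
    }
  }

  public static void main(final String[] args) {
    System.out.println("unblocked=" + blockUntilUnblocked(5L));
  }
}
```

Note that in the real extension the await() call has no timeout: we want to block indefinitely until the shutdown hook fires.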

Most of the above is pretty straightforward, but the asynchronous event reception is worth a closer look.

The CDI specification tells us this about firing asynchronous events:

Event fired [sic] with the fireAsync() method is fired asynchronously. All the resolved asynchronous observers (as defined in Observer resolution) are called in one or more different threads.

The language here is not very specific.  If we have six asynchronous observers, will they all be called on one additional thread or six additional threads, or, say, three?  Luckily in our case this ambiguity doesn’t matter, as we will define the only possible asynchronous observer for the event we’re going to fire, and we’re guaranteed that the thread it runs on will be “different” from the main CDI container thread.

Furthermore, although the specification says nothing about this, we can surmise that the container will be biased in favor of letting an asynchronous observer thread run to completion, since obviously it has no idea what that thread is doing.  That means, in other words, that the container will almost certainly block while our asynchronous observer thread is still alive, waiting for it to finish.  The nice thing is that the container is in charge of deciding when to block, which is exactly what we want.

So back to the recipe.  Here’s our constructor, satisfying the first two steps in our recipe:


private final CountDownLatch latch;

private final Collection<HttpServer> startedHttpServers;

public HttpServerStartingExtension() {
  super();
  this.startedHttpServers = new LinkedList<>();
  this.latch = new CountDownLatch(1);
  Runtime.getRuntime().addShutdownHook(new Thread(() -> {
      latch.countDown();
  }));
}

Next, we’ll start our HttpServers when the application scope comes up:


private final void startHttpServers(@Observes
                                    @Initialized(ApplicationScoped.class)
                                    @Priority(LIBRARY_AFTER)
                                    final Object event,
                                    final BeanManager beanManager)
  throws IOException, InterruptedException {
  if (beanManager != null) {
    final Instance<Object> beans = beanManager.createInstance();
    assert beans != null;
    final Instance<HttpServer> httpServers = beans.select(HttpServer.class);
    assert httpServers != null;
    if (!httpServers.isUnsatisfied()) {
      synchronized (this.startedHttpServers) {
        for (final HttpServer httpServer : httpServers) {
          if (httpServer != null) {
            // This asynchronous method starts a daemon thread in
            // the background; see
            // https://github.com/GrizzlyNIO/grizzly-mirror/blob/2.3.x/modules/http-server/src/main/java/org/glassfish/grizzly/http/server/HttpServer.java#L816
            // and work backwards to the start() method. Among
            // other things, that won't prevent the JVM from exiting
            // normally. Think about that for a while. We'll need
            // another mechanism to block the CDI container from
            // simply shutting down.
            httpServer.start();
            // We store our own list of started HttpServers to use
            // in other methods in this class rather than relying on
            // Instance<HttpServer> because we don't know what scope
            // the HttpServer instances in question are. For
            // example, they might be in Dependent scope, which
            // would mean every Instance#get() invocation might
            // create a new one.
            this.startedHttpServers.add(httpServer);
          }
        }
        if (!this.startedHttpServers.isEmpty()) {
          // Here we fire an *asynchronous* event that will cause
          // the thread it is received on to block. This is key:
          // the JVM will be prevented from exiting, and the thread
          // that the event is received on is (a) a non-daemon
          // thread and (b) managed by the container. This is, in
          // other words, a clever way to take advantage of the CDI
          // container's mandated thread management behavior so
          // that the CDI container stays up for at least as long
          // as the HttpServer we spawned above.
          //
          // TODO: fire the event
          //
        }
      }
    }
  }
}

At the TODO comment above you can see that we need to fire an event signaling that we’re done starting servers.  We’re in a portable extension, so we’ll need to do this from the BeanManager: an extension’s container lifecycle observer method may have only two parameters, and the second parameter, if present, must be of type BeanManager.  We also want to make sure that our portable extension is the only possible source of this event and the only possible receiver.

So first we’ll define a simple event object as a private inner class:


private static final class BlockingEvent {

  private BlockingEvent() {
    super();
  }

}

And we’ll set up an asynchronous observer method:


private final void block(@ObservesAsync final BlockingEvent event) throws InterruptedException {
  this.latch.await();
}

…and now we’ll replace that TODO item above with the one-liner that will fire the asynchronous event:


beanManager.getEvent().select(BlockingEvent.class).fireAsync(new BlockingEvent());

Putting the starting method together, then, it looks like this:


private final void startHttpServers(@Observes
                                    @Initialized(ApplicationScoped.class)
                                    @Priority(LIBRARY_AFTER)
                                    final Object event,
                                    final BeanManager beanManager)
  throws IOException, InterruptedException {
  if (beanManager != null) {
    final Instance<Object> beans = beanManager.createInstance();
    assert beans != null;
    final Instance<HttpServer> httpServers = beans.select(HttpServer.class);
    assert httpServers != null;
    if (!httpServers.isUnsatisfied()) {
      synchronized (this.startedHttpServers) {
        for (final HttpServer httpServer : httpServers) {
          if (httpServer != null) {
            // This asynchronous method starts a daemon thread in
            // the background; see
            // https://github.com/GrizzlyNIO/grizzly-mirror/blob/2.3.x/modules/http-server/src/main/java/org/glassfish/grizzly/http/server/HttpServer.java#L816
            // and work backwards to the start() method. Among
            // other things, that won't prevent the JVM from exiting
            // normally. Think about that for a while. We'll need
            // another mechanism to block the CDI container from
            // simply shutting down.
            httpServer.start();
            // We store our own list of started HttpServers to use
            // in other methods in this class rather than relying on
            // Instance<HttpServer> because we don't know what scope
            // the HttpServer instances in question are. For
            // example, they might be in Dependent scope, which
            // would mean every Instance#get() invocation might
            // create a new one.
            this.startedHttpServers.add(httpServer);
          }
        }
        if (!this.startedHttpServers.isEmpty()) {
          // Here we fire an *asynchronous* event that will cause
          // the thread it is received on to block. This is key:
          // the JVM will be prevented from exiting, and the thread
          // that the event is received on is (a) a non-daemon
          // thread and (b) managed by the container. This is, in
          // other words, a clever way to take advantage of the CDI
          // container's mandated thread management behavior so
          // that the CDI container stays up for at least as long
          // as the HttpServer we spawned above.
          beanManager.getEvent().select(BlockingEvent.class).fireAsync(new BlockingEvent());
        }
      }
    }
  }
}

Finally, more or less orthogonally to all this, let’s define what should happen when the CDI container shuts down, no matter how that should happen:


private final void stopHttpServers(@Observes final BeforeShutdown event) {
  synchronized (this.startedHttpServers) {
    for (final HttpServer httpServer : this.startedHttpServers) {
      if (httpServer != null && httpServer.isStarted()) {
        httpServer.shutdownNow();
      }
    }
  }
}

So we know that no matter how the container decides to shut down, this logic will ensure that we at least make an attempt to gracefully shut down any HttpServer we might have started.

If we put it all together:

  • If for any reason at any point CTRL-C is pressed, then we effectively disable all blocking behavior (or unblock any thread we’re currently blocking).  We do this by using CountDownLatch’s facilities.
  • We start HttpServers and stash them away for later shutdown.  Daemon threads out of our control are created and started by Grizzly.
  • We fire an asynchronous event that causes the CDI container to start or allocate a managed thread.  On that thread, we block, using the await() call of the CountDownLatch.  This prevents the CDI container from shutting down until CTRL-C is received.
  • Finally, we define what happens if the CDI container shuts down for any reason—namely we shut down any HttpServers we’ve started.

I hope you can see that this allows the running of a container within a CDI container while allowing the CDI container to function and shut down normally.

Dynamic CDI Producer Methods

I’m exploring some of my configuration ideas and ran into a pattern that I recognize from prior experience and wanted to document how to resolve it here.

I started off by implementing things without regard to CDI.  So just programmatic APIs, getters, setters, builders—that sort of thing.

In my case, I have a set of Converters.  A Converter is something that can take a String in and convert it to a different kind of Object.  The Converter also stores the Type that it serves as a converter for.

So I have indexed these Converters in a Map under their Type.  If you have a Type, you can get a Converter and use it to transform a raw configuration value.  Simple.
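Here’s a minimal, CDI-free sketch of that arrangement.  The Converter interface and the registry class below are stand-ins of my own devising, not the framework’s actual classes:

```java
import java.lang.reflect.Type;
import java.util.HashMap;
import java.util.Map;

public class ConverterRegistry {

  // A Converter takes a String in, converts it to a different kind of
  // Object, and knows the Type it serves as a converter for.
  public interface Converter<T> {
    Type getType();
    T convert(String rawValue);
  }

  // The Converters, indexed in a Map under their Type.
  private final Map<Type, Converter<?>> converters = new HashMap<>();

  public void register(final Converter<?> converter) {
    this.converters.put(converter.getType(), converter);
  }

  // If you have a Type, you can get a Converter and use it to transform
  // a raw configuration value.
  public Object convert(final Type type, final String rawValue) {
    final Converter<?> converter = this.converters.get(type);
    if (converter == null) {
      throw new IllegalArgumentException("No converter for " + type);
    }
    return converter.convert(rawValue);
  }

  public static void main(final String[] args) {
    final ConverterRegistry registry = new ConverterRegistry();
    registry.register(new Converter<Integer>() {
      public Type getType() { return Integer.class; }
      public Integer convert(final String rawValue) { return Integer.valueOf(rawValue); }
    });
    System.out.println(registry.convert(Integer.class, "42")); // prints 42
  }
}
```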

The Converters come from a META-INF/services/com.foo.bar.Converter entry, so they are effectively chosen by the user.

How could I integrate this subsystem into CDI? That is, instead of making the user get a Type and do a programmatic lookup (anytime you see the word “lookup” you should be suspicious), can I allow her to just ask CDI to inject a properly converted value?

Well, the first thought is: producer methods.  A Converter can produce some kind of object, and the kind of object it can produce is encoded in its Type, so the pseudocode looks like this:

@Produces
@ConfigurationValue // or whatever
public <T> T produceConvertedValue(final InjectionPoint injectionPoint) {
  final Type type = injectionPoint.getType();
  final Converter<T> converter = getTheRightConverter(type);
  return converter.convert(getTheRightStringValue(injectionPoint));
}

Ah, but producer methods must return a concrete type!  That is, the method above can’t return T: it has to return Integer, or Long, or Double, or whatever the actual type is.

But if our converters are stored as META-INF/services/com.foo.bar.Converter entries, we don’t know when we’re writing this method what the concrete types are!  How are we supposed to write a number of producer methods for a set of Types when we don’t know how many Types there are?

This is one of those patterns that should ring the portable extension bell.  Any time you have some “setup” to do based on user-supplied configuration or information that you don’t have available to you when you are writing code—any time this occurs, you should be thinking portable extension.  As I (previously) and Antoine Sabot-Durand have shown, they are not difficult or esoteric or something that only experts can write.  They are simply how you do things dynamically in CDI—where you do setup work before the container comes up and is locked down and cannot be tweaked further.  This is one of those times.

Let’s think about what we want to do.  We want to first make sure this fledgling configuration framework is available in CDI—i.e. that the thing that houses the Set of Types and their related Converters is available as a CDI bean.  As I said above, I designed things originally so that they have nothing to do with CDI, so no annotations, so that means we’ll have to programmatically add the main Converter housing object (called Configurations, as it happens) as a bean. That’s really easy.

Then we are going to want to get its set of Types somehow and do some work for each of them.  Specifically, for each of them we want to install the equivalent of a producer method that returns objects of that Type.  We also want these producer methods to be able to satisfy injection points like the following:

@Inject
@ConfigurationValue("frobnicationInterval")
private Integer frobnicationInterval;

If we do all this successfully, then the user can just ask for a configuration value of a particular type, and if the String that is the configuration value can be converted to that type, then it will be, and the user will simply get the converted value. If the user changes the META-INF/services/com.foo.bar.Converter entries in some sort of bad way, like by removing an entry for IntegerConverter, then the CDI container will detect this at container startup time. This is all good stuff and very CDI-ish.

So now we can survey our toolbox. We’re in portable extension land, so we should be looking at container lifecycle events and the BeanManager.

The object that houses the Converters is called Configurations in my fledgling framework, so we need to make it be a CDI bean in a particular scope. We can get it to be added to the container by programmatically sticking an ApplicationScoped annotation on it (since ApplicationScoped is a bean-defining annotation):

private final void addConfigurations(@Observes final BeforeBeanDiscovery event) {
  if (event != null) {
    event.addAnnotatedType(Configurations.class, "configurations")
      .add(ApplicationScoped.Literal.INSTANCE);
  }
}

What’s that "configurations" String doing there?  When you add an annotated type, you specify the Class you’re adding, but remember that this will result in a new AnnotatedType object.  You can have many of these per Class (that’s a little mind-bending, but remember you can add and remove annotations programmatically—that’s the reason), so you need to provide a way to talk about the type in particular that you’re adding here.  That takes the form of a String identifier, which here we’ve just made up—we just use "configurations".  You can read what little information there is on this model of things in the meager discussion around CDI-58.  At any rate, we’re just doing a very simple add of a single AnnotatedType here, and then we’re not actually going to use that identifier again, so in some sense it hardly matters.

So here we’ve “made it look like” Configurations was annotated all along with the ApplicationScoped annotation.  Since this is during the BeforeBeanDiscovery event, this means that now as the container starts its work of locating CDI beans, it will pick this one up.  We’ve accomplished our first goal.

The second goal is trickier.  First, what facilities do we have to work with producer methods?

Part of navigating the CDI API landscape is to think about these things:

  • Are you in portable extension land?
  • Are you reacting to some sort of lifecycle event?
  • Are you doing some sort of programmatic bean lookup?

We know we’re in portable extension land.  We also know that we’ll have to do a programmatic bean lookup, because we need to get the Configurations object (that houses our Converters and Types).  Finally, we know we’re going to need to add some producer methods in some way, so this is usually done in reaction to a lifecycle event (we’ll see which one shortly).

Recall that we’re trying to create producer methods dynamically based off a Set of Types obtainable from the Configurations object.  So first, let’s just get that Set of Types.

To do this, we’ll need a Configurations object.  It would be nice to simply ask for one to be injected, but we’re in portable extension land, and that means the container isn’t really up yet.  We’ve pointed it in the direction of the Configurations class (see the code above), so if it were up, we could inject a Configurations object, but, well, we’re out of luck here.  That means programmatic bean lookup, and so that means BeanManager operations.

The BeanManager is the handle you get to the CDI container itself, even while it’s coming up.  Any portable extension can observe a container lifecycle event and specify a BeanManager-typed second parameter in its observer method.  If it does this, then the BeanManager will be supplied.  Easy.  Armed with a BeanManager, we can call its createInstance() method, and now we have an Instance<Object>, which means that now we can use it to select(Configurations.class).  That will give us an Instance<Configurations>, and then we can call its get() method, and we’ll get a CDI-managed Configurations object.

OK, we’ll file that away: we know now how to look up a Configurations object so we can get its Set of Types.  Then, once we have that Set, we have to loop over it and programmatically add producer methods.

If we’re going to programmatically add beans (producer methods are beans), then there’s really only one container lifecycle event that supports that, and that’s AfterBeanDiscovery. This means the container has completed whatever automatic scanning it was set up to do (which may be none), and, in the absence of any portable extension doing anything, is about to start validating things so it can actually start.

In our case, we are going to do something: we’re going to add a bunch of beans!

Let’s start by making our observer method, and in it let’s get a Configurations object:


private final void installConfigurationValueProducerMethods(@Observes final AfterBeanDiscovery event,
                                                            final BeanManager beanManager) {
  if (event != null && beanManager != null) {
    final Instance<Object> i = beanManager.createInstance();
    assert i != null;
    final Instance<Configurations> configurationsInstance = i.select(Configurations.class);
    assert configurationsInstance != null;
    if (configurationsInstance.isResolvable()) {
      final Configurations configurations = configurationsInstance.get();
      assert configurations != null;
      // TODO: get Types, add producer methods for them, etc.!
    }
  }
}

Now the hard part.

So the AfterBeanDiscovery event would seem to be our savior.  It has the addBean() method, which returns a BeanConfigurator, which, among other things, has a produceWith(Function) method! And the first parameter to that function is an Instance<Object>, which could give us any parameters we might otherwise write in a “normal” producer method! So we could just supply an appropriate function, make a few select() calls, and boom, there’s our dynamic producer method!

Alas.

In our case, if we were writing a “normal” producer method, one of the parameters we would need that method to have is an InjectionPoint describing the site of injection for which the producer method will provide satisfaction.  In the function supplied to the produceWith(Function) method—the candidate dynamic producer method—if you try to do instance.select(InjectionPoint.class), you do not get an InjectionPoint that describes the place for which your dynamic producer function will provide values.  You get some weird InjectionPoint that describes something about the Instance object itself, which is of course completely unsuitable.  So produceWith(Function) is out.

Good grief; so now what?

Let’s write a “normal” producer method, just to make some headway, but we’ll just say it returns Object, and we won’t add the @Produces annotation.  We’ll call this our almost-producer method.  We’ll put this method in our extension class:


// Note: no @Produces!
@ConfigurationValue
@Dependent
private static final Object produceConfigurationValue(final InjectionPoint injectionPoint, final Configurations configurations) {
  Objects.requireNonNull(injectionPoint);
  Objects.requireNonNull(configurations);
  final String name = getConfigurationPropertyName(injectionPoint);
  assert name != null;
  return configurations.getValue(name, injectionPoint.getType()); // let's say this causes conversion
}

If we were to add a @Produces annotation here, the container would happily accept this.  The problem is it would only work for injection points where the type was (exactly) Object, which is not what we want:

@Inject
@ConfigurationValue("frobnicationInterval")
private Object frobnicationInterval; // this will work, but…

But in all other ways, this is a suitable producer method—and hence a suitable CDI bean.  To think about this a little differently, all we need to do is to create a CDI bean with a specific bean type that uses this method as its “producer body”.

Fortunately, the BeanManager has a few methods that will help us out greatly.

The first is the createBeanAttributes(AnnotatedMember) method. To understand this, let’s talk briefly about BeanAttributes.

A CDI bean is fundamentally two things:

  1. A piece of descriptive information that says what its types are, what annotations it has, and so on.  A BeanAttributes represents this part.
  2. Some means of producing its instances so that a CDI Context can manage those instances.  There are various ways of representing this part.

As we said above, for each Type available in our Configurations object, we’re trying to “create a CDI bean with a specific bean type”—namely, that Type—that uses a particular method we’ve already written (see above) as a means of creating its instances.

The createBeanAttributes(AnnotatedMember) method, then, basically introspects a producer method (or an “almost-producer” method, in our case), derives information about it and represents that information in a new BeanAttributes.  Sounds good!

Working backwards, we’ll need to get an AnnotatedMethod to represent that almost-producer method we wrote. Then we can call the createBeanAttributes(AnnotatedMember) method on it. The recipe looks like this:


// ConfigurationsExtension is our Extension class within which all this activity is taking place.
final AnnotatedType<ConfigurationsExtension> thisType = beanManager.createAnnotatedType(ConfigurationsExtension.class);
final AnnotatedMethod<? super ConfigurationsExtension> producerMethod = thisType.getMethods().stream()
    .filter(m -> m.getJavaMember().getName().equals("produceConfigurationValue"))
    .findFirst()
    .get();
final BeanAttributes<?> producerAttributes = beanManager.createBeanAttributes(producerMethod);

If we inspect that producerAttributes object, we will see that the return value of its getTypes() method contains only Object (reflecting the return type of our almost-producer method).  If we could somehow cause this value to be an arbitrary type of our choosing, then we would have all the raw materials we need.  Let’s write a very simple delegating BeanAttributes class:


public class DelegatingBeanAttributes<T> implements BeanAttributes<T> {

  private final BeanAttributes<?> delegate;

  public DelegatingBeanAttributes(final BeanAttributes<?> delegate) {
    super();
    Objects.requireNonNull(delegate);
    this.delegate = delegate;
  }

  @Override
  public String getName() {
    return this.delegate.getName();
  }

  @Override
  public Set<Annotation> getQualifiers() {
    return this.delegate.getQualifiers();
  }

  @Override
  public Class<? extends Annotation> getScope() {
    return this.delegate.getScope();
  }

  @Override
  public Set<Class<? extends Annotation>> getStereotypes() {
    return this.delegate.getStereotypes();
  }

  @Override
  public Set<Type> getTypes() {
    return this.delegate.getTypes();
  }

  @Override
  public boolean isAlternative() {
    return this.delegate.isAlternative();
  }

  @Override
  public String toString() {
    return this.delegate.toString();
  }
}

So now we have the means to create a BeanAttributes describing our new bean (the bean describing the kinds of things produced by our almost-producer method), and we have a method that can actually create things of that kind.

To link them together into a real CDI bean, we need to make use of the BeanManager#createBean(BeanAttributes, Class, ProducerFactory) method. We’ll supply this with a DelegatingBeanAttributes instance that supplies the right Type in the return value of its getTypes() method, our extension class, and…wait, what’s a ProducerFactory?

A ProducerFactory is (very briefly put) a programmatic representation of a producer.  Producers in general come in two flavors: producer fields and producer methods.  Recall that producers are beans: things with metadata (types, annotations, etc.) and means of production.  A producer field is a bean whose metadata comes from details about the field itself (its type, its annotations) and whose means of production is, quite simply, the field itself.  Similarly, a producer method is a bean whose metadata comes from details about the method itself and the return value of the method (its type, the method’s annotations) and whose means of production is an invocation of the method.

So what, do we have to write one ourselves?  No, not exactly.  Once again, we can ask the container for the ProducerFactory it would use behind the scenes if our almost-producer method were in fact a producer method.  You do this by calling the BeanManager#getProducerFactory(AnnotatedMethod, Bean) method.

The first parameter: hey, we know how to get that. We actually have it in our hands already, since we needed an AnnotatedMethod for the container to give us a BeanAttributes for it.

The second parameter: this is the bean housing the producer method you’re trying to get the ProducerFactory for. In our case, our almost-producer method is static, so there’s no instance of any object that has to be created so that this almost-producer method can be invoked on it. So we can pass null here.

Let’s pause in all of this and take stock.

  • We’re in a lifecycle method that lets us add things to the container.
  • We have a set of Types that we’ve dynamically discovered that we want to do things with.
  • We have the means of creating a BeanAttributes representing a producer method of an arbitrary type.
  • We have the means of marrying this BeanAttributes together with a “template method” to create a CDI bean of a particular type.

So let’s do it!


private final void installConfigurationValueProducerMethods(@Observes final AfterBeanDiscovery event, final BeanManager beanManager) {
  if (event != null && beanManager != null) {
    // Get a Configurations object.
    final Instance<Object> i = beanManager.createInstance();
    assert i != null;
    final Instance<Configurations> configurationsInstance = i.select(Configurations.class);
    assert configurationsInstance != null;
    if (configurationsInstance.isResolvable()) {
      final Configurations configurations = configurationsInstance.get();
      assert configurations != null;
      // Find out its set of Types.
      final Set<Type> types = configurations.getConversionTypes();
      if (types != null && !types.isEmpty()) {
        // Create a BeanAttributes representing our almost-producer method.
        final AnnotatedType<ConfigurationsExtension> thisType = beanManager.createAnnotatedType(ConfigurationsExtension.class);
        final AnnotatedMethod<? super ConfigurationsExtension> producerMethod = thisType.getMethods().stream()
            .filter(m -> m.getJavaMember().getName().equals("produceConfigurationValue"))
            .findFirst()
            .get();
        final BeanAttributes<?> producerAttributes = beanManager.createBeanAttributes(producerMethod);
        for (final Type type : types) {
          assert type != null;
          // For each type, create a Bean representing it.
          // The bean will be logically comprised of the BeanAttributes we made above, but
          // with its type set appropriately, and a ProducerFactory representing our
          // almost-producer method.  The combination will make this a "real" Bean,
          // thus turning our almost-producer method into a "real" producer method.
          final Bean<?> bean =
              beanManager.createBean(new DelegatingBeanAttributes<Object>(producerAttributes) {
                  @Override
                  public final Set<Type> getTypes() {
                    final Set<Type> types = new HashSet<>();
                    types.add(Object.class);
                    types.add(type);
                    return types;
                  }
                },
                ConfigurationsExtension.class,
                beanManager.getProducerFactory(producerMethod, null /* null OK; producer method is static */));
          // Add the Bean representing the almost-producer method to the container;
          // the almost-producer method will now handle injection points of the
          // current type!
          event.addBean(bean);
        }
      }
    }
  }
}

I cannot emphasize enough how powerful this is.

We’ve taken a dynamic set of user-supplied information, and set the container up so that it can ensure that typesafe resolution will apply even over this set.

This general pattern I’m sure can be widely applied and I hope to have more to say on it in the future.  Thanks for reading!

More Thoughts on Configuration

This is a distillation of the thoughts I had in my previous piece on configuration.

The key insight for me was that application configuration takes place within a configuration value space with one or more dimensions or axes.

Next, these dimensions or axes are not hierarchical (after all, how could a dimension or axis be hierarchical?) and have nothing to do with the name of the configuration setting (other than that it can be considered to be one of the dimensions or axes).

Following on from all of that, an application asks for configuration values suitable for where it sits in multidimensional configuration space.  (I know this sounds like I’m five miles above the earth, but I really think this is important.)  I like to call this location its configuration coordinates.  There are always more coordinates than you think (something like locale often slips through the cracks!), but the total number is still probably under 10 or 20 for almost any application.  Many simple applications probably have only two or three.
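To make “configuration coordinates” concrete, here is a minimal sketch, assuming we simply represent an application’s location in configuration space as an immutable map from axis name to position on that axis. (The class and method names here are invented for illustration; they are not part of any real API.)

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an application's configuration coordinates as a
// simple immutable Map of axis name (environment, phase, region, locale…)
// to the application's position on that axis.
public class ConfigurationCoordinates {

  private final Map<String, String> coordinates;

  public ConfigurationCoordinates(final Map<String, String> coordinates) {
    this.coordinates = Collections.unmodifiableMap(new HashMap<>(coordinates));
  }

  // The application's position on a given axis, or null if unknown.
  public String get(final String axis) {
    return this.coordinates.get(axis);
  }

  // The number of axes along which this application's position is known.
  public int dimensions() {
    return this.coordinates.size();
  }
}
```

Note that nothing here is hierarchical: the axes are just peers of one another.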

Next, a (flexible, well-written) application is typically unaware of most (if not all) of its own configuration coordinates.  (An application, in other words, from the standpoint of its running code, doesn’t “know” whether it’s in the test environment or not.)  But these coordinates are always, whether you know it or not, implicit in a semantic configuration request.

Next, configuration systems or sources are capable of supplying configuration values that are more or less specific along one or more of the configuration space axes.  A configuration system might supply a value that is maximally specific along the name axis (db.url), and minimally specific along (but suitable for) all possible other axes (environment, phase, region, etc.).  The holy grail is a value that is exactly suited for all of and exactly those configuration coordinates expressed by the application. Non-holy grails are configuration values that are still suitable by virtue of applying to a wide range of configuration coordinates.  This is the only place where hierarchies come into play: for any given configuration axis, a more-specific configuration value always trumps (I hate that word) a less-specific configuration value, but if a more-specific value does not exist, then the less-specific value is suitable.

This lets us start theorizing clumsily about a procedure where an application can ask for configuration values suitable for its configuration coordinates:

  • The application asks the configuration system for a value for the db.url configuration key and expects a String back.
  • The configuration system figures out what the application’s coordinates are, and re-expresses the request in terms of those coordinates.  db.url becomes a value for the—I don’t know—configurationKey axis.
  • The configuration system grabs all of its sources or subsystems or providers or whatever they should be called.
    • For each one, it asks it for a configuration value suitable for the supplied coordinates—and expects back, in return, not just the value, but also the coordinates for which it is explicitly suited.  Also of note: if the subsystem returns a value with no coordinates, then this value matched, but minimally specifically.  It’s suitable, but as a last resort.
    • Each subsystem responds appropriately.
  • The configuration system loops through the results.  Any result is presumed or enforced to be a match for at least the (hypothetical) configurationKey axis!  (That is, if I ask for foo, and I get a value for bar back, regardless of other coordinates, something went wrong.)
    • If there are no results, then there are no matches, and either null is returned or an error is thrown.
    • If there is exactly one result, then the value is returned.
    • If there is exactly one exact match, then the value is returned.
    • If there is more than one exact match, then an error is thrown.
    • If there are no exact matches:
      • The configuration system sorts the results in terms of specificity.  There are some details to work out here, but loosely and hazily and sketchily speaking if a set of configuration coordinates is expressed as a Map, then a Map with a bigger size (assuming the same universe of possible keys) is more specific, let’s say, than a Map with a smaller size. For example, {a=b, c=d} is a more specific set of coordinates (let’s say) than {a=b}.  The mere presence of a value indicates some kind of match, so it is possible for a configuration subsystem to return a value with, say, empty configuration coordinates.  This would indicate that the match is least specific.  So the specificity of a configuration value can be said, I think, to be equal to the number of configuration coordinates it reports as having matched on.  Tersely, the greater the specificity, the greater the Map‘s size.
      • If there is exactly one value with a given specificity, then it is returned—it is a suitable match.
      • If there is more than one value with the same specificity of coordinates, then this represents a misconfiguration and an error is thrown. For example, one subsystem might report that it has a value for {b=c, d=e} and another might report that it has a value for {d=e, f=g} when asked for a value suitable for {b=c, d=e, f=g}. Since configuration axes are not (by definition) hierarchical, this represents a misconfiguration.
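The heart of the procedure above, specificity-based selection, can be sketched in plain Java. This is only a clumsy illustration of the theorizing, not a real configuration API; every name here (ConfigurationValue, resolve and so on) is made up.

```java
import java.util.List;
import java.util.Map;

// A deliberately simplified sketch of the selection step described above.
public class ConfigurationResolver {

  // A candidate value together with the coordinates it explicitly matched on.
  // An empty matchedCoordinates map means "matched, but minimally specifically."
  public static final class ConfigurationValue {
    final String value;
    final Map<String, String> matchedCoordinates;
    public ConfigurationValue(final String value, final Map<String, String> matchedCoordinates) {
      this.value = value;
      this.matchedCoordinates = matchedCoordinates;
    }
  }

  // The specificity of a value is just the number of coordinates it matched on
  // (the Map's size, assuming the same universe of possible keys).
  static int specificity(final ConfigurationValue v) {
    return v.matchedCoordinates.size();
  }

  // Given candidate values from all subsystems (each already presumed to match
  // the requested configuration key), pick the single most specific one; throw
  // if two candidates tie at the winning specificity, since that represents a
  // misconfiguration.
  public static String resolve(final List<ConfigurationValue> candidates) {
    if (candidates.isEmpty()) {
      return null; // or throw, depending on policy
    }
    ConfigurationValue best = null;
    boolean tie = false;
    for (final ConfigurationValue candidate : candidates) {
      if (best == null || specificity(candidate) > specificity(best)) {
        best = candidate;
        tie = false;
      } else if (specificity(candidate) == specificity(best)) {
        tie = true;
      }
    }
    if (tie) {
      throw new IllegalStateException("Ambiguous configuration at specificity " + specificity(best));
    }
    return best.value;
  }
}
```

So a value matched on {a=b, c=d} beats a value matched on {a=b}, and two different values matched on coordinate sets of the same size are an error rather than a judgment call.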

OK, I think I’ll leave it there for now and chew on this over the weekend.

Thoughts on Configuration

So my previous pieces on CDI qualifiers have somehow led me into thinking very deeply about configuration, so here’s some stream-of-consciousness on the subject.

I’m aware that the de facto standard in this space is DeltaSpike’s configuration extension, and it works just fine as far as it goes.

I’ve also worked with Netflix’s Archaius, and way back in the day some of the Apache configuration libraries.

I’ve always been faintly bothered that at the heart of all these systems is (sometimes explicitly, sometimes just kind of present in “flavor”) a configuration hierarchy: you stack configurations and then different layers of the hierarchy supply different values.

But—and I might be wrong here—one thing that I’ve become more and more convinced of is that real world configuration is not a hierarchy.

We’ve already seen that there are some implicit qualifiers in a CDI application that end up being part of the configuration coordinates of the application.  That is, made up qualifiers like @Production (for environment or project stage), @Experimental (for phase or canary testing) and @UsWest (for something like region or datacenter) together with possibly other implicit qualifiers identify your application in configuration space.

Those things aren’t hierarchical. They’re cooperating aspects of the configuration space of the system.  Your configuration space may be one-dimensional (no test environment, no funky project stage, no data center things to worry about), or five- or sixteen-dimensional (your application can be deployed into all sorts of places within a huge configuration space).

Here’s how it seems to me that an interaction goes with such a configuration system:

My application: Hello, I would like a value for db.url please.
Configuration system: Certainly. I actually have several—some specific, some not.  Where are you?  Who are you?
My application: OK, well, I know that my configuration coordinates are phase=experimental, environment=production and region=uswest. Maybe that will help you.
Configuration system: Yes. Please hold. {Heads over to a remote corner of the office.} Hi, configuration source one?
Configuration source one: Yes?
Configuration system: Can you get me a value for db.url in the uswest region suitable for the production environment and the experimental phase?
Configuration source one: Let me see…no, but I could give you a value suitable for environment=production and phase=experimental…but I don’t have anything more specific than that (i.e. explicitly for region=uswest).  So the value I’d give you might not be maximally specific, but that might be OK for you?
Configuration system: OK, that might work. Hang on. Configuration source two?
Configuration source two: Yeah, I heard you guys. I can give you a value for all three aspects!
Configuration system: Great! Thanks. Configuration source one, you’re off the hook.

The point here is not the witty dialogue, but the fact that the most specific value wins, not the value that has a particular place in a hierarchy.

You could conceive of a situation with that same dialogue above, but one configuration source can give you a value suitable for the region and phase, and another can give you a value suitable for the phase and environment, but none can give you a maximally specific value.  Which one of those (region, phase) is more primordial in a hierarchy?  Answer: neither!  They’re two axes of configuration.  This thought experiment means only that your configuration is basically underspecified, not that some arbitrary source in a hierarchy should win.  You might want to do different things in the case of an underspecified configuration.  Probably you actually want to notify someone that values aren’t set right.

Or, consider the same dialogue, but this time all you get is two out of three (region and environment, let’s say).  In that case, it might be perfectly reasonable to take the value offered (i.e. since there aren’t any conflicts (which is more primordial? region or environment?), you can just take the value knowing that it’s as specific as you’re going to get).  Maybe the db.url configuration value varies only along the environment and region axes and not along the phase axis.  That might be fine.

Anyway, where am I going with this?  I am not sure, but, again, it sounds like the germs of a configuration system that would plug nicely into CDI.  Stay tuned.

CDI Qualifiers are Values, Part 2

In a previous post I wrote that CDI qualifiers are values.

After some more thinking, there are two classes of CDI qualifiers: those that are values for implicit aspects (arbitrary examples: @Synchronous (a value for synchronicity), @Yellow (a value for color)), and rarer ones—those that are both aspects and values for those aspects (as an example, @Named is the aspect; the value of its value element is the qualification value).  There aren’t a lot of the latter, but they do exist.

If you want to write a producer method that works with the latter class of qualifiers, then you must look at the element of the qualifier that represents its value and see if it is meta-annotated with @Nonbinding or not.

If it is, then you are all set: you can just put the qualifier on your producer method, and now your producer method will be able to produce all possible values for that qualifier.

If it is not, then in order to write producer methods you must know, in advance, all the members of the domain over which the qualifier’s value element can range.

Consider @Named as an example.  It is a qualifier that does not represent a value.  It represents an aspect.  Its value element holds the value that does the actual qualifying.  But its value element is not meta-annotated with @Nonbinding, so that means there’s no way to write one producer method that can produce items for injection points qualified with the @Named qualifier with arbitrary values.  That is, if your producer method is annotated with @Named(""), then the only injection point it will satisfy is one that is also qualified with @Named("").  If it is annotated with @Named("fred"), then it will produce objects suitable only for injection points that are also annotated with @Named("fred").

So if you knew all the possible names your application could ever use, you could write a producer method for each of them, annotated appropriately (one annotated with @Named("fred"), one annotated with @Named("alice"), and so on).  Of course, if you knew that, you could just define your own “regular” qualifier annotations: @Fred, @Alice and so on, with an implicit aspect of name.

Whoa, you think; this is cumbersome and in some cases impossible!

Let’s invent a qualifier of the second class called @Labeled, with a single value element. It’s basically kind of like @Named, but with one important difference: on this qualifier we’ll add @Nonbinding to its value element.
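A sketch of what the hypothetical @Labeled qualifier might look like follows. To keep the snippet compilable without the CDI API on the classpath, the two CDI-specific meta-annotations are shown only as comments; in a real bean archive you would uncomment them.

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical @Labeled qualifier, sketched with plain java.lang.annotation
// meta-annotations only.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER, ElementType.TYPE})
// @javax.inject.Qualifier  // marks this annotation as a CDI qualifier
public @interface Labeled {

  // @javax.enterprise.util.Nonbinding  // the crucial difference from @Named:
  // the container would ignore this element during typesafe resolution, so one
  // producer method annotated with just @Labeled could satisfy
  // @Labeled("fred"), @Labeled("alice") and anything else.
  String value() default "";
}
```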

Now all you have to do is mark your producer method with @Labeled, and you’ve effectively said, “I’ll take care of anything annotated with @Labeled, no matter what the requested value is.”  You’d better do that, because otherwise the container won’t start up.

This is of course very convenient and the way that a lot of configuration frameworks proceed.  Imagine a configuration framework backed by such a producer method: if someone asks for a String value to be injected at an injection point annotated with @Labeled("fred"), then it is easy for the producer method to go hunting for that value (however it does it) and return it.

There is a faint sense, though, in which this sort of thing doesn’t quite fit the CDI mindset. CDI “wants” you to know all your values in advance whenever possible: that’s why qualifiers are usually values themselves. Then the wiring is utterly explicit: the producer method annotated with @Yellow satisfies injection points also annotated with @Yellow. There is a way in which a producer method annotated with just @Labeled and an injection point annotated with @Labeled("fred") don’t quite plug into that way of thinking.  I have a hazy (possibly misguided) sense that there’s a better way of doing this sort of thing, but I don’t know what it is yet.

Anyway, all of this can serve as a helpful litmus test for clarifying your thinking and identifying which of your qualifiers need @Nonbinding on their elements and which do not.