Calling Maven Artifact Resolver From Within CDI 2.0

So I’ve been continuing to play with my CDI-and-linking idea, and central to it is the ability to locate CDI “modules” Out There In The World™.  The world, in this case, is Maven Central.  Well, or perhaps a local mirror of it.  Or maybe your employer’s private Nexus repository fronting it.  Oh jeez, we’re going to have to really use Maven’s innards to do this, aren’t we?

As noted earlier, even just figuring out which innards to use is hard.  I eventually worked out that the project formerly known as Æther, Maven Artifact Resolver, whose artifact identifier is maven-resolver, is the one to grab.

Then, upon receiving it and opening it up, I realized that the whole thing is driven by Guice—or, if you aren’t into that sort of thing, by a homegrown service locator (which itself is a service, which leads to all sorts of other Jamie Zawinski-esque questions).

The only recipes left over are from the old Æther days and require a bit of squinting to make work.  They are also staggeringly complicated.  Here’s a gist that downloads the (arbitrarily selected) org.microbean:microbean-configuration-cdi:0.1.0 artifact and its transitive, compile-scoped dependencies, taking into account local repositories, the user’s Maven ~/.m2/settings.xml file, active Maven profiles and other things that we all take for granted:
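The gist itself doesn't survive in this copy, so here is a hedged reconstruction of the service-locator-era recipe.  The class names come from the maven-resolver and maven-resolver-provider artifacts, but I've elided the settings.xml, mirror, and profile handling that makes the real thing so staggeringly long; treat this as a sketch, not the original:

```java
import org.apache.maven.repository.internal.MavenRepositorySystemUtils;
import org.eclipse.aether.DefaultRepositorySystemSession;
import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.artifact.DefaultArtifact;
import org.eclipse.aether.collection.CollectRequest;
import org.eclipse.aether.connector.basic.BasicRepositoryConnectorFactory;
import org.eclipse.aether.graph.Dependency;
import org.eclipse.aether.impl.DefaultServiceLocator;
import org.eclipse.aether.repository.LocalRepository;
import org.eclipse.aether.repository.RemoteRepository;
import org.eclipse.aether.resolution.DependencyRequest;
import org.eclipse.aether.resolution.DependencyResult;
import org.eclipse.aether.spi.connector.RepositoryConnectorFactory;
import org.eclipse.aether.spi.connector.transport.TransporterFactory;
import org.eclipse.aether.transport.file.FileTransporterFactory;
import org.eclipse.aether.transport.http.HttpTransporterFactory;
import org.eclipse.aether.util.artifact.JavaScopes;
import org.eclipse.aether.util.filter.DependencyFilterUtils;

public class OldSchoolResolution {

  public static void main(final String[] args) throws Exception {
    // Wire up the cheesy homegrown service locator by hand.
    final DefaultServiceLocator locator = MavenRepositorySystemUtils.newServiceLocator();
    locator.addService(RepositoryConnectorFactory.class, BasicRepositoryConnectorFactory.class);
    locator.addService(TransporterFactory.class, FileTransporterFactory.class);
    locator.addService(TransporterFactory.class, HttpTransporterFactory.class);
    final RepositorySystem system = locator.getService(RepositorySystem.class);

    // Set up a session pointing at the usual local repository location.
    final DefaultRepositorySystemSession session = MavenRepositorySystemUtils.newSession();
    session.setLocalRepositoryManager(
        system.newLocalRepositoryManager(session,
            new LocalRepository(System.getProperty("user.home") + "/.m2/repository")));

    // Collect and resolve the artifact and its compile-scoped transitive dependencies.
    final CollectRequest collectRequest = new CollectRequest();
    collectRequest.setRoot(new Dependency(
        new DefaultArtifact("org.microbean:microbean-configuration-cdi:0.1.0"), JavaScopes.COMPILE));
    collectRequest.addRepository(
        new RemoteRepository.Builder("central", "default", "https://repo.maven.apache.org/maven2/").build());

    final DependencyRequest dependencyRequest =
        new DependencyRequest(collectRequest, DependencyFilterUtils.classpathFilter(JavaScopes.COMPILE));
    final DependencyResult result = system.resolveDependencies(session, dependencyRequest);
    result.getArtifactResults().forEach(artifactResult -> System.out.println(artifactResult.getArtifact()));
  }

}
```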

That seems like an awful lot of work to have to do just to get some stuff over the wire.  It also uses the cheesy homegrown service locator which as we all know is not The Future™.

For my purposes, I wanted to junk the service locator and run this from within a CDI 2.0 environment, both because it would be cool and dangerous and unexpected, and because the whole library was written assuming dependency injection in the first place.

So I wrote a portable extension that basically does the job that the cheesy homegrown service locator does, but deferring all the wiring and validation work to CDI, where it belongs.

As if this whole thing weren’t hairy enough already, a good number of the components involved are Plexus components.  Plexus was a dependency injection framework and container from a while back that also had its own notions of what constituted beans and injection points.  It called them components and requirements.

So a good number of the internal Maven objects are annotated with Component and Requirement.  These correspond roughly—very, very roughly—to bean-defining annotations and injection points, respectively.

So I wrote two portable extension methods.  One uses the role element from Component to figure out what kind of Typed annotation to add to Plexus components.  The other turns a Requirement annotation with a hint into a valid CDI injection point with an additional qualifier.
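The actual methods in microbean-maven-cdi are more involved than this, but their shape is roughly as follows.  This is a sketch: using @Named to represent the hint is my illustrative choice, not necessarily what the project does.

```java
import javax.enterprise.event.Observes;
import javax.enterprise.inject.Typed;
import javax.enterprise.inject.literal.NamedLiteral;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.ProcessAnnotatedType;
import javax.enterprise.inject.spi.ProcessInjectionPoint;

import org.codehaus.plexus.component.annotations.Component;
import org.codehaus.plexus.component.annotations.Requirement;

public class PlexusAdaptingExtension implements Extension {

  // Use the Plexus @Component role to restrict the bean's types, roughly the
  // way @Typed would.
  private <T> void processPlexusComponents(@Observes final ProcessAnnotatedType<T> event) {
    final Component component = event.getAnnotatedType().getAnnotation(Component.class);
    if (component != null) {
      event.configureAnnotatedType().add(Typed.Literal.of(new Class<?>[] { component.role() }));
    }
  }

  // Turn a Plexus @Requirement with a hint into a valid CDI injection point
  // with an additional qualifier.
  private <T, X> void processPlexusRequirements(@Observes final ProcessInjectionPoint<T, X> event) {
    final Requirement requirement =
        event.getInjectionPoint().getAnnotated().getAnnotation(Requirement.class);
    if (requirement != null && !requirement.hint().isEmpty()) {
      event.configureInjectionPoint().addQualifier(NamedLiteral.of(requirement.hint()));
    }
  }

}
```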

(Along the way, these uncovered MNG-6190, which indicates that not very many people are even using the Maven Artifact Resolver project in any way, or at least not from within a dependency injection container, which is, of course, how it is designed to be used.  That’s a shame, because although it is overengineered and fiddly to the point of being virtually inscrutable, it is, as a result, perhaps, quite powerful.)

Then the rest of the effort was split between finding the right types to add into the CDI container, and figuring out how to adapt certain Guice-ish conventions to the CDI world.

The end result is that the huge snarling gist above gets whittled down to five or so lines of code, with CDI doing the scope management and wiring for you.
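With the gist missing from this copy, here is a sketch of roughly what usage shrinks to.  Whether a RepositorySystemSession is itself directly injectable this way is an assumption on my part; see the microbean-maven-cdi project for the real API surface.

```java
import javax.inject.Inject;

import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.RepositorySystemSession;
import org.eclipse.aether.artifact.DefaultArtifact;
import org.eclipse.aether.collection.CollectRequest;
import org.eclipse.aether.graph.Dependency;
import org.eclipse.aether.resolution.DependencyRequest;
import org.eclipse.aether.resolution.DependencyResult;
import org.eclipse.aether.util.artifact.JavaScopes;
import org.eclipse.aether.util.filter.DependencyFilterUtils;

public class Resolver {

  @Inject
  private RepositorySystem repositorySystem;

  @Inject
  private RepositorySystemSession session;

  public DependencyResult resolve() throws Exception {
    // CDI has already assembled a RepositorySystem and a session that honor
    // ~/.m2/settings.xml, local repositories, active profiles and so on.
    final CollectRequest collectRequest = new CollectRequest()
      .setRoot(new Dependency(
          new DefaultArtifact("org.microbean:microbean-configuration-cdi:0.1.0"), JavaScopes.COMPILE));
    return this.repositorySystem.resolveDependencies(
      this.session,
      new DependencyRequest(collectRequest, DependencyFilterUtils.classpathFilter(JavaScopes.COMPILE)));
  }

}
```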

This should mean that you can now relatively easily incorporate the guts of Maven into your CDI applications for the purposes of downloading and resolving artifacts on demand.  See the microbean-maven-cdi project for more information.  Thanks for reading.

Maven and the Project Formerly Known As Æther

I am hacking and exploring my ideas around CDI and linking and part of whatever that might become will be Maven artifact resolution.

If, like me, you’ve been working with Maven since the 1.0 days (!), you may be amused by the long, tortured path the dependency resolution machine at its core has taken over the years.

First, there was Maven artifact resolution baked into the core.

Then Jason decided that it might be better if this were broken off into its own project.  So it was transferred to Sonatype, the company he was running at the time, became Sonatype Æther, and rapidly grew to incorporate every feature under the sun except an email client.

Then people got very enamored of Eclipse in general, and so Æther, without any changes, got moved to the Eclipse Foundation.  Then the package names changed and a million trillion confused developers tried to figure things out on StackOverflow.

Then it turned out that no one other than Maven and maybe some of Jason’s companies’ work was using Æther in either Sonatype or Eclipse form, so Eclipse Æther has just recently been—wait for it—folded back into Maven.  Of course, the Eclipse Æther page doesn’t (seem to?) mention this, and if you try to go to its documentation page then at least as of this writing you get a 500 error.  If you hunt a little bit harder, you find another related page that finally tells you the whole thing has been archived.

So anytime you see the word Æther you should now think Maven Artifact Resolver.

But make sure that you don’t therefore think that the name of the actual Maven artifact representing this project is maven-artifact-resolver, because that is the old artifact embodying the 2009 version of Maven’s dependency resolution code!  The new artifact name for the Maven Artifact Resolver project is simply maven-resolver.  Still with me?

Fine. So you’ve navigated all this and now you want to work with Maven Artifact Resolver (maven-resolver), the project formerly known as Some Kind of Æther.

If, like me, you want to use the machinery outside of Maven proper, then you are probably conditioned, like me, to look for some sort of API artifact. Sure enough, there is one.  (Note that for all the history I’ve covered above, the package names confusingly still start with org.eclipse.aether.)

Now you need to know what implementation to back this with. This is not as straightforward as it seems. The Maven project page teases you with a few hints, but it turns out that merely using these various other Maven projects probably won’t get you where you want to go, which in most cases is probably a Maven-like experience (reading from .m2/settings.xml files, being able to work with local repositories, etc. etc.).  (Recall that the project formerly known as Some Kind of Æther and now known as Maven Artifact Resolver expanded to become very, very flexible at the expense of being simple to use.  It can read from repositories that aren’t just Maven repositories, so it inherently doesn’t know anything about Maven, even though it has now been folded back into the Maven codebase.)

Fortunately, in the history of all this, there was a class called MavenRepositorySystemUtils.  If you search for this, you’ll find its javadocs, but these are not the javadocs you’re looking for.  Let’s pretend for a moment they were: you would, if you used this class, be able to get a RepositorySystem rather easily:

import org.apache.maven.repository.internal.MavenRepositorySystemUtils;
import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.spi.locator.ServiceLocator;

// Ask the homegrown service locator to assemble a RepositorySystem for us.
final ServiceLocator serviceLocator = MavenRepositorySystemUtils.newServiceLocator();
assert serviceLocator != null;
final RepositorySystem repositorySystem = serviceLocator.getService(RepositorySystem.class);
assert repositorySystem != null;

But these javadocs are for a class that uses Eclipse Æther, so no soup for you.

So now what?  It turns out that that class still exists and has been refactored to use Maven Artifact Resolver (maven-resolver), but its project page and javadocs and whatnot don’t exist (yet?).  The artifact you’re looking for, then, turns out to be maven-resolver-provider, and as of this writing exists only in its 3.5.0-alpha-1 version.

So if you make sure that artifact is available then you’ll be able to use the Maven Artifact Resolver APIs to interact with Maven repositories and download and resolve dependencies.

CDI and Linking

So in the brave new world of JDK 9, you may have heard of its nascent module system.

A tutorial of the Java 9 module system is way beyond the scope of this blog, so see the Project Jigsaw page or the already quite outdated State of the Module System document and try not to accidentally read any more out-of-date documentation than you have to (it’s harder than you think).

Core to the module system is the concept of declaring what modules you (if you’re a module) require, what packages you export, what services you use and what service implementations you provide.

It struck me that CDI already has something analogous to such a module system.  This is not revolutionary, but it helped me see a few things clearly. As always this blog is mainly a stream of consciousness, so buyer beware.

First, a CDI “module” is, if you squint hard enough, a bean archive. (A bean archive is, by default, a classpath location with a META-INF/beans.xml resource in it. There are other ways to define bean archives.)

Next, a bean archive does not require other archives, but does require bean implementations, in the sense that if an injection point “inside” it isn’t satisfied the container won’t come up.  So if my archive contains a bean that injects a Frobnicator, then hopefully some archive somewhere offers up, or produces, a Frobnicator implementation to satisfy that injection point.

A bean archive doesn’t explicitly export anything, but it does produce certain bean implementations.  This is a granular form of exporting.

(Finally, all of this looks and smells like service usage and service furnishing/provisioning, but of course takes context and object lifecycles and decoration and so forth into account so it’s a heck of a lot more powerful.)

OK so fine, there’s a loose analog between these two systems. Just for fun, let’s define some terms that will make hazy sense to CDI developers and Java 9 module system fanatics alike:

  • There’s requires, let’s say. That encompasses a bean using either @Inject or, say, an interceptor binding annotation or some other form of indicating a dependency on a bean or a bean-like object (like an interceptor or decorator).  So my bean archive can require something of your bean archive.
  • There’s produces, let’s say. That encompasses a bean archive offering up an implementation capable of satisfying an injection point. This could take the form of packaging up a managed bean, or producing something from a producer method, etc. Presumably bean archives that produce one thing satisfy bean archives that require that thing. So your bean archive produces the thing my bean archive requires.
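In plain CDI code, with Frobnicator as a made-up type, the two halves of that vocabulary look like this:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

interface Frobnicator {
}

// My bean archive "requires" a Frobnicator...
class MyBean {

  @Inject
  private Frobnicator frobnicator;

}

// ...and your bean archive "produces" one, either by packaging a managed bean...
@ApplicationScoped
class FrobnicatorImpl implements Frobnicator {
}

// ...or by producing one from a producer method.
class FrobnicatorProducer {

  @Produces
  Frobnicator produceFrobnicator() {
    return new FrobnicatorImpl();
  }

}
```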

If we stopped there, we have enough for basic usage.  Imagine a world where I require a Frobnicator, and you produce a Frobnicator implementation.  If I knew that, I would grab your project, put it on the classpath, and then when the container came up my Frobnicator injection point would be satisfied.  Things would be great.

Now expand that a bit. Let’s say that it’s not just you who offers up a Frobnicator implementation. Let’s say that woman over there also has a Frobnicator implementation. How did I find her stuff—and yours for that matter? How did I decide to make my CDI application (a pile of bean archives, exactly one of which creates an SeContainer) use her stuff instead of yours?

(If you’re like most Java developers, the answer is something like a noxious cocktail of StackOverflow and search.maven.org and stuff you read on Twitter or Reddit the other day. That’s crummy.)

Let’s expand this even more.  Suppose your archive contains a JAX-RS javax.ws.rs.core.Application bean.  There is some sense in which (using my previous terminology) you produce this bean, but there’s another sense in which it is not really usable properly unless the thing that uses it actually deploys it and serves it up in a specification-compliant manner.  That sense has to do with deployment, so let’s say that a bean archive can declare that it can deploy certain bean types, and another bean archive can say that it publishes certain bean types.  A published bean type, let’s say, is one that needs to be notionally deployed in an environment that is subordinate to and encapsulated from the overarching CDI container environment (like an HttpServer launched inside a CDI container; see a prior post of mine on this subject).

What I regard as a strength of CDI (and what the Java module system authors, I suppose, regard as a weakness; if so, I strongly disagree) is that CDI’s modular concepts are loosely coupled.  My bean archive can inject interfaces that a second regular jar file provides, and your bean archive can implement or produce them, and I don’t need to know about your archive at all until it comes time to choose what implementation of my injection points I want.

What if we could make this kind of discovery and linkage a little easier?

The Java module system has a tool called jlink.  It takes in a (locally present) Java 9 module graph and produces a hairball (they call it a “runtime image”) that encapsulates all the modules found in a compact, deliberately unspecified executable format.

What if there were a cdiLink tool that were kind of like that?

What if this cdiLink tool could somehow read some indexed data somewhere about your business logic pieces, and about various (say) Maven Central artifacts and identify all of these things as bean archives producing and requiring and deploying and publishing various bean types?

If you go back and read some of my previous posts on composition-based programming with CDI 2.0, you can see that thanks to the way that CDI discovers bean archives that are locally present you don’t even need to write, say, your own main() method. Someone else can do that and make a bean archive available on Maven Central (say) with it in there.

If you read some of my previous posts on politely blocking the CDI container, you can see that you don’t even have to write or deploy your own container-or-server-reliant application. Someone else can do that and make it available on Maven Central (say) with it in there.

So now imagine if a bean archive could declare, in some easily indexable manner, that it requires certain bean types, and produces other bean types.  Imagine further that some repository could be queried for such things by a cdiLink tool, and a user could then select between various producers interactively.

Then you could take your (extraordinarily minimal!) application (maybe just a JAX-RS endpoint and nothing else!), and cdiLink it as part of the development process.

cdiLink might go through the following, one or more times, using my terminology above:

  • I noticed you have a JAX-RS Application class and its attendant root resource classes.  I know [who knows how, haven’t gotten that far yet] that this means you are publishing the Application bean type.
  • That means I have to find someone in the universe who deploys beans of type Application.
  • Ah! I have found a Jersey-and-Grizzly-based bean archive Out There In The World™ that claims that it deploys Application instances.  I have also found a Netty-and-RESTEasy-based bean archive that claims it does the same.  Which would you like to use?  The Jersey-based one?  Very well, I’ll use that one.
  • Next, I have found two basic boilerplate implementations of the standard CDI container startup pattern.  One is called Fred, and the other is called Joe.  Would you like to use Fred, or Joe?  If you don’t care, I’ll pick Fred.  OK.  Fred it is.
  • Finally, of course, which CDI implementation do you want to use? Weld or OpenWebBeans? Weld? OK.  (Obviously I’ll include any transitive dependencies it has!)
  • Please hold while I assemble your executable hairball [which may just be a classpath; recall that one of the bean archives has a main class in it].
  • OK, the hairball is built.  Here it is.  Just run it.

I think this could be quite powerful, especially when combined with the easy ability to add other bean archives to the classpath (perhaps with interceptors, decorators and alternatives that are more suitable for the final runtime environment the resulting application might find itself in).

Furthermore, the whole tool itself might be implementable as a portable extension.

OK, that’s enough sketchy thoughts for one evening.  Thanks for reading.

Blocking the CDI container politely

If, like me, you’re looking for a way to run other containers (like web servers) inside a CDI container, you may have been frustrated by the annoying propensity of the Java Virtual Machine to exit helpfully when no non-daemon Threads remain alive.

(If you’re not yet like me, you may want to start with my post series CDI 2.0 and read up to here.)

Specifically, if you try to launch, say, an HttpServer, you will discover that the simplest way to do this will not prevent the CDI container from simply exiting right out from under you: the server spawns daemon Threads that you can’t get a handle on and that don’t prevent the Java Virtual Machine from exiting.  So the only non-daemon thread in existence will be the container thread, and it will run to completion; then your HttpServer daemon threads will all be killed, the Java Virtual Machine will exit, and you will be out of luck.

You could just (as I did, misguidedly, in an earlier proof of concept) join() on the main container thread.  But you don’t want to do this.  You don’t really want to mindlessly block the CDI container thread, since you don’t really know where in its lifecycle you’re doing the blocking.  Also, there may be other CDI beans that wish to be notified of context initialization and may want to do things that have nothing to do with your container. Finally, Weld developers become very concerned if you even suggest doing such a thing. 😀

And you don’t want to get in the way of any shutdown hooks that might be installed by certain industry-leading CDI implementations at particular moments in the portable extension lifecycle.

Lastly, you want to make sure that CTRL-C still works on any program you execute that uses CDI 2.0, and that it doesn’t prevent the normal CDI container lifecycle cleanup from running normally.

Here’s how I did it.

First, you want the blocking behavior to be such that if you were to unblock it in some way, the regular container shutdown semantics would still happen.  That is, @BeforeDestroyed(ApplicationScoped.class) events and BeforeShutdown portable extension events would still be fired normally.

And, as we said, you want the blocking behavior not to actually prevent the main CDI thread from doing its business in its ordinary way.  (Part of its business might very well be to install shutdown hooks, which will be the only way to unblock things if you wish later on.)

So at some level really you don’t want to block the main container thread at all.  But at another hazy level, you do.  Paradox!

To resolve this contradiction, we’re going to have to block some other non-daemon thread so that the JVM won’t exit, but the main container loop can still do its normal business.
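Stripped of all CDI machinery, the blocking primitive is just a CountDownLatch awaited on a thread that isn’t the main one.  Here is a self-contained sketch of that idea; the class and method names are mine, and a helper thread stands in for the CTRL-C shutdown hook:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class BlockingSketch {

  public static String demo() throws InterruptedException {
    final CountDownLatch latch = new CountDownLatch(1);

    // In the real extension a JVM shutdown hook calls countDown(); here a
    // helper thread simulates CTRL-C arriving a little later.
    final Thread unblocker = new Thread(() -> {
      try {
        TimeUnit.MILLISECONDS.sleep(100L);
      } catch (final InterruptedException interruptedException) {
        Thread.currentThread().interrupt();
      }
      latch.countDown();
    });
    unblocker.start();

    // The blocked thread keeps the JVM alive without holding up anyone else's
    // work; countDown() releases it cleanly.
    latch.await();
    return "unblocked";
  }

  public static void main(final String[] args) throws InterruptedException {
    System.out.println(demo()); // prints "unblocked" once the latch opens
  }

}
```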

But merely starting a non-daemon Thread and then blocking it won’t work.  True, the Java Virtual Machine won’t exit, but the container thread—now not being blocked—will simply very quickly run to completion, and now we’ll have a bunch of HttpServer daemon threads out there that were started from within a CDI container but now don’t have a CDI container in play, and a blocked thread, and an unresponsive Java Virtual Machine, and probably sixteen other kinds of horrible disasters just waiting to happen. So we are not going to do this. 😀

Instead, it would be really nice if the container could somehow manage or otherwise be aware of this other non-daemon thread so that blocking it would somehow happen only during the “open for business” portion of the container lifecycle, and in a legal way.  Then we would achieve our seemingly paradoxical goals of both blocking the CDI container and not blocking the CDI container’s main thread.

Fortunately, there is a way to do this: use asynchronous events.

The general approach will be to define a portable extension that does the following things:

  1. Creates a CountDownLatch with an initial count of 1 and stores it as an instance variable.  This will be our blocking mechanism.
  2. Installs a shutdown hook that, should it ever be called, will simply call countDown() on the latch.
  3. Starts an HttpServer (or HttpServers) when the application scope is initialized.  This will spawn a daemon thread that we don’t have any control over.  Then we’ll store the HttpServer so we can refer to it later.
  4. Fires an asynchronous event that can be received only by the portable extension that indicates, basically, that we’re done starting servers.
  5. In the asynchronous observer, which by definition runs on a different thread, calls await() on the CountDownLatch, thus blocking the observer thread, which is managed by the container.

Most of the above is pretty straightforward, but the asynchronous event reception is worth a closer look.

The CDI specification tells us this about firing asynchronous events:

Event fired [sic] with the fireAsync() method is fired asynchronously. All the resolved asynchronous observers (as defined in Observer resolution) are called in one or more different threads.

The language here is not very specific.  If we have six asynchronous observers, will they all be called on one additional thread or six additional threads, or, say, three?  Luckily in our case this ambiguity doesn’t matter, as we will define the only possible asynchronous observer for the event we’re going to fire, and we’re guaranteed that the thread it runs on will be “different” from the main CDI container thread.

Furthermore, although the specification says nothing about this, we can surmise that the container will be biased in favor of letting an asynchronous observer thread run to completion, since obviously it has no idea what that thread is doing.  That means, in other words, that the container will almost certainly block while our asynchronous observer thread is still alive, waiting for it to finish.  The nice thing is that the container is in charge of deciding when to block, which is exactly what we want.

So back to the recipe.  Here’s our constructor, satisfying the first two steps in our recipe:
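The embedded code is missing from this copy, but given the recipe the constructor presumably looks something like this (a sketch; the class and field names are mine):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.CountDownLatch;

import javax.enterprise.inject.spi.Extension;

import org.glassfish.grizzly.http.server.HttpServer;

public class HttpServerExtension implements Extension {

  private final CountDownLatch latch;

  private final Collection<HttpServer> servers;

  public HttpServerExtension() {
    super();
    // Step 1: the blocking mechanism.
    this.latch = new CountDownLatch(1);
    this.servers = new ArrayList<>();
    // Step 2: CTRL-C (or any JVM shutdown) unblocks whoever is await()ing.
    Runtime.getRuntime().addShutdownHook(new Thread(this.latch::countDown));
  }

}
```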

Next, we’ll start our HttpServers when the application scope comes up:
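A hedged sketch of that method follows.  I’m assuming Jersey’s GrizzlyHttpServerFactory here, a servers collection on the extension, and an illustrative URI; the original gist may differ in the details.

```java
// A method on the portable extension class.  Imports assumed:
//   javax.enterprise.context.ApplicationScoped, javax.enterprise.context.Initialized,
//   javax.enterprise.event.Observes, javax.enterprise.inject.spi.BeanManager,
//   org.glassfish.grizzly.http.server.HttpServer,
//   org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory,
//   org.glassfish.jersey.server.ResourceConfig
private void startHttpServers(@Observes @Initialized(ApplicationScoped.class) final Object event,
                              final BeanManager beanManager) {
  // Step 3: Grizzly starts daemon threads we don't have any control over.
  final HttpServer server = GrizzlyHttpServerFactory.createHttpServer(
      java.net.URI.create("http://localhost:8080/"),
      new ResourceConfig() /* your JAX-RS application */);
  this.servers.add(server);
  // TODO: fire the asynchronous "we're done starting servers" event (step 4)
}
```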

At line 43 above you can see that we need to fire an event signaling that we’re done starting servers.  We’re in a portable extension, so we’ll need to do this from the BeanManager—a portable extension’s container lifecycle observer method may have only two parameters, and the second parameter, if present, must be of type BeanManager. We also want to make sure that our portable extension is the only possible source of this event and the only possible receiver.

So first we’ll define a simple event object as a private inner class:
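The event class can be as trivial as the following; ServersStarted is a name of my invention.

```java
// Private to the extension: no other bean can observe or fire this event type.
private static final class ServersStarted {

  private ServersStarted() {
    super();
  }

}
```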

And we’ll set up an asynchronous observer method:
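Its shape is roughly this (again a sketch; ServersStarted is my stand-in name for the private event class, and latch is the extension's CountDownLatch):

```java
// Step 5: runs on a container-managed thread guaranteed to be different from
// the main container thread; block it until the latch opens (CTRL-C).
// Import assumed: javax.enterprise.event.ObservesAsync
private void block(@ObservesAsync final ServersStarted event) throws InterruptedException {
  this.latch.await();
}
```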

…and now we’ll replace that TODO item above with the one-liner that will fire the asynchronous event:
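The one-liner is presumably along these lines, with ServersStarted standing in for whatever the private event class is called:

```java
// Step 4: fire the private event asynchronously via the BeanManager; only our
// own @ObservesAsync method can receive it.
beanManager.getEvent().select(ServersStarted.class).fireAsync(new ServersStarted());
```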

Putting the starting method together, then, it looks like this:

Finally, more or less orthogonally to all this, let’s define what should happen when the CDI container shuts down, no matter how that should happen:
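A sketch of that observer, assuming the extension keeps its Grizzly HttpServers in a servers collection, might be:

```java
// No matter how the container shuts down, make an attempt to stop any servers
// we started.  Imports assumed: javax.enterprise.context.ApplicationScoped,
//   javax.enterprise.context.BeforeDestroyed, javax.enterprise.event.Observes,
//   org.glassfish.grizzly.http.server.HttpServer
private void stopHttpServers(@Observes @BeforeDestroyed(ApplicationScoped.class) final Object event) {
  for (final HttpServer server : this.servers) {
    server.shutdownNow();
  }
}
```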

So we know that no matter how the container decides to shut down, this logic will ensure that we at least make an attempt to gracefully shut down any HttpServer we might have started.

If we put it all together:

  • If for any reason at any point CTRL-C is pressed, then we effectively disable all blocking behavior (or unblock any thread we’re currently blocking).  We do this by using CountDownLatch’s facilities.
  • We start HttpServers and stash them away for later shutdown.  Daemon threads out of our control are created and started by Grizzly.
  • We fire an asynchronous event that causes the CDI container to start or allocate a managed thread.  On that thread, we block, using the await() call of the CountDownLatch.  This prevents the CDI container from shutting down until CTRL-C is received.
  • Finally, we define what happens if the CDI container shuts down for any reason—namely we shut down any HttpServers we’ve started.

I hope you can see that this allows the running of a container within a CDI container while allowing the CDI container to function and shut down normally.

Dynamic CDI Producer Methods

I’m exploring some of my configuration ideas and ran into a pattern that I recognize from prior experience and wanted to document how to resolve it here.

I started off by implementing things without regard to CDI.  So just programmatic APIs, getters, setters, builders—that sort of thing.

In my case, I have a set of Converters.  A Converter is something that can take a String in and convert it to a different kind of Object.  The Converter also stores the Type that it serves as a converter for.

So I have indexed these Converters in a Map under their Type.  If you have a Type, you can get a Converter and use it to transform a raw configuration value.  Simple.

The Converters come from a META-INF/services/com.foo.bar.Converter entry, so they are effectively chosen by the user.

How could I integrate this subsystem into CDI? That is, instead of making the user get a Type and do a programmatic lookup (anytime you see the word “lookup” you should be suspicious), can I allow her to just ask CDI to inject a properly converted value?

Well, the first thought is: producer methods.  A Converter can produce some kind of object, and the kind of object it can produce is encoded in its Type, so the pseudocode looks like this:

@Produces
@ConfigurationValue // or whatever
@Produces
@ConfigurationValue // or whatever
public <T> T produceConvertedValue(final InjectionPoint injectionPoint) {
  final Type type = injectionPoint.getType();
  final Converter<T> converter = getTheRightConverter(type);
  return converter.convert(getTheRightStringValue(injectionPoint));
}

Ah, but producer methods must return a concrete type!  That is, the method above can’t return T: it has to return Integer, or Long, or Double, or whatever the actual type is.

But if our converters are stored as META-INF/services/com.foo.bar.Converter entries, we don’t know when we’re writing this method what the concrete types are!  How are we supposed to write a number of producer methods for a set of Types when we don’t know how many Types there are?

This is one of those patterns that should ring the portable extension bell.  Any time you have some “setup” to do based on user-supplied configuration or information that you don’t have available to you when you are writing code—any time this occurs, you should be thinking portable extension.  As I (previously) and Antoine Sabot-Durand have shown, they are not difficult or esoteric or something that only experts can write.  They are simply how you do things dynamically in CDI—where you do setup work before the container comes up and is locked down and cannot be tweaked further.  This is one of those times.

Let’s think about what we want to do.  We want to first make sure this fledgling configuration framework is available in CDI—i.e. that the thing that houses the Set of Types and their related Converters is available as a CDI bean.  As I said above, I designed things originally so that they have nothing to do with CDI, so no annotations, so that means we’ll have to programmatically add the main Converter housing object (called Configurations, as it happens) as a bean. That’s really easy.

Then we are going to want to get its set of Types somehow and do some work for each of them.  Specifically, for each of them we want to install the equivalent of a producer method that returns objects of that Type.  We also want these producer methods to be able to satisfy injection points like the following:

@Inject
@ConfigurationValue("frobnicationInterval")
private Integer frobnicationInterval;

If we do all this successfully, then the user can just ask for a configuration value of a particular type, and if the String that is the configuration value can be converted to that type, then it will be, and the user will simply get the converted value. If the user changes the META-INF/services/com.foo.bar.Converter entries in some sort of bad way, like by removing an entry for IntegerConverter, then the CDI container will detect this at container startup time. This is all good stuff and very CDI-ish.

So now we can survey our toolbox. We’re in portable extension land, so we should be looking at container lifecycle events and the BeanManager.

The object that houses the Converters is called Configurations in my fledgling framework, so we need to make it be a CDI bean in a particular scope. We can get it to be added to the container by programmatically sticking an ApplicationScoped annotation on it (since ApplicationScoped is a bean-defining annotation):

private final void addConfigurations(@Observes final BeforeBeanDiscovery event) {
  if (event != null) {
    event.addAnnotatedType(Configurations.class, "configurations")
      .add(ApplicationScoped.Literal.INSTANCE);
  }
}

What’s that "configurations" String doing there?  When you add an annotated type, you specify the Class you’re adding, but remember that this will result in a new AnnotatedType object.  You can have many of these per Class (that’s a little mind-bending, but remember you can add and remove annotations programmatically—that’s the reason), so you need to provide a way to talk about the type in particular that you’re adding here.  That takes the form of a String identifier, which here we’ve just made up—we just use "configurations".  You can read what little information there is on this model of things in the meager discussion around CDI-58.  At any rate, we’re just doing a very simple add of a single AnnotatedType here, and then we’re not actually going to use that identifier again, so in some sense it hardly matters.

So here we’ve “made it look like” Configurations was annotated all along with the ApplicationScoped annotation.  Since this is during the BeforeBeanDiscovery event, this means that now as the container starts its work of locating CDI beans, it will pick this one up.  We’ve accomplished our first goal.

The second goal is trickier.  First, what facilities do we have to work with producer methods?

Part of navigating the CDI API landscape is to think about these things:

  • Are you in portable extension land?
  • Are you reacting to some sort of lifecycle event?
  • Are you doing some sort of programmatic bean lookup?

We know we’re in portable extension land.  We also know that we’ll have to do a programmatic bean lookup, because we need to get the Configurations object (that houses our Converters and Types).  Finally, we know we’re going to need to add some producer methods in some way, so this is usually done in reaction to a lifecycle event (we’ll see which one shortly).

Recall that we’re trying to create producer methods dynamically based off a Set of Types obtainable from the Configurations object.  So first, let’s just get that Set of Types.

To do this, we’ll need a Configurations object.  It would be nice to simply ask for one to be injected, but we’re in portable extension land, and that means the container isn’t really up yet.  We’ve pointed it in the direction of the Configurations class (see the code above), so if it were up, we could inject a Configurations object, but, well, we’re out of luck here.  That means programmatic bean lookup, and so that means BeanManager operations.

The BeanManager is the handle you get to the CDI container itself, even while it’s coming up.  Any portable extension can observe a container lifecycle event, and specify a BeanManager-typed second parameter in its observer method.  If it does this, then the BeanManager will be supplied.  Easy.  Armed with a BeanManager, we can call its createInstance() method, and now we have an Instance&lt;Object&gt;, which means that now we can use it to select(Configurations.class).  That will give us an Instance&lt;Configurations&gt;, and then we can call its get() method, and we’ll get a CDI-managed Configurations object.

OK, we’ll file that away: we know now how to look up a Configurations object so we can get its Set of Types.  Then, once we have that Set, we have to loop over it and programmatically add producer methods.

If we’re going to programmatically add beans (producer methods are beans), then there’s really only one container lifecycle event that supports that, and that’s AfterBeanDiscovery. This means the container has completed whatever automatic scanning it was set up to do (which may be none), and, in the absence of any portable extension doing anything, is about to start validating things so it can actually start.

In our case, we are going to do something: we’re going to add a bunch of beans!

Let’s start by making our observer method, and in it let’s get a Configurations object:
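A minimal sketch of that observer, using the BeanManager lookup described above (the method name and the getConversionTypes() accessor are hypothetical stand-ins; the real gist differs in detail):

```java
import java.lang.reflect.Type;
import java.util.Set;

import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.AfterBeanDiscovery;
import javax.enterprise.inject.spi.BeanManager;

// Sketch of the AfterBeanDiscovery observer.  Configurations is the class
// we made discoverable earlier; getConversionTypes() is a hypothetical
// stand-in for however it exposes its Set of Types.
private void addDynamicProducers(@Observes final AfterBeanDiscovery event,
                                 final BeanManager beanManager) {
  // Programmatic lookup: createInstance() yields an Instance<Object>;
  // select() narrows it; get() returns a CDI-managed Configurations.
  final Configurations configurations =
    beanManager.createInstance().select(Configurations.class).get();
  final Set<Type> types = configurations.getConversionTypes();
  // ...loop over types and add producer-method beans.
}
```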

Now the hard part.

So the AfterBeanDiscovery event would seem to be our savior.  It has the addBean() method, which returns a BeanConfigurator, which, among other things, has a produceWith(Function) method!  And the parameter to that function is an Instance&lt;Object&gt;, which could give us any parameters we might otherwise write in a “normal” producer method!  So we could just supply an appropriate function, make a few select() calls, and boom, there’s our dynamic producer method!

Alas.

In our case, if we were writing a “normal” producer method, one of the parameters we would need that method to have is an InjectionPoint describing the site of injection for which the producer method will provide satisfaction.  In the function supplied to the produceWith(Function) method—the candidate dynamic producer method—if you try to do instance.select(InjectionPoint.class), you do not get an InjectionPoint that describes the place for which your dynamic producer function will provide values.  You get some weird InjectionPoint that describes something about the Instance object itself, which is of course completely unsuitable.  So produceWith(Function) is out.

Good grief; so now what?

Let’s write a “normal” producer method, just to make some headway, but we’ll just say it returns Object, and we won’t add the @Produces annotation.  We’ll call this our almost-producer method.  We’ll put this method in our extension class:
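A sketch of such an almost-producer method (the method name is hypothetical, and getValue(injectionPoint) hand-waves over however Configurations actually resolves a value):

```java
import javax.enterprise.inject.spi.InjectionPoint;

// The "almost-producer" method: shaped exactly like a producer method, but
// it returns Object and deliberately has no @Produces annotation.  It is
// static, so no declaring bean instance will be needed to invoke it.
// getValue(injectionPoint) is a hypothetical stand-in for however
// Configurations resolves a value for the injection point's type and
// qualifiers.
private static final Object produceConfigurationValue(final InjectionPoint injectionPoint,
                                                       final Configurations configurations) {
  return configurations.getValue(injectionPoint);
}
```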

If we were to add a @Produces annotation here, the container would happily accept this.  The problem is it would only work for injection points where the type was (exactly) Object, which is not what we want:

@Inject
@ConfigurationValue("frobnicationInterval")
private Object frobnicationInterval; // this will work, but…

But in all other ways, this is a suitable producer method—and hence a suitable CDI bean.  To think about this a little differently, all we need to do is to create a CDI bean with a specific bean type that uses this method as its “producer body”.

Fortunately, the BeanManager has two methods here that will help us out greatly.

The first is the createBeanAttributes(AnnotatedMember) method. To understand this, let’s talk briefly about BeanAttributes.

A CDI bean is fundamentally two things:

  1. A piece of descriptive information that says what its types are, what annotations it has, and so on.  A BeanAttributes represents this part.
  2. Some means of producing its instances so that a CDI Context can manage those instances.  There are various ways of representing this part.

As we said above, for each Type available in our Configurations object, we’re trying to “create a CDI bean with a specific bean type”—namely, that Type—that uses a particular method we’ve already written (see above) as a means of creating its instances.

The createBeanAttributes(AnnotatedMember) method, then, basically introspects a producer method (or an “almost-producer” method, in our case), derives information about it and represents that information in a new BeanAttributes.  Sounds good!

Working backwards, we’ll need to get an AnnotatedMethod to represent that almost-producer method we wrote. Then we can call the createBeanAttributes(AnnotatedMember) method on it. The recipe looks like this:
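A sketch of that recipe, run inside the observer method (MyExtension is a hypothetical name for our portable extension class, and the method name matches the almost-producer method we wrote):

```java
import javax.enterprise.inject.spi.AnnotatedMethod;
import javax.enterprise.inject.spi.AnnotatedType;
import javax.enterprise.inject.spi.BeanAttributes;

// MyExtension is a hypothetical name for our portable extension class,
// which houses the almost-producer method above.
final AnnotatedType<MyExtension> extensionType =
  beanManager.createAnnotatedType(MyExtension.class);

// Find the AnnotatedMethod representing our almost-producer method.
AnnotatedMethod<? super MyExtension> producerMethod = null;
for (final AnnotatedMethod<? super MyExtension> method : extensionType.getMethods()) {
  if ("produceConfigurationValue".equals(method.getJavaMember().getName())) {
    producerMethod = method;
    break;
  }
}

// Ask the container to derive BeanAttributes from it.
final BeanAttributes<?> producerAttributes =
  beanManager.createBeanAttributes(producerMethod);
```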

If we inspect that producerAttributes object, we will see that the return value of its getTypes() method only has Object in it (reflecting the return type of our almost-producer method).  If we could somehow cause this value to become an arbitrary type of our choosing, then we would have all the raw materials we need.  Let’s write a very simple delegating BeanAttributes class:
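A sketch of that class, delegating every BeanAttributes method except getTypes():

```java
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.Set;

import javax.enterprise.inject.spi.BeanAttributes;

// A BeanAttributes that forwards everything to a delegate, overriding
// only getTypes() so we can substitute a type set of our choosing.
public class DelegatingBeanAttributes<T> implements BeanAttributes<T> {

  private final BeanAttributes<?> delegate;
  private final Set<Type> types;

  public DelegatingBeanAttributes(final BeanAttributes<?> delegate, final Set<Type> types) {
    this.delegate = delegate;
    this.types = types;
  }

  @Override
  public Set<Type> getTypes() {
    return this.types; // the one piece we override
  }

  @Override
  public Set<Annotation> getQualifiers() {
    return this.delegate.getQualifiers();
  }

  @Override
  public Class<? extends Annotation> getScope() {
    return this.delegate.getScope();
  }

  @Override
  public String getName() {
    return this.delegate.getName();
  }

  @Override
  public Set<Class<? extends Annotation>> getStereotypes() {
    return this.delegate.getStereotypes();
  }

  @Override
  public boolean isAlternative() {
    return this.delegate.isAlternative();
  }
}
```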

So now we have the means to create a BeanAttributes describing our new bean (the bean describing the kinds of things produced by our almost-producer method), and we have a method that can actually create things of that kind.

To link them together into a real CDI bean, we need to make use of the BeanManager#createBean(BeanAttributes, Class, ProducerFactory) method. We’ll supply this with a DelegatingBeanAttributes instance that supplies the right Type in the return value of its getTypes() method, our extension class, and…wait, what’s a ProducerFactory?

A ProducerFactory is (very briefly put) a programmatic representation of a producer.  Producers in general come in two flavors: producer fields and producer methods.  Recall that producers are beans: things with metadata (types, annotations, etc.) and means of production.  A producer field is a bean whose metadata comes from details about the field itself (its type, its annotations) and whose means of production is, quite simply, the field itself.  Similarly, a producer method is a bean whose metadata comes from details about the method itself and the return value of the method (its type, the method’s annotations) and whose means of production is an invocation of the method.

So what, do we have to write one ourselves?  No, not exactly.  Once again, we can ask the container for the ProducerFactory it would use behind the scenes if our almost-producer method were in fact a producer method.  You do this by calling the BeanManager#getProducerFactory(AnnotatedMethod, Bean) method.

The first parameter: hey, we know how to get that. We actually have it in our hands already, since we needed an AnnotatedMethod for the container to give us a BeanAttributes for it.

The second parameter: this is the bean housing the producer method you’re trying to get the ProducerFactory for. In our case, our almost-producer method is static, so there’s no instance of any object that has to be created so that this almost-producer method can be invoked on it. So we can pass null here.
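With both parameters accounted for, the call itself is a one-liner.  A sketch, where producerMethod is the AnnotatedMethod we already have in hand and MyExtension is a hypothetical name for our extension class:

```java
import javax.enterprise.inject.spi.ProducerFactory;

// The container hands us the ProducerFactory it would have built if our
// almost-producer method were a real producer method.  The declaring-bean
// argument is null because the method is static.
final ProducerFactory<MyExtension> producerFactory =
  beanManager.getProducerFactory(producerMethod, null);
```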

Let’s pause in all of this and take stock.

  • We’re in a lifecycle method that lets us add things to the container.
  • We have a set of Types that we’ve dynamically discovered that we want to do things with.
  • We have the means of creating a BeanAttributes representing a producer method of an arbitrary type.
  • We have the means of marrying this BeanAttributes together with a “template method” to create a CDI bean of a particular type.

So let’s do it!
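A sketch of that final loop, stitching together the pieces described above (MyExtension and the variable names are hypothetical, echoing the earlier steps):

```java
import java.lang.reflect.Type;
import java.util.Collections;

// For each dynamically discovered Type, register a new bean whose
// attributes are the almost-producer method's (with its types overridden)
// and whose means of production is the container-supplied ProducerFactory.
for (final Type type : types) {
  final BeanAttributes<Object> attributes =
    new DelegatingBeanAttributes<Object>(producerAttributes,
                                         Collections.singleton(type));
  event.addBean(beanManager.createBean(attributes, MyExtension.class, producerFactory));
}
```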

I cannot emphasize enough how powerful this is.

We’ve taken a dynamic set of user-supplied information, and set the container up so that it can ensure that typesafe resolution will apply even over this set.

This general pattern I’m sure can be widely applied and I hope to have more to say on it in the future.  Thanks for reading!