CDI Production

Dependency injection involves making things. Here’s a condensed version of how things are made in CDI. You may also be interested in my larger CDI tutorial.

This is a living document that I intend to update frequently. I may break it up into separate posts later.

We’re going to start from the bottom and inside, and work our way up and out. That means we’ll look at the low-level, internal machinery of CDI first, which most users rarely see. Then we will work up and out to annotated classes, from which the low-level constructs are derived, and which are what users create and manipulate.

Finally, terms in CDI can refer to each other, so it is definitely a challenge to present things in order. There will be times where I will ask you to take things on faith, and then will circle back to address them in more detail once the terms I need to talk about them are more firmly in place.

OK, off we go.

Contextuals

At the bottom of CDI you have Contextuals. Contextuals are stateless factories that create and destroy contextual instances. Think of a contextual instance, for now, as a regular Java object, although they can be more complicated than that.

One kind of Contextual is a Bean. A Bean is a Contextual paired with some descriptive and lifecycle-related attributes. We’ll assume that Beans are the only kind of Contextual in the world, so we’ll just talk about them instead.
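
For orientation, the Contextual contract is tiny. Paraphrased, it looks roughly like this (the real interface is jakarta.enterprise.context.spi.Contextual; Bean extends it, adding the descriptive and lifecycle-related attributes just mentioned; ignore the CreationalContext parameter for now, as we'll cover it much later):

public interface Contextual<T> {

  // Create a new contextual instance, performing dependency injection, interception
  // and decoration setup as needed (we'll get to those later).
  T create(CreationalContext<T> creationalContext);

  // Destroy a contextual instance that this Contextual created earlier.
  void destroy(T instance, CreationalContext<T> creationalContext);

}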

Beans and Contextual Instances

The lifecycle of any contextual instances that a Bean creates is not under its control. It is instead reified by a Context that the Bean is indirectly affiliated with. A Context doesn’t create or destroy contextual instances, but it controls when they are created by Beans, stores them, and controls when they are destroyed by Beans.

When a Context needs a new instance—because, for example, it has been asked for one and doesn’t have one already stored—it always asks an affiliated, appropriate Bean to create one. This is why an instance created by a Bean and stored by a Context is called a contextual instance. We will discuss why a Context might be asked for a contextual instance, and by whom, soon.

Even though we can’t really fully talk about them, there are some other characteristics of contextual instances that are worth mentioning up front, just to locate them in architectural space. They’re worth mentioning now because it is a Bean‘s create method that is ultimately responsible for them, not some other piece of CDI machinery. Don’t worry if you don’t understand these characteristics yet.

First, a contextual instance may be intercepted or decorated. Because of this, it may be a proxy object—proxying is a common strategy for implementing interception and decoration—or it may not. A Bean does not have to implement decoration or interception at all, but if they are to be implemented, they must be implemented, ultimately, by a Bean‘s create method, in whatever way it chooses. A Bean does not necessarily have to use proxying to implement these features. Don’t worry that we haven’t discussed interception, decoration or what it means for something to be a proxy object yet.

Second, under no circumstances will a contextual instance be a client proxy, a very specific CDI term that we will also cover later (that is sadly unrelated to the possible proxying discussed above). Beans never create client proxies. This will be relevant when we discuss dependency injection in more detail. For now, it is just an interesting fact using a term that has yet to be defined.

Third, a contextual instance is often an instance of a type that depends on other types. A Unicycle interface might depend on a Wheel class, for example: when a new Unicycle implementation of some kind is created (by a Bean), it may need to be supplied with a Wheel object. Acquiring and supplying these objects is, loosely speaking, dependency injection. We can’t fully talk about dependency injection yet, but for now just know that it is a Bean‘s create method’s job to perform it.

So much for Beans and the basics of who calls them and what they do.

Contexts

Rising up and out a level, low-level machinery within CDI can ask a Context for a contextual instance of something. It supplies a Context with a relevant Bean, and asks for an appropriate contextual instance that the Context manages on behalf of that Bean. The Context, in turn, may discover that it does not have an appropriate contextual instance yet, and so might turn around and ask the supplied Bean to create one. A Context never creates contextual instances itself.

A Context can be active or not with respect to a given Thread. For a given scope, there can be only one Context active for a Thread at a time, and a Context’s activeness can change at any point for any reason.

Scopes

A Context is the partial or full implementation of a notional lifecycle represented by the concept of a scope. This means a Context stores a contextual instance created by a Bean for some period of time, for a given Thread. Or, perhaps it never stores anything but just arranges for contextual instances to be created every time it is asked for them. That is legal too, and in fact CDI depends on the existence of such a Context.

If a Context stores contextual instances, it typically (but, I suppose, not necessarily) does so by affiliating them in storage with the Bean that created them. This means that a Bean is both a mechanism for creating a contextual instance, and, often, a key to identify that contextual instance later.

As noted above, a Context implements part, or all, of a scope. A scope is a concept representing a notional lifecycle and is associated with one or more Contexts. Sometimes people talk about scopes and Contexts as if they were the same thing. They are not, since a scope may be implemented by one or more Contexts. Finally, a scope is identified by its scope type, a notion that exists only to identify it. A scope may be identified by only one scope type, and a scope type identifies only one scope.

A scope type is reified by a Java annotation type that itself must be annotated with either jakarta.inject.Scope or jakarta.enterprise.context.NormalScope. A scope is said to be a normal scope if its reifying annotation type is annotated with jakarta.enterprise.context.NormalScope, and a pseudo scope otherwise. (There are ways in CDI to programmatically register other kinds of arbitrary annotation types as scopes, but for now we’ll ignore those.) We will talk about scopes—particularly normal scopes—more fully in a little bit.
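
For example, a custom normal scope type might be declared roughly like this (WorkshopScoped is a made-up name; the built-in scope annotations follow the same pattern):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import jakarta.enterprise.context.NormalScope;

@NormalScope // use jakarta.inject.Scope here instead to declare a pseudo scope
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
public @interface WorkshopScoped {}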

A Context has a method that can retrieve its associated scope type.
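
That method is part of the Context contract, which, paraphrased, looks roughly like this (the real interface is jakarta.enterprise.context.spi.Context; again, ignore the CreationalContext parameter for now):

public interface Context {

  // The scope type identifying the scope this Context partially or fully implements.
  Class<? extends Annotation> getScope();

  // Return the contextual instance affiliated with the supplied Contextual,
  // asking the Contextual to create one if necessary.
  <T> T get(Contextual<T> contextual, CreationalContext<T> creationalContext);

  // Return the existing contextual instance affiliated with the supplied Contextual, or null.
  <T> T get(Contextual<T> contextual);

  // Whether this Context is active with respect to the current Thread.
  boolean isActive();

}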

Linking Scopes, Contexts, Beans and Contextual Instances

Low-level machinery within CDI often finds itself armed with just a Bean and the need to get a contextual instance that a given Bean can create. To do this, it does not ask the Bean to create such an instance directly. If it did, it would bypass the useful lifecycle management machinery that Contexts implement.

Instead, it asks the Bean for its scope type—something every Bean has as part of its descriptive and lifecycle-related attributes—and then asks other low-level CDI machinery for all the Contexts that also have that scope annotation type and thus collectively implement the corresponding scope (usually there’s just one). From the resulting set of Contexts, it gets the one that is active for the current Thread. Then it passes the Bean to the Context and asks for a corresponding contextual instance. The contextual instance that is returned may have been a cached one or a new one; the caller does not know.
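
Sketched in code, assuming a Bean<Unicycle> and a BeanManager are already in hand (and glossing over the CreationalContext argument, which we’ll cover later), that flow looks roughly like this:

// A sketch of the low-level lookup flow; not actual container code.
Class<? extends Annotation> scopeType = bean.getScope();  // every Bean carries its scope type
Context context = beanManager.getContext(scopeType);      // the Context active for the current Thread
Unicycle instance = context.get(bean, creationalContext); // may be cached or newly created; the caller can't tell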

So much for the basics of Contexts and how they interact with Beans.

Normal and Pseudo Scopes

Rising up and out another level, recall that a scope may be a normal scope or a pseudo scope. A normal scope is, loosely speaking, one whose contract guarantees the user that any objects she sees whose types logically belong to that scope will be suitable no matter where they are used.

For example, consider a normal scope representing a request-oriented lifecycle, and a caller who takes delivery of an object whose type logically belongs to that request scope. Wherever in her system that object shows up, regardless of what she does with it, it should be the right object, i.e. the one for the current request, or should be invalid in some way if there is no current request, and so on. So she can’t just take delivery of an ordinary contextual instance, because an object can’t replace itself with another in some magic way.

To help with this, if a Context indicates that it implements a normal scope (remember, a Context can supply its scope annotation type, as can the Beans that make the contextual instances it manages), then a CDI implementation must do some prescribed things to ensure the normal scope contract is honored.

First, from the user’s perspective, if she receives a Unicycle object from as-yet-unspecified machinery that plays by the normal scope contract rules, that Unicycle object must behave like a “real” Unicycle object. Methods that she can invoke on a “real” Unicycle object must work as expected. instanceof tests must succeed. And so on.

Second, when she invokes a method on her received Unicycle object, it must, because of normal scope rules, somehow just-in-time find the “real” underlying Unicycle that is appropriate for the usage. In CDI, the “real” underlying Unicycle in this usage scenario will always be a contextual instance definitionally managed by a Context partially implementing the normal scope in question. We have seen how a Context supplies contextual instances, so we can see a glimmer of a way that this might be implemented generally. Then the method in question that the user invoked must arrange for this “real” Unicycle contextual instance that was found to receive the same method invocation in such a way that the user is none the wiser.

This is a classic example of proxying. A proxy in this general sense is a wrapper that (a) behaves as if it were a “real” object itself but actually (b) forwards all calls it receives to the “real” object it notionally wraps. Typically embedded within its innards is a strategy for finding the “real” object just-in-time (or perhaps it requires the real object to be supplied to it initially) and re-invoking on the “real” object whatever method the user invoked on it. Here, our user has taken delivery of a proxy that behaves just like a Unicycle but looks up the “real” Unicycle contextual instance just-in-time under the covers when a method is called on the proxy.

Client Proxies and Contextual References

A proxy object like this that implements these rules of CDI’s normal scopes is a very specific kind of proxy. It is known in CDI as a client proxy. A user who takes delivery of objects from upper-level CDI machinery receives client proxies when the lower-level machinery recognizes the Bean in play is affiliated with a normal scope. (There are some other edge cases where client proxies might also be involved, but we won’t cover them here.)

A client proxy always wraps, or delegates to, a contextual instance served up (definitionally) by a Context. A client proxy never wraps, or delegates to, another client proxy. As we’ve seen earlier, a Bean never creates a client proxy. We haven’t yet talked about what does create a client proxy, where exactly they come from, how long they live, how you might use one or detect that you are using one, or any of that, but we will soon.

A client proxy is one of two kinds of contextual reference. A contextual reference is either a client proxy or a contextual instance stored and supplied by a Context implementing a pseudo scope. Ordinary users doing ordinary things always interact with contextual references and ideally don’t know whether they are client proxies or contextual instances.

A caller can ask a BeanManager for a contextual reference. The getReference(Bean,Type,CreationalContext) method accepts:

  • a Bean (so that Bean can ultimately be supplied to a Context when contextual instances need to be retrieved)
  • a Type representing how that contextual reference will be used, i.e. a type that the contextual reference must implement
  • a CreationalContext which is not relevant for our purposes yet

(Additionally, the Type needs to designate a kind of contextual instance the Bean in question can create.)

The BeanManager will ensure that the right kind of contextual reference—properly implementing at least the supplied Type—will be returned. If the scope governing the lifecycle of the kind of reference to be returned is a normal scope, then the contextual reference that is returned will be a client proxy (that knows how to get the right kind of contextual instance when needed, and that forwards method calls to it transparently). If instead the scope is a pseudo scope, then the contextual reference that is returned will be a contextual instance. (Recall from earlier that the contextual instance may still be another kind of proxy, e.g. to handle interception and decoration, but it will never be a client proxy.)
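
In code, the acquisition looks roughly like this (a sketch; how unicycleBean was found is covered elsewhere, and the CreationalContext is still being hand-waved):

// A sketch of asking a BeanManager for a contextual reference.
CreationalContext<Unicycle> cc = beanManager.createCreationalContext(unicycleBean);
Unicycle reference = (Unicycle) beanManager.getReference(unicycleBean, Unicycle.class, cc);
// If unicycleBean's scope is a normal scope, reference is a client proxy;
// if it is a pseudo scope, reference is a contextual instance.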

Client Proxy Creation and Rules

When the contextual reference that is returned is a client proxy, CDI does not dictate how a CDI implementation must implement it, but does dictate precisely how such a client proxy must behave. It also imposes some restrictions on proxiable types to make it easier for CDI implementations to build it.

First, any Type that the client proxy must implement can’t denote a final class. This makes sense because one common way of implementing a proxy for a type is to extend the type in question and override its methods to find the “real” object and dispatch the relevant method invocation to it. Obviously if a class is final, or its methods are final, a CDI implementation cannot build a client proxy that extends it and overrides them.

Second, any Type that the client proxy must implement that is a Class (as opposed to an interface) must have a zero-argument constructor. This is so that a CDI implementation can actually create an instance of the class to be proxied, so that it can return it to the caller, and so that it is at least possible to do using something as simple as Class::getDeclaredConstructor. Remember, the client proxy itself is not a contextual instance. Rather, it is an object that wraps the machinery to find one and to delegate method invocations to it. It is not therefore managed by a Context. It is created “by hand” by the CDI implementation when needed. The constructor is allowed to be public, protected or package-level.

Some CDI implementations use proprietary methods to avoid even this restriction, but that behavior is not portable.

The most common strategy employed by CDI implementations to create client proxy classes is to use just-in-time bytecode generation. With this strategy, a CDI implementation typically has a repeatable algorithm to generate a proxy class name given the name of a “real” class and Bean information. If that generated class is not already found, a bytecode generation library such as ASM is used to generate a class with that name that extends the “real” class (or that implements the “real” interface) and plays by very specific rules.

Once the client proxy class is generated, it is instantiated in whatever way the CDI implementation wants to instantiate it. After all, it generated the code! Weld chooses to do this in a very flexible manner that I’ve written about previously. (The simplest way for a CDI implementation to do this is of course to simply invoke a zero-argument, non-private constructor reflectively, as noted above.)

Once this client proxy extending or implementing at least this particular Type is created, it is basically reusable, so most implementations choose to stash it away somewhere so that all this code generation and instantiation doesn’t have to happen again.

The client proxy so created now has to obey the rules that govern all client proxies. Specifically, it must, for every business method invoked on it:

  • Get the relevant contextual instance (the “real” object). We’ve seen how that works in general.
    • This can work here because at the time that the BeanManager::getReference call is made, and the client proxy class is generated, the Bean in question is supplied, so the generated client proxy code has access to the Bean‘s information and can save it away (Beans are immutable). This means it can find the proper Context from which to acquire the right kind of contextual instance because it can ask the Bean for its scope annotation, and can then ask a BeanManager for a corresponding active Context.
    • We also know that it is the responsibility of a Bean to perform dependency injection, interception and decoration (if applicable), though we still haven’t delved into those topics yet. So when the contextual instance is “loaded” by the innards of the client proxy, it is fully ready for business.
  • Invoke the method in question on the contextual instance. There’s no further processing that happens.

You can see from these (very simple) rules that any logic related to interception, decoration and dependency injection must be located in the Bean‘s create method, as noted several times above. That is, client proxies are quite straightforward and just dispatch method calls. They don’t do any interception or further proxying themselves.
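
To make those rules concrete, here is a deliberately naive, hand-written sketch of what a generated client proxy amounts to, treating Unicycle as an interface with a hypothetical pedal() method (real implementations generate bytecode, cache lookups, and handle many more cases):

import jakarta.enterprise.context.spi.Context;
import jakarta.enterprise.inject.spi.Bean;
import jakarta.enterprise.inject.spi.BeanManager;

// A naive client proxy sketch; not how any real CDI implementation writes it.
public class UnicycleClientProxy implements Unicycle {

  private final Bean<Unicycle> bean;       // saved away at proxy creation time; Beans are immutable
  private final BeanManager beanManager;

  public UnicycleClientProxy(Bean<Unicycle> bean, BeanManager beanManager) {
    this.bean = bean;
    this.beanManager = beanManager;
  }

  @Override
  public void pedal() {
    // Rule 1: find the "real" contextual instance just in time.
    Context context = beanManager.getContext(bean.getScope());
    Unicycle instance = context.get(bean, beanManager.createCreationalContext(bean));
    // Rule 2: forward the invocation; no interception, decoration or further proxying here.
    instance.pedal();
  }

}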

So much for contextual references and one of their subtypes (client proxies). The main takeaway is: If a user calls BeanManager::getReference, all the hard stuff is handled for her.

Acquiring Contextual References in Beans

Let’s circle back all the way down to the lowest level again and look, again, at the lowly Bean and, this time, at its create method. We’ll do this because, recall, dependency injection must be performed (if it is to be performed at all) in this method, and to “do” dependency injection in CDI a Bean will need to acquire contextual references (the dependencies to inject), which we’ve just learned about.

As we’ve seen, the Bean::create method can do whatever it wants to make a contextual instance. We’ve also seen that there is a method on BeanManager that allows the acquisition of contextual references for a given Bean and Type.

It doesn’t matter who is authoring a Bean implementation. It could be an ordinary user, who is using CDI’s portable extension facilities to install that Bean into the CDI implementation, or it could be the CDI implementation itself, when everything starts up and it inspects annotated classes for relevant annotations and creates Bean objects to represent them. All Beans in the system use their create methods to create contextual instances.

Usually a Bean can get access to a BeanManager from within its create method. For example, when the CDI implementation itself is creating a Bean object, often it installs a BeanManager into its Bean implementation. Or in a custom Bean put together in a portable extension, often a BeanManager is available as a “reachable” object in the enclosing portable extension method. Regardless, what is important is that it is relatively trivial for a Bean to get its hands on a BeanManager inside its create method.

Let’s consider a Bean implementation that a CDI implementation builds, i.e. not a custom Bean installed via an end-user portable extension, but one built as part of the CDI implementation itself to represent an annotated class. Let’s say the Bean in question is the one that was creating Unicycle instances in our earlier example.

With just the tools covered above, even if the Unicycle class has a constructor that takes a Wheel object, you can see—perhaps faintly—that using the BeanManager the CDI implementation vendor employee implementing this Bean create method can:

  • first call BeanManager::getReference on the available BeanManager and pass in Wheel.class as the Type. Assuming she can also use that BeanManager to find a Bean that makes Wheel-typed contextual instances, she can pass that Bean too to the getReference call, and she’ll get back a Wheel-typed contextual reference as we’ve discussed above, that she knows will be suitable for this usage point, regardless of what scope the Wheel may belong to
  • then call the ordinary constructor on the Unicycle class that takes a Wheel object and supply it with the contextual reference she just acquired
  • then return the plain simple Unicycle object as-is (assuming, for simplicity, that interception and decoration are not applicable here).

Note that the only slightly odd thing this user has to do is acquire a contextual reference to a Wheel so that she can then stuff it in the Unicycle constructor. She didn’t have to generate any bytecode or do anything magic with proxies or any of that. If some other user now asks a BeanManager for a contextual reference implementing the Unicycle type, they’ll get a contextual reference to this Unicycle contextual instance. If the scope in question for the Bean in question is a normal scope, then automatically the contextual reference they receive will be a client proxy.
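
A rough sketch of such a create method follows, treating Unicycle here as a concrete class with a Wheel-accepting constructor as in the prose above (bean resolution is simplified and error handling is omitted):

// A sketch of the create method of a Bean<Unicycle>; beanManager is assumed to be reachable.
@Override
public Unicycle create(CreationalContext<Unicycle> cc) {
  // Find a Bean that makes Wheel-typed contextual instances.
  Bean<?> wheelBean = beanManager.resolve(beanManager.getBeans(Wheel.class));
  // Acquire a Wheel contextual reference suitable for this usage point,
  // regardless of what scope Wheel may belong to.
  Wheel wheel = (Wheel) beanManager.getReference(wheelBean, Wheel.class, cc);
  // Call the ordinary constructor and return the plain Unicycle object as-is
  // (assuming, for simplicity, that interception and decoration are not applicable here).
  return new Unicycle(wheel);
}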

More to come.

Weld and Client Proxy Creation

(Taking a break from my blog post series in progress to write down stuff that I stepped through today on how Weld creates client proxies.)

The CDI specification says that if you have a managed bean in a normal scope (think: class with something like @ApplicationScoped or @RequestScoped on it), it must have a non-private, zero-argument constructor (and must not be final) so that a CDI implementation can proxy it.

You may have noticed when using Weld’s implementation of CDI SE that this seems not to be required. I can do this:

@ApplicationScoped // normal scope
public class B {
  private B() {} // hmm, seems to violate specification
  @Override public String toString() { return "B"; }
}

…and can inject an instance of that wherever I like:

@Dependent
public class A {
  @Inject
  public A(final B b) { // hmm; how does CDI/Weld make b?
    super();
    System.out.println(b); // you'll see "B" on the console
  }
}

Here is how that works.

When Weld’s CDI SE implementation starts up, it looks for a configuration item that indicates relaxed construction. This can be supplied in a few different ways, but the easiest way to supply it is by setting the org.jboss.weld.construction.relaxed System property to a textual representation of a boolean value (i.e. “true” or “false”). In Weld’s implementation of CDI SE, if you do nothing, the value of this configuration item is effectively true. In Weld’s implementation of CDI as found in application servers, the value of this configuration item is effectively false. This is worth noting.

First, the easy path: if for whatever reason relaxed construction is not enabled, then we stop here. My example above will fail and Weld will correctly tell you that it has no way to create a B instance because B is “unproxyable [sic]” according to the rules laid out by the specification.
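
One minimal way to see this for yourself (a sketch using the standard CDI SE bootstrap; the property name is the one given above, and A and B are the classes from the example):

import jakarta.enterprise.inject.se.SeContainer;
import jakarta.enterprise.inject.se.SeContainerInitializer;

public class Main {
  public static void main(String[] args) {
    // Turn relaxed construction off before Weld SE starts; Weld will then
    // complain that B is unproxyable instead of quietly working around it.
    System.setProperty("org.jboss.weld.construction.relaxed", "false");
    try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
      container.select(A.class).get();
    }
  }
}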

Let’s assume that relaxed construction is enabled. Weld begins by looking for a ProxyInstantiator implementation:

https://github.com/weld/core/blob/be7382b01c4a56c54f92873c1c2ebf0445714bfe/impl/src/main/java/org/jboss/weld/bootstrap/WeldStartup.java#L335

That causes the create method to be called on the ProxyInstantiator.Factory class with access to the configuration subsystem:

https://github.com/weld/core/blob/151e1fedcc16d6d2dfec3ecdf1c095f75fdd995d/impl/src/main/java/org/jboss/weld/bean/proxy/ProxyInstantiator.java#L128-L129

The create method begins by assuming that the ProxyInstantiator that will be used is the DefaultProxyInstantiator:

https://github.com/weld/core/blob/151e1fedcc16d6d2dfec3ecdf1c095f75fdd995d/impl/src/main/java/org/jboss/weld/bean/proxy/ProxyInstantiator.java#L90

Then, if relaxed construction is enabled (which it is in this example), Weld will try two other hard-coded implementations in order, using the first “valid” one (we’ll see what that means shortly):

https://github.com/weld/core/blob/151e1fedcc16d6d2dfec3ecdf1c095f75fdd995d/impl/src/main/java/org/jboss/weld/bean/proxy/ProxyInstantiator.java#L91-L103

The first of these implementations is the UnsafeProxyInstantiator, whose instantiation strategy is to use the sun.misc.Unsafe class (redirected in modern JDKs to the jdk.internal.misc.Unsafe class) to create an instance of a class without using constructors at all:

https://github.com/weld/core/blob/151e1fedcc16d6d2dfec3ecdf1c095f75fdd995d/impl/src/main/java/org/jboss/weld/bean/proxy/UnsafeProxyInstantiator.java#L47-L49

This is worth noting because you might be logging proxy instantiation inside your zero-argument constructor, but if this strategy is selected for whatever reason, your constructor won’t be called. I’ve personally been burned by this and have now seen others burned by it as well.

If that UnsafeProxyInstantiator class is not available or can’t be used for any reason, then a second non-standard ProxyInstantiator implementation is tried instead, which uses sun.reflect.ReflectionFactory under the covers (which in modern JDKs is sort of redirected to jdk.internal.reflect.ReflectionFactory). This class will happily use a private zero-argument constructor:

https://github.com/weld/core/blob/151e1fedcc16d6d2dfec3ecdf1c095f75fdd995d/impl/src/main/java/org/jboss/weld/bean/proxy/ReflectionFactoryProxyInstantiator.java#L52-L60

(In this case of course your private zero-argument constructor will be called so any logging you do in there will show up.)


Finally, if neither of these non-standard instantiation strategies works, then the already-constructed DefaultProxyInstantiator is used instead, which does what you think it does, and adheres to the standard:

https://github.com/weld/core/blob/be7382b01c4a56c54f92873c1c2ebf0445714bfe/impl/src/main/java/org/jboss/weld/bean/proxy/DefaultProxyInstantiator.java#L42-L44

That is how the proxy object itself is created. Note that this does not create the actual underlying instance. For that, a private constructor is just fine (in Weld’s CDI implementations, anyway).

Note also that the underlying instance is not created until a business method on the proxy is invoked. Note as well that any method defined by java.lang.Object, other than toString(), is not considered a business method.

Hopefully this helps someone!

On Portability

I’m primarily (as always) talking Java, here. This post has a larger purpose but I’m not there yet. If all goes according to plan this post will make sense with some others to follow. At the moment it probably doesn’t make much sense. If that’s your thing, read on. You may be interested in the prior post in this series.

What is portability?

“Portable” just means capable of being carried. If you can pick it up and put it down, it’s portable. If you can take it from one “place” to another, it’s portable. My knapsack is portable. My piano is not, at least by me alone. Nor, really, is my crushing sense of self-doubt, but that’s another story.

Unless you’re just waving your hands, in computers and software when you’re talking about portability you have to talk about where the carrying is happening. Usually the word “across” or “between” is involved: A program might be portable across operating systems; a framework extension might be portable across different implementations of the framework; and so on.

In computers and software, we also usually add concepts of functionality and immutability to this. An application or a binary or a script is portable across computers or environments if, when you pick it up from one computer or environment and put it down on or into another computer or environment without changing it, it still works or can work. A pure Java application is portable across operating systems (or should be) because assuming you have java lying about at the destination you can pick up your CatsLOL.class file from a Windows computer and put it down on a Linux computer and run it in the same way without changing it. A binary resulting from a compiled and linked C program may not be (and usually is not) portable from one operating system to another.

A software component (like a library or a jar file or an individual Java class that is not a program) is also portable, even if you’re not switching operating systems or languages. You can pick a component up and put it (or a copy of it) down in multiple applications and incorporate it that way. In some sense you have “carried” it from wherever it was to several different destinations without changing it. This can happen even if you leave it in place: dynamic loading of libraries and classes is kind of a form of carrying, if you look at it right; the program doing the dynamic loading imports the library or class, thus notionally carrying it from one place to its own address space. “Reusability” is another (awful) word for this, along with other real winners like “composability”.

There are other sorts of more abstract things (let’s restrict ourselves to the software industry) that can be carried from one “place” to another and used without modification. If I leave one employer and go to another, I take my brain (hopefully) and experience with me, or at least the parts that are not signed away to the former employer somewhere. Publicly available stuff I learned from books, videos, websites, reference manuals and even certain source code may be portable from one work environment to another and I may be able to get up to speed more quickly as a result.

Things can be more or less portable. Sometimes something is 100% portable provided that the new environment it is being carried to is juuuuuuust right. If it is, then you put the thing down, it plugs into the new environment and runs exactly the same way as it did in the old environment. A pure Java program is a good example of this. A Java program that relies on a native library, by contrast, may find in the new environment that the environment-specific native library it needs for that environment is missing. If that library is put in the right place, then everything works. Another Java-centric example is: a Java framework extension or participant may be more or less portable depending on which features of the framework it uses or extends and how likely those features are to exist across the environments the Java framework extension or participant might be ported to. Then, even more abstractly, my knowledge of the JAX-RS API is fully portable from one job to another to the extent that the new job mandates proper use of the JAX-RS API. My knowledge of C++, on the other hand, probably isn’t very portable from one job to another because C++ permits lots of flavors and styles and maybe the old job and new job feature completely different styles of C++ programming. Also I’m joking about my knowledge of C++. So is everyone else.

To talk about software portability, particularly application portability, you often have to talk about platforms and platform implementations, because often what you’re really saying is that a given application is portable across a given platform’s multiple implementations. So then: an application is fully portable across platform implementations to the extent that there is more than one implementation of that platform and it doesn’t matter which platform implementation you pick to run it. Pick it up from one platform implementation; put it down in another: did it run? It’s portable! Congratulations! I probably didn’t write it.

So is a Jakarta EE application fully portable? It can be. If your application follows the platform specification, then you know it will (at least theoretically) run on platform implementations A and B in exactly the same way. If it uses features from one platform implementation, then you cannot necessarily pick it up from platform implementation A and run it unchanged in platform B, because platform B might not have those features.

Is a CDI SE application fully portable? This is sort of a nonsensical question, because CDI SE is not really a platform, but a framework. Regarding application portability, therefore, the answer is no. Now, certainly a CDI SE component (an extension, a bean, etc.) can be portable between CDI SE implementations and can be reused in various CDI SE applications: it can be picked up and carried from one CDI SE program and repackaged into another CDI SE program. If it uses Weld APIs, though, for example, then it is not fully portable across CDI SE implementations (like OpenWebBeans).

Is a Spring application fully portable? Yes and no. A Spring application packaged as a fat jar is just a Java program, so yes, you can port it from one Java environment to another, but given Java’s program portability promises this is almost tautological. Or, if you like: there aren’t two implementations of the Spring platform. From that perspective, therefore, a Spring application isn’t portable because there’s nothing to port it to. A Spring program packaged as a .war file, on the other hand, could conceivably be fully portable across Jakarta EE platform implementations provided that it carries all of its baggage with it (i.e. a Jakarta EE server will almost certainly not have any Spring libraries preinstalled). At this point, though, it just collapses into being a Jakarta EE application, so see above.

Is a DropWizard application portable? No. There’s nothing to port it to. There aren’t two implementations of a hypothetical DropWizard platform.

Is a Java application portable? Well, yes, but trivially so, and at a different sort of level. You can indeed run a Java program on different operating systems, and Java is a platform, so therefore a Java program is portable across operating systems. But given that we’re talking about portability across platforms, this foundational level of portability isn’t very interesting for this article.

Is a MicroProfile application portable? No, because there is no such thing as a MicroProfile platform, so there’s nothing to port it to. There are things that use MicroProfile APIs and even implement them but there’s no standard way to make some kind of hypothetical MicroProfile application and somehow run it in all of them.

Is an arbitrary binary portable? No; we’ve already covered that above.

If I make a binary using GraalVM’s native image facility, is the resulting binary portable? No, for the same reasons.

Is a Quarkus application portable? No; it’s just a binary. There’s nothing to port it to.

Is a Helidon SE application portable? No; it’s just a Java program that uses some libraries. There’s nothing to port it to.

Is a Helidon MP application portable? No; it too is just a Java program that uses some libraries, some of which happen to be partially specified. There’s nothing to port it to.

Is an OpenLiberty application portable? To the extent that it is a Jakarta EE application, yes; to the extent that it is not, no.

Is a Payara application portable? Same answer: to the extent that it is a Jakarta EE application, yes; to the extent that it is not, no.

Is an Oracle WebLogic Server application portable? Same answer.

OK, there are a lot of “no”s above. That’s not to say component and knowledge portability isn’t in play across the board. Some arbitrary examples:

  • A CDI component can be portable between CDI SE-, Helidon MP-, MicroProfile- and Jakarta EE-based applications
  • A Spring component can be portable between Spring applications
  • A JAX-RS resource class can be portable between DropWizard-, Helidon MP-, MicroProfile- and Jakarta EE-based applications
  • A component that uses org.eclipse.microprofile.config.Config is portable to any library or application that has MicroProfile Config available to it

…and so on.

More to come that may tie back to this article.

On Platforms

I’m primarily (as always) talking Java, here. This post has a larger purpose but I’m not there yet. If all goes according to plan this post will make sense with some others to follow. At the moment it probably doesn’t make much sense. If that’s your thing, read on. You may also be interested in the next post in this series.

What is a platform?

If I have one library with one class in it, with one function, do I have a platform? What about two functions? Two classes? Two libraries?

In my opinion, it depends on what it does.

Maybe I’m naïve, but I think a platform has to have a definition of what a packaged application looks like, a way to create such a packaged application (or at least verbiage about how to do it), and a way to run a packaged application that is handed to it. It may or may not also include a framework or libraries that the application may or must use.

You write and package an application to run on a platform, optionally (or not) using the platform’s facilities, and the application and the platform are distinct entities and have notionally different lifespans (that may of course nevertheless line up).

How the packaged application is expected to look and how it runs is dictated by the platform. Also, in most cases, but I suppose not all, a platform dictates how the application must look but not how it is produced.

So Docker, for example, is a platform, because, among other things, it tells you what a packaged application is, one way among several to package your application (docker build) and how to run it (docker run). Docker itself doesn’t really include any kind of framework that your application would use: your application is responsible, wholly, for doing whatever it wants to do. Here you can start to argue semantics about things like volume mounts and whatnot if you so choose, and if you do that I will start checking my phone and waiting for you to leave.

Is Kubernetes a platform? Unquestionably, it seems to me, by the same logic. (Also, anything with that many metric tons of YAML is probably a platform by fiat, or at least a ZIP code.)

Jakarta EE (and Java EE before it) is a platform because it tells you how to package your application (mvn {war plugin incantations}) such that a very specific ZIP file with a very specific filename suffix is created, or at least what an application so packaged must look like, and how to run (deploy) it. Well, hmm; no, it doesn’t really tell you how to run such an application, although it does say that any compliant Jakarta EE implementation must allow the user to perform a deploy operation on a standard packaged application. So more accurately: any given Jakarta EE implementation is a platform: you hand it a standard application, and its tooling can run that application.

Is CDI SE a platform? No. It is a framework. A framework is a library that, should it be poked through some other means, will arrange for your framework extension or participant to be called back (and hence started or poked, whatever that might mean, as well). CDI SE will look for certain classpath resources, but that’s not really packaging, exactly, and you can turn it off anyway, so ultimately you are the one doing the starting or poking of the framework and telling it what, in turn, to start or poke itself. Once you take care of that one way or another, your CDI SE framework extension or participant (your application) will do whatever the framework says it should do. So: framework yes, platform no.

Is Spring a platform? Yes and no. It is probably fundamentally a framework by the same logic, plus a collection of useful libraries you can use outside of the framework, plus some other stuff, plus a grand piano, some lead shot, the kitchen sink, a bank safe, and the fluff found in your front pockets. It also however does define a packaging format: if you build your Spring application a certain way, an executable archive pops out the other end that can be run by the Java platform (running it also relies on a very complicated custom classloader embedded into the tail end of the ZIP-formatted archive by Spring!) or by a suitable Jakarta EE web container. So in that sense that part of Spring is a platform. On the other hand it leans on Java itself to do the heavy lifting if you execute one of its fat (and I do mean fat!) jars, so recognizing that there is a Main-Class to execute is done by Java itself, not by Spring.

Is DropWizard a platform? No, not really. DropWizard, like Spring, is a collection of libraries: some that it defines and, unlike Spring, many that come from other places. It recommends, but does not define, various packaging formats. It wants you to run your program as a plain old Java program. Notably, DropWizard, while strictly speaking probably a framework, delegates to other frameworks (like Jetty) to do the heavy lifting. Bottom line: not a platform, but a collection of useful libraries and frameworks.

Is Java a platform? Yes. It defines how you package your application and how such packaged applications are run. Using the same command line tool (java) you can run several different applications (executable jar files, exploded class directories, JPMS modules) depending on what you supply to the tool. (Also Java is obviously a collection of useful libraries!)

So is Java running a program that is a Jakarta EE implementation that deploys a Jakarta EE application two platforms? Yes, but in that picture presumably only the Jakarta EE implementation would be the platform of interest.

Is MicroProfile a platform? No. It is a framework as currently constituted, kind of. Actually, it’s not even really that, since there’s no entry point to a MicroProfile application, so no real defined way to poke it or start it. Like DropWizard, it relies (exclusively) on other frameworks (like JAX-RS) to do the heavy lifting, which do define such entry points. Nor is there a notion of a MicroProfile application that you start in any way. Nor is there a notion of deployment. Nor is there any common way to package a MicroProfile application even if there were a notion of what one is. Instead, I suppose it is a collection of useful libraries, and in some cases only parts of useful libraries, where the other parts are off-limits in some currently unspecified way. To use the language of Jakarta EE, it is, as currently constituted, “little more than bundles of APIs with few or no tie-ins” to a (nonexistent) larger platform. Maybe this will change.

If I have some arbitrary executable binary in my hand and I ask my computer to run it, is there a platform involved? Of course at some level there is, but I personally stop here, so therefore, dear reader, you will too: yes, a Unix system is a platform that runs, say, ELF executables, but in that parlance I guess I would be interested only in ELF executables that run other packaged applications. So, for this discussion, no, if I have an arbitrary executable (such as a compiled and linked C program) that just runs and does its thing, it is not a platform.

If I make a binary using GraalVM’s native image facility, is the resulting binary a platform? No (unless I’ve deliberately created a binary to run well-defined packaged applications of some kind, of course). See the logic above.

So is Quarkus a platform? No, because it just results in single purpose binaries. Quarkus is, really, a kind of compiler, I guess.

Is Helidon SE a platform? No; it’s a collection of libraries.

Is Helidon MP a platform? No; it’s a MicroProfile implementation and you’ll see above that MicroProfile isn’t a platform.

Is OpenLiberty a platform? Weirdly, although they define a runtime that can run well-defined packaged applications, they define themselves as a framework. I disagree; they are a platform: you deploy a well-defined packaged application to a simple server. Platform all the way.

Is Payara a platform? Yes. Depending on the product in question, it is either a Jakarta EE implementation of some kind (and hence a platform; see above) or defines a way of running applications (it is a MicroProfile implementation, and although as noted above MicroProfile is not a platform a runtime that defines a packaging format that happens to use MicroProfile’s useful collection of libraries is thus itself a platform).

Is Oracle WebLogic Server a platform? Yes; among many other things it is a Jakarta EE implementation.

More to come that may tie back to this article.

CreationalContext Observations

Here are some random observations concerning CreationalContext, a funky little architecturally polluting blemish on the surface of CDI’s otherwise pretty good set of APIs. (I’ve written before on this little nugget.)

There is no documentation that says what a CreationalContext is. The class javadoc reads, in total:

Provides operations that are used by the Contextual implementation during instance creation and destruction.

So its purpose is exactly that of its two operations, one of which (push()) can be properly implemented as a no-op as we’ll see below. That means its purpose is solely to house the release() method.

To portably create a CreationalContext, you use BeanManager#createCreationalContext(Contextual). For the purposes of destroying dependent objects, which is the interface’s sole documented purpose, the supplied Contextual is never used.

A CreationalContext is architecturally tightly coupled to a Context implementation for the Dependent scope. If you implement one, you have to implement the other because there is no portable way for an arbitrary Context implementing the Dependent scope to indicate to a CreationalContext that a dependent object needs to be tracked for subsequent destruction by the release() method. But in Weld you cannot supply your own instance of a Context for the Dependent scope, because the Weld-supplied one is always active, and there can be at most one active Context for a scope, and there is no way to remove a Context. So therefore you cannot supply your own implementation of CreationalContext in Weld unless you couple it to Weld interfaces and abstract classes…in which case why are you supplying one in the first place?

Weld implements CreationalContext by constructing a tree of them: each one tracks dependent objects added by its child. This means that Weld’s CreationalContext implementation is also tightly coupled to Weld’s implementation of BeanManager: every time a contextual reference is acquired, whether via injection or programmatically, a new “child” CreationalContext is created. This tree structure is not necessarily needed to perform dependent object cleanup (since, for example, OpenWebBeans implements CreationalContext without such a tree structure). The result is that in Weld many CreationalContextImpl objects get created that do nothing.

push() and release() have nothing to do with each other. In fact you can pass the TCK by implementing push() as a no-op. Many developers you talk to think that these methods are related. Almost nobody knows how to use them properly. You are, it turns out, supposed to always (probably within a finally block) call release() as the last thing you do in a custom bean’s destroy() method. Otherwise it is possible that your program will leak memory.
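
In code, that advice looks roughly like this in a custom bean (MyService and its shutdown() method are made up; the destroy signature is the real one from Contextual):

@Override
public void destroy(MyService instance, CreationalContext<MyService> cc) {
  try {
    instance.shutdown(); // whatever cleanup the instance itself needs (hypothetical)
  } finally {
    if (cc != null) {
      cc.release(); // destroy dependent objects tracked by this CreationalContext
    }
  }
}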

release() means, simply, “destroy dependent objects tracked by this CreationalContext“. Of course it may not be exactly this CreationalContext, because it might be a tree of such CreationalContexts. Or maybe it’s “destroy all dependent objects reachable from the creation of whatever it was that caused this CreationalContext to come into existence”. No specification language indicates whether release() must be idempotent. Obviously it would sure be nice if it were, so CDI implementations tend to make it so.

Remember that according to the specification a CDI implementation can destroy an unreferenced dependent object at any point by any means for any reason, so strictly speaking release() isn’t really a method that should have ended up in the specification (it’s an implementation detail). It’s clearly convenient so maybe that’s why it ended up in here.

The only time you need a CreationalContext is when you know that a contextual instance is going to be created. If you know that a contextual instance already exists, then the CreationalContext will never be used by the Context#get(Contextual, CreationalContext) method.

I often wonder why instead of this strange API there wasn’t a DependentContext interface, extending Context, that would allow you to add and destroy dependent object hierarchies, since we already know that Dependent is a special scope. There’s probably a good reason but I can’t think of what it is at the moment.

CreationalContext Deep Dive

What is a CreationalContext in CDI?

A perfect example of how naming things is the hardest problem in software engineering, really. The only method that it exposes that anyone should really be concerned with, release(), is used at destruction time, and has nothing to do with creating anything.

Here’s how I would describe it:

A CreationalContext is a bean helper that automatically stores @Dependent-scoped contextual instances on behalf of some other bean that has references to them, and ensures that they are cleaned up when that bean goes out of scope.

That’s basically it.

Consider a bean, B, that has a reference to a @Dependent-scoped bean, D. In Java terms, it might have a field like this:

@Inject
private D d;

Now, if the Bean implementation that created this B contextual instance has its destroy(T, CreationalContext<T>) method called, then B will be destroyed. B, in other words, will be the first argument.

When B is destroyed, you want to make sure that its dependent objects are also destroyed. But you also don’t want to burden the programmer with this. That is, you don’t want to make the programmer have to jump through some hoops inside B‘s logic somewhere to say “oh, when I’m destroyed, make sure to arrange for destruction to happen on my d field”. That should just happen automatically.

To allow this to happen automatically, CDI locates this logic inside a CreationalContext (yes, a “creational” context, even though we’re doing destruction. Naming is hard.). At the moment a CreationalContext is released, it has “in” it:

  • A bean that it is helping (usually)
  • A set of dependent instances that belong to the bean that it is helping
  • For each of those dependent instances some way to tie it to the Contextual (the Bean) that created it

When release() is called, the CreationalContext iterates over that set of dependent instances and their associated Beans and calls destroy() on each of those Beans.

So the programmer didn’t have to do any of this work. She just @Injected a D into a field and even if D is some sort of custom object it gets destroyed properly.

(It all follows from this that release() implementations must be idempotent and pretty much should never be called except from within a destroy() method of a custom bean. They also must work on only the bean that is being helped, and not on every dependent object “known” to the CreationalContext.)

OK, that’s all fine, but how did all these objects get “into” the CreationalContext in the first place?

When B was created, via a Bean‘s create(CreationalContext<T>) method, the container supplied that method with a new, empty CreationalContext that is associated with the Bean doing the creating. That is, prior to the create call, the container called beanManager.createCreationalContext(beanThatIsCreatingBInstances), and the resulting CreationalContext is supplied to beanThatIsCreatingBInstances‘s create method as its sole argument.
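
In other words, the container effectively does something like this (a sketch; the names come from the example above, not from any real container’s internals):

CreationalContext<B> ccForB = beanManager.createCreationalContext(beanThatIsCreatingBInstances);
B b = beanThatIsCreatingBInstances.create(ccForB); // ccForB rides along so dependent objects can be tracked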

What does a custom Bean author here need to do with this CreationalContext as she implements the create method? The answer is: ignore it completely. That’s easy.

(More to the point: push(Object) does not, as you might be tempted to believe, stuff a @Dependent-scoped object into the CreationalContext such that release() will have any effect on it. The two methods are completely orthogonal. Around this point you should start getting suspicious: how does a dependent object get “into” an arbitrary CreationalContext anyway? An excellent question.)

In the case of managed beans—ordinary CDI beans, with @Inject annotations and whatnot processed by the container without any special funny business—remember that the container will take care of satisfying the injection points. So in the case of B with a D-typed d field injection point, the container will arrange for a D-type-producing bean to be invoked and then will automatically arrange for that dependent object to be stuffed into the CreationalContext.

That’s a lot to take in. Let’s try to break it down.

Recall that the container created a brand new CreationalContext to serve as the bean helper for B when it is about to call B‘s create method.

In order to “make” a B, the container is going to have to satisfy its D-typed injection point (the d field in our example).

To satisfy the D-typed injection point, the container will need to find out what scope is in effect. It will discover that the scope is @Dependent (since our example says so; presumably D is annotated with @Dependent which allows the container to call BeanManager#getContext(Class<? extends Annotation>)).

With the right Context in hand, the container will ask it for an appropriate D instance. The method that the container uses here is Context#get(Contextual<T>, CreationalContext<T>). Here, the container does not create a new CreationalContext. It passes the CreationalContext it has made for creating the B instance. That’s important.
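
Sketched in code (illustrative only; ccForB is the CreationalContext from the sketch above, and the generics are loosened for brevity):

// A sketch of how the container satisfies B's D-typed injection point; not actual container code.
Bean<D> dBean = (Bean<D>) beanManager.resolve(beanManager.getBeans(D.class));
Context dependentContext = beanManager.getContext(Dependent.class);
D d = dependentContext.get(dBean, (CreationalContext) ccForB); // the same ccForB, not a new one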

A Context is responsible for doing basically whatever it wants to return an instance, so long as if a new instance is required it results (ultimately) from the return value of Contextual#create(CreationalContext<T>).

The @Dependent-scoped Context is obliged to create a new instance with every request, so it will dutifully invoke Contextual#create(CreationalContext<T>) and get back a new D instance. (If we pretend for just a moment that D is made by some kind of custom bean, the custom bean author never had to touch the CreationalContext when she implemented the create method. She probably just returned new D() or something similar.)

OK, so now the Context has possession of a new D object. But before it hands it back to the caller, it is going to stuff it in the supplied CreationalContext as a dependent instance. After all, the @Dependent-scoped Context always produces dependent objects that are tied to some “higher-order” bean, and we know that this is what CreationalContext instances are for: to store such dependent objects together with their referencing bean.

So it’s fine to say all this, but how does the @Dependent-scoped Context implementation actually add a dependent object to the CreationalContext? We’ve already seen the push(Object) method is not for this purpose.

It does it via proprietary means.

Weld, for example, does it via casting. This can get interesting since a user can supply her own @Dependent-scoped Context implementation: in order to add dependent objects herself she must tie her Context implementation to Weld.

You might think you could have your own @Dependent-scoped Context implementation that arranges for a CreationalContext to be used as a key into a Map of this kind of state. Then you wouldn’t be bound to a particular implementation of CDI. But of course if someone calls release() on a CreationalContext, you would have to somehow arrange to be notified of such a call, and that’s impossible.

So the upshot is that the CDI vendor, who returns CreationalContext implementations from BeanManager#createCreationalContext(Contextual<T>), is the only one who can supply any @Dependent-scoped Context implementations, no matter what the specification says.

Returning to what the end user should “do” with a CreationalContext: the answer is basically ignore it. If you are writing a custom Bean implementation, then as the last operation in your destroy implementation you can do this:

if (cc != null) {
  cc.release();
}

Otherwise, just leave the thing alone.

Decoding the Magic in Weld’s Instance Injection

I went down this rathole today and wanted to write it down.

In CDI, let’s say you have an injection point like this:

@Inject
@Flabrous // qualifier, let's say
private Instance<Frobnicator> flabrousFrobnicators;

The container is obligated to provide a built-in bean, whatever that is, that can satisfy any injection point whose raw type is Instance.

If you think about this for a moment or two you’ll realize that this is really weird. The container cannot possibly know “in advance” what injection points there will be, and so can’t actually create one bean for a @Default Instance<Frobnicator> and another for a @Flabrous Instance<Frobnicator>. So somehow its built-in bean has to be findable and appropriate for any possible combination of parameterized type (whose raw type is Instance) and sets of qualifiers.

Weld solves this problem by rewriting your injection point quietly on the fly (or at least this is one way to look at it). This was quite surprising and I was glad to finally find out how this machinery works.

For example, in the code above, as part of my injection point resolution request I have effectively said: “Hey, Weld, find me a contextual reference to a contextual instance of the appropriate bean found among all beans that are assignable to an Instance<Frobnicator>-typed injection point and that have the @Flabrous qualifier among their qualifiers.” Of course, Weld cannot actually issue the bean-finding part of this request as-is, because there is no such bean (how could it possibly pre-create an Instance<Frobnicator>-typed bean with @Flabrous among its qualifiers?). So how does this work, exactly? Something must be going on with @Any but it’s nowhere to be seen here and isn’t applied by default to injection points.

It turns out Weld recognizes a class of beans that they call façade beans for which all injection requests are effectively rewritten (during the bean-finding part of the resolution process). Instance is one kind; Event is another; Provider is another and so on—you can see why they’ve decided these are special sorts of things.

At any rate, when you ask for a façade bean, the request that is made for the bean itself uses only the @Any qualifier, no matter what you’ve annotated your injection point with. All beans, including built-in ones, have the @Any qualifier, so the one true container-provided Instance bean will be found. And there’s our answer.
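
You can poke at this from the outside, too. Here is a hypothetical lookup, assuming you have a BeanManager in hand (in a portable extension, say), the made-up Frobnicator type from above, and a CDI 2.0+ API level for Any.Literal; asking with only @Any, as Weld effectively does for façade beans, turns up the single built-in Instance bean no matter what other qualifiers are in play:

import java.util.Set;

import jakarta.enterprise.inject.Any;
import jakarta.enterprise.inject.Instance;
import jakarta.enterprise.inject.spi.Bean;
import jakarta.enterprise.inject.spi.BeanManager;
import jakarta.enterprise.util.TypeLiteral;

final class FacadeLookupSketch {

  static Set<Bean<?>> findBuiltInInstanceBean(final BeanManager beanManager) {
    // Only @Any is supplied here, so the one true built-in Instance bean is found,
    // @Flabrous or not.
    return beanManager.getBeans(new TypeLiteral<Instance<Frobnicator>>() {}.getType(),
                                Any.Literal.INSTANCE);
  }

}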

OK, that’s fine, but in the example above now we have a qualifier, @Flabrous, that we actually want to use, or we wouldn’t have gone to all this trouble. How does that get applied, given that it is ignored in the bean sourcing part of the injection resolution request?

Weld has tricked its own innards into supplying what is technically an inappropriate bean—it pretended that we asked for an @Any-qualified Instance<Frobnicator> bean even though we didn’t—but now that it has it, it can ignore whatever qualifiers the bean bears (@Default and @Any, as it turns out, and none other) because they’re no longer relevant once the bean is found. All that matters now is contextual instance mechanics.

Because Instance and Event and Provider and other façade beans are required to be in @Dependent scope, it turns out that the current injection point is available and can be used internally by the bean itself to find out what qualifiers are in effect so that it can create an appropriate contextual instance. And that’s exactly what happens: the bean supplied by Weld is an extension of AbstractFacade which uses the injection point to determine what qualifiers are in effect.
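
You can see the same trick in user-land code. Here is a hypothetical @Dependent-scoped producer (not how Weld’s built-in bean is written, just an illustration of the “current injection point” machinery) that inspects the injection point it is currently satisfying:

import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.Set;

import jakarta.enterprise.context.Dependent;
import jakarta.enterprise.inject.Produces;
import jakarta.enterprise.inject.spi.InjectionPoint;

// Frobnicator and the selection logic are made up for this example.
class FrobnicatorProducer {

  @Produces
  @Dependent
  Frobnicator produceFrobnicator(final InjectionPoint injectionPoint) {
    // The type and qualifiers declared at the injection point this producer is
    // currently satisfying.
    final Type type = injectionPoint.getType();
    final Set<Annotation> qualifiers = injectionPoint.getQualifiers();
    return selectFrobnicator(type, qualifiers); // hypothetical helper
  }

  private Frobnicator selectFrobnicator(final Type type, final Set<Annotation> qualifiers) {
    throw new UnsupportedOperationException("illustration only");
  }

}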

This whole process is of course deeply weird and I’d imagine that it or derivative effects rely on a hard-coded list of façade beans somewhere. Sure enough, here’s an example of the sort of thing I mean.

Another way to approach this sort of thing might be to introduce a super-qualifier or something instead that says, hey, if a bean is qualified with this super-qualifier then it matches all qualifier comparison requests (which is really what’s going on here).

Anyway, I hate magic and am glad to have found out how this works!

ByteBuddy and private static final fields

Boy is this amazingly difficult. I’m writing it here so I won’t forget. I hope this helps someone else. Hopefully, too, there is a less verbose way to accomplish this.

The excerpt below uses ByteBuddy to do the equivalent of private static final MethodHandle gorp = MethodHandles.lookup().findStatic(TestPrivateStaticFinalFieldInitialization.class, "goop", MethodType.methodType(void.class));. When the test runs, *** goop shows up on the console at the end. I have a StackOverflow post in case this changes.

Awful formatting courtesy of your friends at WordPress:

// Excerpt from a JUnit Jupiter unit test whose class is named
// TestPrivateStaticFinalFieldInitialization. The imports it relies on:
//
//   import static org.junit.jupiter.api.Assertions.assertNotNull;
//
//   import java.lang.invoke.MethodHandle;
//   import java.lang.invoke.MethodHandles;
//   import java.lang.invoke.MethodType;
//   import java.lang.reflect.Field;
//   import java.util.Collections;
//
//   import net.bytebuddy.ByteBuddy;
//   import net.bytebuddy.description.field.FieldDescription;
//   import net.bytebuddy.description.method.MethodDescription;
//   import net.bytebuddy.description.modifier.FieldManifestation;
//   import net.bytebuddy.description.modifier.ModifierContributor;
//   import net.bytebuddy.description.modifier.Ownership;
//   import net.bytebuddy.description.modifier.SyntheticState;
//   import net.bytebuddy.description.modifier.Visibility;
//   import net.bytebuddy.description.type.TypeDescription;
//   import net.bytebuddy.dynamic.DynamicType;
//   import net.bytebuddy.implementation.MethodCall;
//   import net.bytebuddy.matcher.ElementMatchers;
//
//   import org.junit.jupiter.api.Test;

  @Test
  final void testAll() throws Throwable {
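    // First, obtain ByteBuddy MethodDescriptions for MethodHandles.Lookup#findStatic,
    // MethodHandles#lookup() and MethodType#methodType(Class); the generated type
    // initializer below will invoke them.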

    final MethodDescription findStaticMethodDescription = new TypeDescription.ForLoadedType(MethodHandles.Lookup.class)
      .getDeclaredMethods()
      .filter(ElementMatchers.named("findStatic"))
      .getOnly();
    
    final MethodDescription methodHandlesLookupMethodDescription = new TypeDescription.ForLoadedType(MethodHandles.class)
      .getDeclaredMethods()
      .filter(ElementMatchers.named("lookup"))
      .getOnly();

    final MethodDescription methodTypeMethodTypeMethodDescription = new TypeDescription.ForLoadedType(MethodType.class)
      .getDeclaredMethods()
      .filter(ElementMatchers.named("methodType")
              .and(ElementMatchers.isStatic()
                   .and(ElementMatchers.takesArguments(Class.class))))
      .getOnly();
    
    final ByteBuddy byteBuddy = new ByteBuddy();
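    // Generate a subclass of Object with a private static final synthetic MethodHandle
    // field named gorp, and a type initializer that sets that field to the result of
    // MethodHandles.lookup().findStatic(TestPrivateStaticFinalFieldInitialization.class,
    //   "goop", MethodType.methodType(void.class)).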
    DynamicType.Builder<?> builder = byteBuddy.subclass(Object.class);
    builder = builder
      .defineField("gorp", MethodHandle.class, Visibility.PRIVATE, Ownership.STATIC, SyntheticState.SYNTHETIC, FieldManifestation.FINAL)
      .invokable(ElementMatchers.isTypeInitializer())
      .intercept(MethodCall.invoke(findStaticMethodDescription)
                 .onMethodCall(MethodCall.invoke(methodHandlesLookupMethodDescription))
                 .with(new TypeDescription.ForLoadedType(TestPrivateStaticFinalFieldInitialization.class))
                 .with("goop")
                 .withMethodCall(MethodCall.invoke(methodTypeMethodTypeMethodDescription)
                                 .with(new TypeDescription.ForLoadedType(void.class)))
                 .setsField(new FieldDescription.Latent(builder.toTypeDescription(),
                                                        "gorp",
                                                        ModifierContributor.Resolver.of(Visibility.PRIVATE,
                                                                                        Ownership.STATIC,
                                                                                        SyntheticState.SYNTHETIC,
                                                                                        FieldManifestation.FINAL).resolve(),
                                                        TypeDescription.Generic.OfNonGenericType.ForLoadedType.of(MethodHandle.class),
                                                        Collections.emptyList())));
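    // Load the generated class, prove reflectively that the gorp field was initialized,
    // then invoke the MethodHandle it holds (which prints *** goop).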
    final Class<?> newClass = builder.make().load(Thread.currentThread().getContextClassLoader()).getLoaded();
    final Field gorpField = newClass.getDeclaredField("gorp");
    gorpField.setAccessible(true);
    final MethodHandle methodHandle = (MethodHandle)gorpField.get(null);
    assertNotNull(methodHandle);
    methodHandle.invokeExact();
  }

  public static final void goop() {
    System.out.println("*** goop");
  }

On Terminology

The hardest thing in software engineering is naming things.

We have some conventions, but not a lot. Many of those conventions come from design patterns. For example, we have builders and adapters and factories and visitors and so on.

But there are strikingly few conventions about how to name other things. For example, when implementing an interface that consists of a single method that can return something either new or old, what should we call it? The JDK has settled on the term Supplier, which maybe is fine, but then the method is called get, rather than supply. Does get really capture what a Supplier does? Again, naming things is hard.

As another example, sometimes factories assemble things out of raw materials—and then simply return what they’ve assembled, over and over. Is that actually what a factory does? No, it is not. Naming things is hard.

My own personal dictionary includes these concepts and I try to use them very carefully in my own software:

  • A supplier may create something or return a single instance of a prefabricated something, or switch arbitrarily. I avoid producer or provider since they don’t really convey why something is being retrieved or made: when something is supplied, by contrast, it is because there is a need, and the supplying fulfills the need.
  • A factory always creates something.
  • If something is backed, then it is implemented in terms of something else. This lets me do things with the adapter pattern but adapter is a terrible word that doesn’t tell you what is being adapted to what and hence which aspect is more primal.
  • If something is default, then it is usually a straightforward something that can be extended or overridden or otherwise made more complicated or performant or interesting. I try to avoid simple, since simplicity should be an emergent property, not something legislated.
  • I try to avoid the word provision since for very strange reasons in the computer industry it often means to create something out of thin air, rather than its English meaning, which is to stock. (When you provision your pantry, you don’t build the pantry, you put cans on its shelves.)
  • Priority is always always always largest-number-wins. Unlike most bug systems I’ve worked with, in English the highest priority problem is the one deserving the most attention. (If you want smallest-number-wins, you’re probably looking for rank. Avoid the use of ordinal entirely since many projects use it to mean priority, and others use it to mean something roughly akin to an indicator of which item should come out of an array first.)
  • An arbiter is something that takes in two or more inputs that may have some ambiguity (or not) and performs arbitration on them, selecting a single output for further processing.
  • If I am tempted to use the word module in any way, shape or form, then I know I have failed spectacularly in every possible way. Something that is a module is inscrutable, which is a fancy way of saying that you really have no idea what it is. Component is rarely any better. Feature is even worse.
  • Name is usually an indication that I haven’t thought the problem domain through well enough. Same goes for description. Both have inherent localization issues as well.
  • A facet is a selection of notional attributes of some other thing that go together. This is a nice terminology pattern to use to keep your classes small and to encourage composition.
  • Helper is right out. Same goes for util or utils or utility. If I am tempted to write any of these, I write crap instead so that it is quite clear that that is what I am creating.
  • In the realm of configuration, a setting is a name. A value or a setting value is a value for a setting. When you have many settings, you have many names, not many values, or at least you’re not talking about the values. (I deliberately try to avoid configuration and property since these are massively overloaded and confusing: is configuration a bunch of something, or just one thing? Is a property a name-value pair, or just a name? Or just a value?)

I’m sure there’s more where this came from. What are some of your terminology systems?

ByteBuddy and Proxies

Here’s another one that I am sure I’m going to forget how to do so I’m writing it down.

ByteBuddy is a terrific little tool for working with Java bytecode.  It, like many tools, however, is somehow both exquisitely documented and infuriatingly opaque.

ByteBuddy works with a domain-specific language (DSL) to represent the world of manipulating Java bytecode at runtime.  For a, uh, seasoned veteran (yeah, let’s go with that) like me, grappling with the so-called fluent API is quite difficult.  But I’ve figured out that everything is there if you need it.  You just need the magic recipe.  Sometimes even with the help of an IDE the magic recipe is akin to spellcasting.

So here is the magic recipe for defining a runtime proxy that forwards certain method invocations to the return value of a method that yields up the “real” object being proxied:

import net.bytebuddy.description.modifier.Visibility;
import net.bytebuddy.dynamic.DynamicType;
import net.bytebuddy.implementation.FieldAccessor;
import net.bytebuddy.implementation.MethodCall;
import static net.bytebuddy.implementation.MethodCall.invoke;
import static net.bytebuddy.matcher.ElementMatchers.named;
DynamicType.Builder<?> builder = …; // acquire the builder, then:
builder = builder
  .defineField("proxiedInstance", theClassBeingProxied, Visibility.PRIVATE) // (1)
  .implement(new DefaultParameterizedType(null, Proxy.class, theClassBeingProxied)) // (2)
  .intercept(FieldAccessor.ofBeanProperty()) // (3)
  .method(someMatcher) // (4)
  .intercept(invoke(MethodCall.MethodLocator.ForInstrumentedMethod.INSTANCE) // (5)
             .onMethodCall(invoke(named("getProxiedInstance")))
             .withAllArguments());
// 1: Adds a field to the proxy class named proxiedInstance. It will hold the "real" object.
// 2: Proxy.class is a made-up interface defining getProxiedInstance()/setProxiedInstance(T),
// where T is the type of the thing being proxied; e.g. Proxy<Frob>.
// DefaultParameterizedType is a made-up implementation of java.lang.reflect.ParameterizedType.
// 3: Magic ByteBuddy incantation to implement the Proxy<Frob> interface by making two methods
// that read from and write to the proxiedInstance field just defined
// 4: Choose what methods to intercept here; see the net.bytebuddy.matcher.ElementMatchers class
// in particular
// 5: The serious magic is here. It means, roughly, "whatever the method the user just called,
// turn around and invoke it on the return value of the getProxiedInstance() method with all
// of the arguments the user originally supplied". That INSTANCE object is not documented
// anywhere, really; you just have to know that it is suitable for use here in this DSL
// "sentence".
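
For what it’s worth, here is a minimal sketch of what that made-up Proxy interface might look like; the property name has to line up with the proxiedInstance field so that FieldAccessor.ofBeanProperty() in step 3 wires the generated accessors to it:

// Hypothetical; pair this with a java.lang.reflect.ParameterizedType implementation
// (the made-up DefaultParameterizedType above) when handing it to implement(...).
public interface Proxy<T> {

  T getProxiedInstance();

  void setProxiedInstance(T proxiedInstance);

}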