Tag Archives: java

MicroBean Helm: Charts.install(URL) now works

I’ve just released another snapshot of my microbean-helm project.

The latest addition is a façade class named Charts.

Among other things, it functions as a simple entry point into the common Helm-like operations you might want to do from within your Java program or library.

Have a look at the install() methods.  These install a Helm chart into a Kubernetes cluster given a URL to the chart location in one relatively easy invocation.  Happy Helming from Java!

Starting CDI beans eagerly and portably

There are several web resources out there that describe how to get a CDI bean (usually ApplicationScoped) to be instantiated when the CDI container comes up.  Here’s an arbitrarily selected one from Dan Allen:

Let’s look at the extension in that gist.

The AfterDeploymentValidation event is the only event a portable extension can observe in a portable manner that indicates that the container is open for business. It is guaranteed to fire last in the dance that the container performs as it starts up.

In that extension, then, you can see that it has saved off the beans it has discovered that have been annotated with a Startup annotation (which isn’t defined in the gist, so it could be javax.ejb.Startup or a Startup annotation that Dan has defined himself). Then, for these beans, it uses the BeanManager’s getReference() method to get an actual Java object representing those beans.

So what’s the toString() call for?

The short answer is: this is the only method that could conceivably cause the container-managed client proxy that is the object reference here to call the actual constructor of the actual underlying object that the user and everyone else is actually interested in.

For more detail, let’s look at the specification, section 6.5.3, which concerns contextual references—the return value of the getReference() method above—and which reads in part:

If the bean has a normal scope [e.g. not Dependent or Singleton], then the contextual reference for the bean is a client proxy, as defined in Client proxies, created by the container, that implements the given bean type and all bean types of the bean which are Java interfaces.

OK, so whatever the getReference() method returns is a client proxy. That means it’s going to forward any method invocation it receives to the object that it’s proxying (known as the bean’s contextual instance). CDI containers try to be efficient, so that object may not have been created yet—the proxy, in other words, might be lazy. So here, the contextual reference returned is a client proxy of type Object, and the toString() invocation causes the client proxy to realize that oh, hey, the underlying contextual instance of whatever bean this is doesn’t yet exist, and to therefore invoke the “real” constructor, so that the “real” object (the contextual instance) can return something from its toString() method.

As a happy side effect, the container is of course the one causing the instantiation of the “real” object, so it is going to perform dependency injection, invoke initializer methods, invoke any PostConstruct-annotated methods, and so on.  Presto: you have a portable way for your bean to be instantiated when the container comes up.

(Side note: toString() in particular is used because there is a little nugget at the bottom of the Client Proxies section of the specification that says, innocuously (emphasis mine):

The behavior of all methods declared by java.lang.Object, except for toString(), is undefined for a client proxy. Portable applications should not invoke any method declared by java.lang.Object, except for toString(), on a client proxy.

File that one away: among other things that might mean don’t put your CDI-managed objects in Maps or Collections, since doing so will use client proxy equals(Object) and hashCode() methods!)
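CDI aside, the underlying hazard is easy to reproduce with a plain java.lang.reflect.Proxy. The Greeter interface and the naive forwarding handler below are my own illustration, not CDI machinery, but the failure mode is the same: two proxies for the same underlying object don’t compare equal, so a Set happily holds both.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashSet;
import java.util.Set;

// Not CDI, but the same hazard in miniature: two dynamic proxies around the
// SAME underlying object are not equal to each other, because equals() is
// forwarded to the target, which sees an unfamiliar proxy as its argument.
public class ProxyEqualsDemo {

  public interface Greeter { String greet(); }

  public static Greeter proxyFor(final Greeter target) {
    final InvocationHandler handler = (proxy, method, args) -> method.invoke(target, args);
    return (Greeter) Proxy.newProxyInstance(
        Greeter.class.getClassLoader(), new Class<?>[] { Greeter.class }, handler);
  }

  public static void main(final String[] args) {
    final Greeter real = () -> "hi";
    final Greeter p1 = proxyFor(real);
    final Greeter p2 = proxyFor(real);
    final Set<Greeter> set = new HashSet<>();
    set.add(p1);
    set.add(p2);
    System.out.println(set.size()); // prints 2: one "bean", two set entries
  }
}
```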

So.  If you’re feeling like this whole thing is a bit of a steaming hack—you call toString(), and a whole lot of very sophisticated magic happens (!)—I can’t disagree.

Fortunately, there’s a better way.

Instead of the toString() hack above, you can do the same thing in the extension in a more sanctioned, more explicit manner.

The first thing to understand is: who is doing the actual creation?  If we can understand that, then we can understand how to do it eagerly and idiomatically.

The ultimate answer is: the bean itself, via the create() method it implements from the Contextual interface. So if you ever get a Bean in your hand (or any other Contextual), you can call create() on it all day long and get new instances (this should sound scary, because it is).  For example, suppose you have an ApplicationScoped-annotated bean in your hand in the form of a Bean object.  If you call create() on it three times, you will create three instances of this object, hopefully surprising and gently horrifying you. Those instances may be wrapped in proxies (for interception and such), but they won’t be wrapped in client proxies.  Don’t do this!

Still, scary or not, we’ve found the method that some part of the container somewhere at some point will invoke when a contextual instance is needed (to further wrap in a contextual reference). But obviously when you inject an ApplicationScoped-annotated bean into, say, three different locations in your code somewhere, everything behaves as though there’s only one instance, not three. So clearly the container is not running around rampantly calling create() every time it needs an object.  It’s acquiring them from somewhere, and having them created if necessary.

The container is actually calling a particular get() method on the bean’s associated Context, the machinery that implements its scope. This method’s contract basically says that the Context implementation should decide whether to return an existing contextual instance, or a new one. So you can see that the Context underlying application scope will (hopefully!) return the One True Instance™ of the bean in question, whereas the Context implementation underlying some other scope may create new instances each time.

OK, so what do we know?  Let’s step back.

So, to eagerly instantiate beans at startup while respecting their scopes, our extension should do what a lazy contextual reference (client proxy) does when a toString() method is invoked on it: effectively ask the right Context for a contextual instance of the bean in question, creating it if necessary. This also allows custom-scoped beans to be instantiated eagerly, provided of course that the Context backing the custom scope is actually active.

If you have a BeanManager handy, and a Bean object of a particular type (say Bean<Object>), then you can get the relevant Context quite easily:

final Context context = beanManager.getContext(bean.getScope());

Then you need a CreationalContext, the somewhat opaque object that assists the Context with creation, should the Context decide that creation is necessary:

final CreationalContext<Object> cc = beanManager.createCreationalContext(bean);

Finally, we can ask the Context for a contextual instance directly:

final Object contextualInstanceNotContextualReference = context.get(bean, cc);

Note, as the variable name makes clear, this is a contextual instance, not a contextual reference!  This is not a client proxy!  So you really don’t want to use it after this point.

So if you replace Dan’s line 19 above with these code snippets, you will eagerly instantiate beans at startup without relying on magic side effects.
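Stitched together into a complete portable extension, the sanctioned approach might look like the following sketch. The class name, the use of ProcessBean to collect ApplicationScoped beans, and the instantiate() helper are my own illustrations; the BeanManager and Context calls are the ones shown above. It needs a CDI container to run, so treat it as a sketch, not a drop-in.

```java
import java.util.HashSet;
import java.util.Set;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.spi.Context;
import javax.enterprise.context.spi.CreationalContext;
import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.AfterDeploymentValidation;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.ProcessBean;

// A sketch: eagerly instantiate every ApplicationScoped bean once the
// container is open for business, without the toString() hack.
public class EagerStartupExtension implements Extension {

  private final Set<Bean<?>> eagerBeans = new HashSet<Bean<?>>();

  private <T> void processBean(@Observes final ProcessBean<T> event) {
    final Bean<T> bean = event.getBean();
    // Here you could instead look for your own Startup annotation.
    if (ApplicationScoped.class.equals(bean.getScope())) {
      this.eagerBeans.add(bean);
    }
  }

  private void onStartup(@Observes final AfterDeploymentValidation event, final BeanManager beanManager) {
    for (final Bean<?> bean : this.eagerBeans) {
      instantiate(beanManager, bean);
    }
  }

  private static <T> void instantiate(final BeanManager beanManager, final Bean<T> bean) {
    // Ask the bean's own Context for a contextual instance, creating it if necessary.
    final Context context = beanManager.getContext(bean.getScope());
    final CreationalContext<T> cc = beanManager.createCreationalContext(bean);
    context.get(bean, cc); // a contextual instance, not a contextual reference; deliberately unused
  }

}
```

You would register it, as with any portable extension, in a META-INF/services/javax.enterprise.inject.spi.Extension file.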

Java-EE-compatible AbstractExecutorService Implementation (Part 2)

Here is another take on my previous post, this time using a @Singleton EJB with bean-managed concurrency.  First, I introduce a new interface, essentially identical to Executor, but with the added restriction that execution must be asynchronous (while this is a common thing for “regular” Executors to do, it is not, strictly speaking, required of them).  I do this for the essential bits of @Asynchronous dispatching (once you see the code below you’ll see why).  While you could certainly use this as a business interface, I wouldn’t really expect you to.  Here’s the AsynchronousExecutor interface:

public interface AsynchronousExecutor {

  public void executeAsynchronously(final Runnable runnable);

}

And now the re-done ExecutorServiceBean:

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
@Local({Executor.class, ExecutorService.class, AsynchronousExecutor.class})
public class ExecutorServiceBean extends AbstractExecutorService implements ExecutorService, AsynchronousExecutor {

  @Resource
  private SessionContext sessionContext;

  @Override
  public void execute(final Runnable runnable) {
    if (runnable != null) {
      if (this.sessionContext != null) {
        // Go back through the container so that @Asynchronous dispatch applies.
        final AsynchronousExecutor self = this.sessionContext.getBusinessObject(AsynchronousExecutor.class);
        assert self != null;
        self.executeAsynchronously(runnable);
      } else {
        // No container context; run synchronously as a last resort.
        runnable.run();
      }
    }
  }

  @Asynchronous
  @Override
  public void executeAsynchronously(final Runnable runnable) {
    if (runnable != null) {
      runnable.run();
    }
  }

  @Override
  public boolean awaitTermination(final long timeout, final TimeUnit unit) {
    return false;
  }

  @Override
  public boolean isTerminated() {
    return false;
  }

  @Override
  public boolean isShutdown() {
    return false;
  }

  @Override
  public void shutdown() {
    // A no-op: the container, not callers, manages this bean's lifecycle.
  }

  @Override
  public List<Runnable> shutdownNow() {
    return Collections.emptyList();
  }

}

I made a few changes:
  • First, I took advantage of the fact that AbstractExecutorService actually arranges all the work for you such that really all you have to override is the Executor#execute(Runnable) method.  Specifically, I no longer override the AbstractExecutorService#submit(Callable) method.
  • Second, I made sure that the invocation of the execute(Runnable) method is not itself declared to be @Asynchronous, but delegates this call to the executeAsynchronously(Runnable) method, which is declared to be @Asynchronous.  This is because the innards of the AbstractExecutorService class call the execute(Runnable) method directly.  Since the asynchronous behavior of an @Asynchronous-annotated method can only be accomplished if the control flow progresses through the container, this cumbersome construct is necessary.  One big assumption I made here is that access to the SessionContext object is already properly synchronized by the container, so I performed no additional locking here.  I didn’t find anything in the EJB specification to support or contradict this assumption.
  • The bean is now a @Singleton with bean-managed concurrency.  Even though it has bean-managed concurrency, there are no explicit synchronization primitives anywhere, because they are locked up (ha ha) inside the AbstractExecutorService innards, which do whatever they do, in disturbingly precise and awesome Doug Lea fashion.
  • The bean’s set of @Local business interfaces now includes AsynchronousExecutor.class, so that the container dispatch by way of SessionContext#getBusinessObject(Class) can work.  Like I said earlier, you could use this as a business interface, but it probably would be excessively narrow.
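The first of those points is easy to verify outside any container. In the following plain-Java sketch, the class name and the thread-per-task execute() are mine, standing in for the container’s @Asynchronous dispatch; only execute(Runnable) does real work, and submit() comes along for free from AbstractExecutorService:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Only execute(Runnable) does real work; AbstractExecutorService supplies
// submit(), invokeAll(), etc. on top of it.
public class MinimalExecutorService extends AbstractExecutorService {

  @Override
  public void execute(final Runnable command) {
    // Stand-in for the container's @Asynchronous dispatch:
    // run each task on a fresh thread.
    new Thread(command).start();
  }

  // The lifecycle methods are no-ops, just as in the bean above.
  @Override public void shutdown() {}
  @Override public List<Runnable> shutdownNow() { return Collections.emptyList(); }
  @Override public boolean isShutdown() { return false; }
  @Override public boolean isTerminated() { return false; }
  @Override public boolean awaitTermination(final long timeout, final TimeUnit unit) { return false; }

  public static void main(final String[] args) throws Exception {
    final MinimalExecutorService executor = new MinimalExecutorService();
    final Future<Integer> result = executor.submit(() -> 6 * 7); // inherited submit(Callable)
    System.out.println(result.get()); // prints 42
  }

}
```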
Thanks for all the comments and input.

Thread safety and PermissionCollection

I just made an interesting discovery while implementing the PolicyConfiguration#addToUncheckedPolicy(PermissionCollection) method.

Notice anything unusual about the contract? No?

Consider this: PermissionCollection subclasses need to be thread-safe, as the Javadoc says:

Subclass implementations of PermissionCollection should assume that they may be called simultaneously from multiple threads, and therefore should be synchronized properly.

OK, so when implementing this method, you might take the naïve approach, as I did initially. The incoming PermissionCollection is thread-safe, so there's no need to synchronize on anything. Hopefully you're cringing at my haste:

private final PermissionCollection uncheckedPermissions = new Permissions();

public void addToUncheckedPolicy(final PermissionCollection permissionCollection) {
  if (permissionCollection != null && this.uncheckedPermissions != null) {
    // Look, ma, no synchronization!

    final Enumeration<Permission> elements = permissionCollection.elements();
    if (elements != null) {
      while (elements.hasMoreElements()) {
        final Permission permission = elements.nextElement();
        if (permission != null && !this.uncheckedPermissions.implies(permission)) {
          this.uncheckedPermissions.add(permission);
        }
      }
    }
  }
}

You could probably get away with this. After all, you know that the uncheckedPermissions member variable is guaranteed to be thread-safe (java.security.Permissions instances, like all PermissionCollection subclasses, must be "synchronized properly"). Surely there's nothing more to worry about?

Wrong. JACC makes no guarantees one way or the other about what is happening to the incoming PermissionCollection that you're adding in this method. This PermissionCollection could be being modified by some other thread while you're processing it. So your elements() call (your enumeration of the individual Permission instances "inside" that PermissionCollection) will be inconsistent and broken.

OK, you think (I think), I'll just synchronize on the incoming PermissionCollection object. But there's nothing in the PermissionCollection documentation that indicates that PermissionCollection objects must synchronize on themselves during modification operations (such as add()). They can synchronize on whatever they want and are under no obligation to tell you what that mutex is going to be. Nine times out of ten you're going to be handed a Permissions instance, of course, and if you go look at the source, yes, indeed, the PermissionCollection implementation inside it synchronizes on itself. But it certainly doesn't have to, and you shouldn't rely on it.

In the absence of further guarantees, all you can do is add the incoming PermissionCollection to a set of such PermissionCollections, and then, when you actually need to check permissions, walk through the set one by one and call implies() on each PermissionCollection.
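A minimal sketch of that approach follows. The UncheckedPolicy class and the impliesUnchecked() method are names I've invented for illustration; only addToUncheckedPolicy() comes from the JACC PolicyConfiguration contract:

```java
import java.security.Permission;
import java.security.PermissionCollection;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Never enumerate the incoming PermissionCollection; just remember it and
// delegate to its own (properly synchronized) implies() at check time.
public class UncheckedPolicy {

  private final List<PermissionCollection> uncheckedCollections =
    new CopyOnWriteArrayList<PermissionCollection>();

  public void addToUncheckedPolicy(final PermissionCollection permissionCollection) {
    if (permissionCollection != null) {
      this.uncheckedCollections.add(permissionCollection); // no elements() call anywhere
    }
  }

  public boolean impliesUnchecked(final Permission permission) {
    for (final PermissionCollection pc : this.uncheckedCollections) {
      if (pc.implies(permission)) {
        return true;
      }
    }
    return false;
  }

}
```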

The takeaway here is that in any code that enumerates a PermissionCollection, you're probably doing it wrong unless you have control of "both sides" of the PermissionCollection: unless you control both when Permissions are added to it and when they are enumerated.


Good Javadocs

One thing I believe very strongly in is good Javadocs.

It's one of the first places that I go when I need to understand how to use a library. I don't attempt to understand how to use a library through my IDE's autocomplete or through the source code. I often end up in the source code, but that's usually because the API has not been well designed or well documented.

Good Javadocs are very hard to put together, because as soon as you begin writing them you come face-to-face with the weaknesses in your API. So you change the API to correct the problem, and now your Javadocs need to be torn up. Many developers stop there and throw the Javadocs out, or turn up their noses at them because they're not a very agile thing to do. But to me they're absolutely essential: with them and unit testing, you can put together some decent APIs pretty quickly, or, more accurately, as quickly as you should be putting them together. :-) Good APIs take time, and the best ones explain themselves.

Here are some of the things that I do when writing Javadocs:

  • For any class name, I wrap it with the {@link} tag, whenever and wherever it appears. Think about that. This helps keep the reader in the flow: they can always turn down the styles in their browser or take other measures to get rid of blue underlines (so I don't worry about too many links), but having the ability to quickly link to the class under discussion is invaluable.
  • For any identifier or keyword, I wrap it with the {@code} tag. This is just following good typographical style.
  • For every parameter, I document whether null is permitted or not. This forces me to figure out what to do with nulls in every situation. Sometimes I will realize that accepting nulls is a perfectly reasonable thing to do (more often than not, in fact; but this is a stylistic question that every developer has a strong opinion on).
  • For every @return tag, I document whether the method can return null or not. Every single one.
  • For personal software, I use the @since tag and supply the month, day, and year. That helps keep me honest and shows what order I've developed things in.
  • I always try to use the @author tag with my hyperlinked email address.
  • I try to use (and wish I were better at using) the @see tag to establish a rough trail through the API.
  • I try to hyperlink portions of explanatory documentation to the methods and fields that the documentation refers to, {@linkplain Object#equals(Object) as I do here in this example about the <tt>Object#equals(Object)</tt> method}. Note the use of the {@linkplain} tag and the nested <tt>s.
  • I have a standard boilerplate Subversion-friendly Javascript hairball that I use for the @version tag (I use keyword expansion of $Revision$):

@version <script type="text/javascript"><!--
document.write("$Revision: 0.0 $".match(/\d+\.\d+/)[0]);
--></script><noscript>$Revision: 0.0 $</noscript>
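Putting several of these conventions together, a documented method might look like this. The Names class, the normalize() method, and the @since date are all invented for illustration:

```java
/**
 * A hypothetical utility class illustrating the conventions above.
 */
public final class Names {

  /**
   * Returns a {@linkplain String#trim() trimmed} version of the supplied
   * {@code name}.
   *
   * @param name the {@link String} to normalize; may be {@code null}, in
   * which case {@code null} is returned
   *
   * @return the trimmed name, or {@code null}
   *
   * @see String#trim()
   *
   * @since June 1, 2011
   */
  public static String normalize(final String name) {
    return name == null ? null : name.trim();
  }

}
```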

Hope this helps you think about how you document your own code.