A Blog Post A Day

I’ve got some interesting things cooking in the personal projects department related to Java, CDI 2.0, configuration and Maven, and I’m going to blog about them with (I hope!) a post (or several!) a day for a little bit.

We’ll start with configuration.

You may recall my earlier posts on the subject.

Now my ramblings have code behind them!  See MicroBean Configuration API, MicroBean Configuration and MicroBean Configuration CDI.

MicroBean Configuration API is the Java API that defines my notion of configuration coordinates.

MicroBean Configuration is a Java SE implementation of that API.

MicroBean Configuration CDI is a CDI extension that uses MicroBean Configuration to expose configuration values to CDI environments.

I’ll have more to say about these (early days!) projects and others over the coming days.  Thanks for reading.


Calling Maven Artifact Resolver From Within CDI 2.0

So I’ve been continuing to play with my CDI-and-linking idea, and central to it is the ability to locate CDI “modules” Out There In The World™.  The world, in this case, is Maven Central.  Well, or perhaps a local mirror of it.  Or maybe your employer’s private Nexus repository fronting it.  Oh jeez, we’re going to have to really use Maven’s innards to do this, aren’t we?

As noted earlier, even just figuring out what innards to use is hard.  So I figured out that the project formerly known as Æther, Maven Artifact Resolver, whose artifact identifier is maven-resolver, is the one to grab.

Then, upon receiving it and opening it up, I realized that the whole thing is driven by Guice—or, if you aren’t into that sort of thing, by a homegrown service locator (which itself is a service, which leads to all sorts of other Jamie Zawinski-esque questions).

The only recipes left over are from the old Æther days and require a bit of squinting to make work.  They are also staggeringly complicated.  Here’s a gist that downloads the (arbitrarily selected) org.microbean:microbean-configuration-cdi:0.1.0 artifact and its transitive, compile-scoped dependencies, taking into account local repositories, the user’s Maven ~/.m2/settings.xml file, active Maven profiles and other things that we all take for granted:
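The gist is long; a condensed sketch of the shape of that recipe, with the settings.xml, mirror and profile handling that accounts for most of its bulk omitted, looks something like this (class names are from maven-resolver; treat the details as approximate):

```java
import org.apache.maven.repository.internal.MavenRepositorySystemUtils;
import org.eclipse.aether.DefaultRepositorySystemSession;
import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.artifact.DefaultArtifact;
import org.eclipse.aether.collection.CollectRequest;
import org.eclipse.aether.connector.basic.BasicRepositoryConnectorFactory;
import org.eclipse.aether.graph.Dependency;
import org.eclipse.aether.impl.DefaultServiceLocator;
import org.eclipse.aether.repository.LocalRepository;
import org.eclipse.aether.repository.RemoteRepository;
import org.eclipse.aether.resolution.DependencyRequest;
import org.eclipse.aether.spi.connector.RepositoryConnectorFactory;
import org.eclipse.aether.spi.connector.transport.TransporterFactory;
import org.eclipse.aether.transport.http.HttpTransporterFactory;
import org.eclipse.aether.util.artifact.JavaScopes;
import org.eclipse.aether.util.filter.DependencyFilterUtils;

// Wire up the homegrown service locator by hand.
final DefaultServiceLocator locator = MavenRepositorySystemUtils.newServiceLocator();
locator.addService(RepositoryConnectorFactory.class, BasicRepositoryConnectorFactory.class);
locator.addService(TransporterFactory.class, HttpTransporterFactory.class);
final RepositorySystem system = locator.getService(RepositorySystem.class);

// A session carries the local repository, proxies, mirrors and so on.
final DefaultRepositorySystemSession session = MavenRepositorySystemUtils.newSession();
final LocalRepository localRepo =
  new LocalRepository(System.getProperty("user.home") + "/.m2/repository");
session.setLocalRepositoryManager(system.newLocalRepositoryManager(session, localRepo));

// Ask for the artifact and its transitive compile-scoped dependencies.
final CollectRequest collectRequest = new CollectRequest();
collectRequest.setRoot(
  new Dependency(new DefaultArtifact("org.microbean:microbean-configuration-cdi:0.1.0"),
                 JavaScopes.COMPILE));
collectRequest.addRepository(
  new RemoteRepository.Builder("central", "default", "https://repo1.maven.org/maven2/").build());
system.resolveDependencies(
  session,
  new DependencyRequest(collectRequest, DependencyFilterUtils.classpathFilter(JavaScopes.COMPILE)));
```

The real gist is several times longer, because everything this sketch hardcodes (the local repository location, the remote repository list) actually has to be read out of the user’s Maven settings.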

That seems like an awful lot of work to have to do just to get some stuff over the wire.  It also uses the cheesy homegrown service locator which as we all know is not The Future™.

For my purposes, I wanted to junk the service locator and run this from within a CDI 2.0 environment, both because it would be cool and dangerous and unexpected, and because the whole library was written assuming dependency injection in the first place.

So I wrote a portable extension that basically does the job that the cheesy homegrown service locator does, but deferring all the wiring and validation work to CDI, where it belongs.

As if this whole thing weren’t hairy enough already, a good number of the components involved are Plexus components.  Plexus was a dependency injection framework and container from a while back that had its own notion of what constituted beans and injection points.  It called them components and requirements.

So many of the internal Maven objects are annotated with Component and Requirement.  These correspond roughly—very, very roughly—to bean-defining annotations and injection points, respectively.

So I wrote two portable extension methods.  One uses the role element from Component to figure out what kind of Typed annotation to add to Plexus components.  The other turns a Requirement annotation with a hint into a valid CDI injection point with an additional qualifier.
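A sketch of what the first of those observer methods might look like (this is not the actual microbean code; the extension and literal names here are mine, and the real thing handles hints, scopes and more):

```java
import javax.enterprise.event.Observes;
import javax.enterprise.inject.Typed;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.ProcessAnnotatedType;
import javax.enterprise.inject.spi.WithAnnotations;
import javax.enterprise.util.AnnotationLiteral;

import org.codehaus.plexus.component.annotations.Component;

public class PlexusComponentExtension implements Extension {

  // For every type annotated with Plexus' @Component, add a @Typed
  // annotation restricting its bean types to the declared role.
  private <T> void processPlexusComponent(
      @Observes @WithAnnotations(Component.class) final ProcessAnnotatedType<T> event) {
    final Component component = event.getAnnotatedType().getAnnotation(Component.class);
    if (component != null) {
      event.configureAnnotatedType().add(new TypedLiteral(component.role()));
    }
  }

  // CDI 2.0 lets you manufacture annotation instances via AnnotationLiteral.
  private static final class TypedLiteral extends AnnotationLiteral<Typed> implements Typed {
    private final Class<?>[] value;
    private TypedLiteral(final Class<?>... value) { this.value = value; }
    @Override public Class<?>[] value() { return this.value; }
  }
}
```

The Requirement-handling observer is similar in spirit: it inspects the hint and arranges for a matching qualifier annotation to appear on the injection point.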

(Along the way, these uncovered MNG-6190, which indicates that not very many people are even using the Maven Artifact Resolver project in any way, or at least not from within a dependency injection container, which is, of course, how it is designed to be used.  That’s a shame, because although it is overengineered and fiddly to the point of being virtually inscrutable, it is, as a result, perhaps, quite powerful.)

Then the rest of the effort was split between finding the right types to add into the CDI container, and figuring out how to adapt certain Guice-ish conventions to the CDI world.

The end result is that the huge snarling gist above gets whittled down to five or so lines of code, with CDI doing the scope management and wiring for you.
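Those five or so lines look something like this (a sketch; exactly which bean types you can select comes from the microbean-maven-cdi extension, so treat the specifics as assumptions):

```java
import javax.enterprise.inject.se.SeContainer;
import javax.enterprise.inject.se.SeContainerInitializer;

import org.eclipse.aether.RepositorySystem;

try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
  // The portable extension has taught CDI about the resolver's innards,
  // so the RepositorySystem arrives fully wired; no service locator needed.
  final RepositorySystem repositorySystem = container.select(RepositorySystem.class).get();
  // ...build a CollectRequest/DependencyRequest and resolve, as before...
}
```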

This should mean that you can now relatively easily incorporate the guts of Maven into your CDI applications for the purposes of downloading and resolving artifacts on demand.  See the microbean-maven-cdi project for more information.  Thanks for reading.

Maven and the Project Formerly Known As Æther

I am hacking and exploring my ideas around CDI and linking and part of whatever that might become will be Maven artifact resolution.

If, like me, you’ve been working with Maven since the 1.0 days (!), you may be amused by the long, tortured path the dependency resolution machine at its core has taken over the years.

First, there was Maven artifact resolution baked into the core.

Then Jason decided that it might be better if this were broken off into its own project.  So it became Sonatype Æther (Sonatype being the company he was running at the time) and rapidly grew to incorporate every feature under the sun except an email client.

Then people got very enamored of Eclipse in general, and so Æther, without any changes, got moved to the Eclipse Foundation.  Then the package names changed and a million trillion confused developers tried to figure things out on StackOverflow.

Then it turned out that no one other than Maven itself (and maybe some of Jason’s companies’ projects) was using Æther in either its Sonatype or its Eclipse form, so Eclipse Æther has just recently been—wait for it—folded back into Maven.  Of course, the Eclipse Æther page doesn’t (seem to?) mention this, and if you try to go to its documentation page then at least as of this writing you get a 500 error.  If you hunt a little bit harder, you find another related page that finally tells you the whole thing has been archived.

So anytime you see the word Æther you should now think Maven Artifact Resolver.

But make sure that you don’t therefore think that the name of the actual Maven artifact representing this project is maven-artifact-resolver, because that is the old artifact embodying the 2009 version of Maven’s dependency resolution code!  The new artifact name for the Maven Artifact Resolver project is simply maven-resolver.  Still with me?

Fine. So you’ve navigated all this and now you want to work with Maven Artifact Resolver (maven-resolver), the project formerly known as Some Kind of Æther.

If, like me, you want to use the machinery outside of Maven proper, then you are probably conditioned, like me, to look for some sort of API artifact. Sure enough, there is one.  (Note that for all the history I’ve covered above, the package names confusingly still start with org.eclipse.aether.)

Now you need to know what implementation to back this with.  This is not as straightforward as it seems.  The Maven project page teases you with a few hints, but it turns out that merely using these various other Maven projects probably won’t get you where you want to go, which in most cases is a Maven-like experience (reading from .m2/settings.xml files, being able to work with local repositories, and so on).  (Recall that the project formerly known as Some Kind of Æther and now known as Maven Artifact Resolver expanded to become very, very flexible at the expense of being simple to use: it can read from repositories that aren’t Maven repositories at all, so it inherently knows nothing about Maven, even though it has now been folded back into the Maven codebase.)

Fortunately, in the history of all this, there was a class called MavenRepositorySystemUtils.  If you search for this, you’ll find its javadocs, but these are not the javadocs you’re looking for.  Let’s pretend for a moment they were: you would, if you used this class, be able to get a RepositorySystem rather easily:

import org.apache.maven.repository.internal.MavenRepositorySystemUtils;
import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.spi.locator.ServiceLocator;

// Build the pre-wired service locator, then ask it for the resolver entry point.
final ServiceLocator serviceLocator = MavenRepositorySystemUtils.newServiceLocator();
assert serviceLocator != null;
final RepositorySystem repositorySystem = serviceLocator.getService(RepositorySystem.class);
assert repositorySystem != null;

But these javadocs are for a class that uses Eclipse Æther, so no soup for you.

So now what?  It turns out that that class still exists and has been refactored to use Maven Artifact Resolver (maven-resolver), but its project page and javadocs and whatnot don’t exist (yet?).  The artifact you’re looking for, then, turns out to be maven-resolver-provider, and as of this writing exists only in its 3.5.0-alpha-1 version.
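In pom.xml terms, that means something like this (the version is current as of this writing):

```xml
<dependency>
  <groupId>org.apache.maven</groupId>
  <artifactId>maven-resolver-provider</artifactId>
  <version>3.5.0-alpha-1</version>
</dependency>
```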

So if you make sure that artifact is available then you’ll be able to use the Maven Artifact Resolver APIs to interact with Maven repositories and download and resolve dependencies.

Concurrent Testing

Finally had some time at JavaOne to sit down and whack away at something that I have never had the time to explore thoroughly: parallel JUnit testing using the Maven Surefire plugin.

The executive overview is that you can run your JUnit tests in parallel in a number of different ways, and it’s worth understanding them all thoroughly so you don’t inadvertently structure your tests in such a way that parallelism is impossible.

To start with, let’s look at the general architecture of a normal, non-exotic JUnit test as run by Surefire.

Unless you’ve done something unusual, you likely have a class or two lying around in your src/test/java tree named TestCaseSomethingOrOther.java. And in that test class you likely have various methods annotated with @Test.

If you run this with Surefire 2.16 out of the box, you’ll note that the following things happen in order:

  1. The Maven JVM forks exactly one additional JVM. Its settings are taken from the maven-surefire-plugin:test goal’s argLine property (or defaulted).
  2. The Surefire JVM just forked loads your test class and instructs JUnit to run it.
  3. JUnit runs any @BeforeClass-annotated methods in your class.
  4. For each test method:
    1. JUnit creates a new instance of your test class.
    2. JUnit applies any @Rule-annotated public fields in your test class.
    3. JUnit runs any @Before-annotated methods in your test class.
    4. JUnit executes your test method.
    5. JUnit runs any @After-annotated methods in your test class.
  5. JUnit runs any @AfterClass-annotated methods in your class.

…and that’s it.

It’s important to note that a new instance of your test class is created for each test method that is run. File that away for a moment.
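You can see the consequence of that per-method instantiation with a small plain-Java sketch (no JUnit involved; the class and names are mine) that mimics what the runner does:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Mimics JUnit's lifecycle: a brand-new test-class instance per test method,
// while static state is shared across all of them.
class LifecycleDemo {

    // Shared across every instance, like static state in a real test class.
    static final AtomicInteger constructed = new AtomicInteger();

    // Fresh per instance, like instance fields in a real test class.
    int instanceCalls;

    LifecycleDemo() {
        constructed.incrementAndGet();
    }

    void testMethod() {
        instanceCalls++;
    }

    public static void main(final String[] args) {
        // JUnit-style: one new instance per "test method".
        for (int i = 0; i < 3; i++) {
            final LifecycleDemo demo = new LifecycleDemo();
            demo.testMethod();
            System.out.println("instanceCalls=" + demo.instanceCalls); // always 1
        }
        System.out.println("constructed=" + constructed.get());
    }
}
```

Each “test method” sees a fresh instance, so instance state starts clean every time, while the static counter accumulates across all of them.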

Now, as regards parallelism, we can control the number of threads that are dedicated to running JUnit test methods, and we can control the number of processes that can run these threads.

First, let’s look at the processes. We can control these in Surefire using the forkCount and reuseForks properties.

I don’t know about you, but these terms were somewhat confusing. forkCount is the maximum number of forked JVMs that can be running in parallel at any time. It says nothing about the total number of these forked JVMs that might exist over time. reuseForks makes it so that the number of total operating system processes spawned over time is either governed (and equal to the forkCount) or ungoverned. So if you want to ensure that only two processes, period, are created by Surefire, then you want <forkCount>2</forkCount> and <reuseForks>true</reuseForks>. If, on the other hand, you don’t really care how many processes Surefire ends up spawning and killing, but you want to make sure that no more than two are running at the same time, then you want <forkCount>2</forkCount> and <reuseForks>false</reuseForks>.
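As a pom.xml fragment, the “at most two processes, period” configuration looks like this (a sketch of the plugin stanza):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.16</version>
  <configuration>
    <!-- At most two forked JVMs running at any time... -->
    <forkCount>2</forkCount>
    <!-- ...and reuse them, so at most two processes ever exist. -->
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```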

Normally you want to reuse forks.  The only case I can think of where you wouldn’t is if your tests exercise some kind of static singleton or something that has state inside it that you can’t reset.  If that’s true, then reusing forks means that lingering state could pollute subsequent tests.

Next, let’s look at methods.

You can parallelize test methods at the thread level, but not at the process level.

A JUnit test method, as we’ve seen, is conceptually equivalent to a constructor invocation, some setup work, the method invocation and some teardown work. There’s no way to instruct Surefire to somehow create a new JVM process as well for this; process parallelism stops at the class level. (Creating a separate process for each test method invocation would be a little nuts; if you really need that you can create JUnit test classes that take great care to have only one test method in them. You could probably do something else too with JUnit suites.)

So we’re looking at threads when we’re looking at parallelizing JUnit test methods. You can indicate that you want Surefire to run your test methods in parallel by using the aptly-named parallel property. Together with the threadCount, perCoreThreadCount and useUnlimitedThreads properties, you can control how many threads are spawned to run test methods.
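A sketch of what that configuration might look like in the Surefire plugin stanza (the values are illustrative):

```xml
<configuration>
  <!-- Run test methods, not just classes, in parallel. -->
  <parallel>methods</parallel>
  <!-- threadCount is interpreted per CPU core when perCoreThreadCount is true. -->
  <perCoreThreadCount>true</perCoreThreadCount>
  <threadCount>4</threadCount>
</configuration>
```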

Note that if you have more than one thread rummaging around in your test class, the static information (if any) your tests share needs to be thread safe.  It is best, of course, to avoid having any such mutable static information at all.

Recall as well that a test method invocation is also semantically a constructor invocation, so, oddly enough, while your static information must be thread safe, your instance information does not have to be.

Regarding things like database connections and whatnot, you want to make sure that your test methods are as isolated as humanly possible—or at least that (if they are not) they take great care to lock on shared resources.

The maven-ear-plugin and the ear packaging type and why several goals get run

When you give a Maven project the ear packaging type, you pick up the lifecycle bindings that this packaging type defines.

Among other things, the maven-ear-plugin replaces the maven-jar-plugin at this point, so it is as though you had declared it explicitly in your pom.xml.

The generate-application-xml goal binds by default to the generate-resources phase.

The ear goal binds by default to the package phase.

So at the end of the day by declaring a packaging type of ear, you cause two goals—not one—to be run.
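In other words, declaring the packaging is roughly shorthand for binding those goals yourself, something like this (the execution ids and layout are illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-ear-plugin</artifactId>
  <executions>
    <execution>
      <id>default-generate-application-xml</id>
      <phase>generate-resources</phase>
      <goals><goal>generate-application-xml</goal></goals>
    </execution>
    <execution>
      <id>default-ear</id>
      <phase>package</phase>
      <goals><goal>ear</goal></goals>
    </execution>
  </executions>
</plugin>
```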