Maven…and CDI?!

I may or may not have gotten the dependency resolution mechanism of Maven to work inside a CDI container.  More information forthcoming.


Concurrent Testing

Finally had some time at JavaOne to sit down and whack away at something that I have never had the time to explore thoroughly: parallel JUnit testing using the Maven Surefire plugin.

The executive overview is that you can run your JUnit tests in parallel in a number of different ways, and it’s worth understanding them all thoroughly so you don’t inadvertently structure your tests in such a way that parallelism is impossible.

To start with, let’s look at the general architecture of a normal, non-exotic JUnit test as run by Surefire.

Unless you’ve done something unusual, you likely have a class or two lying around in your src/test/java tree named TestCaseSomethingOrOther.java. And in that test class you likely have various methods annotated with @Test.

If you run this with Surefire 2.16 out of the box, you’ll note that the following things happen in order:

  1. The Maven JVM forks exactly one additional JVM. Its settings are taken from the maven-surefire-plugin:test goal’s argLine property (or defaulted).
  2. The Surefire JVM just forked loads your test class and instructs JUnit to run it.
  3. JUnit runs any @BeforeClass-annotated methods in your class.
  4. For each test method:
    1. JUnit creates a new instance of your test class.
    2. JUnit applies any rules declared in @Rule-annotated public fields in your test class.
    3. JUnit runs any @Before-annotated methods in your test class.
    4. JUnit executes your test method.
    5. JUnit runs any @After-annotated methods in your test class.
  5. JUnit runs any @AfterClass-annotated methods in your class.

…and that’s it.

It’s important to note that a new instance of your test class is created for each test method that is run. File that away for a moment.

Now, as regards parallelism, we can control the number of threads that are dedicated to running JUnit test methods, and we can control the number of processes that can run these threads.

First, let’s look at the processes. We can control these in Surefire using the forkCount and reuseForks properties.

I don’t know about you, but I found these terms somewhat confusing. forkCount is the maximum number of forked JVMs that may be running in parallel at any given time. It says nothing about the total number of forked JVMs that might exist over time. reuseForks determines whether the total number of operating system processes spawned over time is governed (and equal to forkCount) or ungoverned. So if you want to ensure that only two processes, period, are ever created by Surefire, you want <forkCount>2</forkCount> and <reuseForks>true</reuseForks>. If, on the other hand, you don’t really care how many processes Surefire ends up spawning and killing, but you want to make sure that no more than two are running at the same time, you want <forkCount>2</forkCount> and <reuseForks>false</reuseForks>.
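As a sketch, the second scenario (bounded concurrency, but an ungoverned total number of processes over time) would look something like this in the plugin configuration:

```xml
<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.16</version>
  <configuration>
    <!-- At most two forked JVMs alive at any moment... -->
    <forkCount>2</forkCount>
    <!-- ...but each fork is discarded after use, so the total
         number of processes spawned over time is unbounded. -->
    <reuseForks>false</reuseForks>
  </configuration>
</plugin>
```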

Normally you want to reuse forks. The only case I can think of where you wouldn’t want that is when your tests exercise some kind of static singleton with internal state that you can’t reset. If so, reusing forks means the state of that singleton could pollute subsequent tests.

Next, let’s look at methods.

You can parallelize test methods at the thread level, but not at the process level.

A JUnit test method, as we’ve seen, is conceptually equivalent to a constructor invocation, some setup work, the method invocation and some teardown work. There’s no way to instruct Surefire to somehow create a new JVM process as well for this; process parallelism stops at the class level. (Creating a separate process for each test method invocation would be a little nuts; if you really need that you can create JUnit test classes that take great care to have only one test method in them. You could probably do something else too with JUnit suites.)

So we’re looking at threads when we’re looking at parallelizing JUnit test methods. You can indicate that you want Surefire to run your test methods in parallel by using the aptly-named parallel property. Together with the threadCount, perCoreThreadCount and useUnlimitedThreads properties, you can control how many threads are spawned to run test methods.
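A sketch of what such a configuration might look like (the particular values here are illustrative, not recommendations):

```xml
<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Run the test methods within each class on parallel threads. -->
    <parallel>methods</parallel>
    <!-- With perCoreThreadCount set to true, threadCount is
         interpreted per available core: here, 4 threads per core. -->
    <perCoreThreadCount>true</perCoreThreadCount>
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```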

Note that if you have more than one thread rummaging around in your test case, any static information your test has needs to be thread safe. It is best, of course, to avoid having any such mutable static information at all.

Recall as well that a test method invocation is also semantically a constructor invocation, so, oddly enough, while your static information must be thread safe, your instance information does not have to be.

Regarding things like database connections and whatnot, you want to make sure that your test methods are as isolated as humanly possible—or at least that (if they are not) they take great care to lock on shared resources.

The maven-ear-plugin and the ear packaging type and why several goals get run

When building a Maven artifact of type ear, the ear packaging type is invoked.

The maven-ear-plugin replaces the maven-jar-plugin at this point, so it is as though you included it in your pom.xml.

The generate-application-xml goal binds by default to the generate-resources phase.

The ear goal binds by default to the package phase.

So at the end of the day by declaring a packaging type of ear, you cause two goals—not one—to be run.
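In other words, a pom.xml that declares nothing but the packaging (the coordinates here are hypothetical) gets both goals bound for free:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0.0</version>
  <!-- This one line binds maven-ear-plugin:generate-application-xml
       to generate-resources and maven-ear-plugin:ear to package. -->
  <packaging>ear</packaging>
</project>
```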

Getting Jenkins Running On A Mac

I wanted to blog about how to get Jenkins running on a Mac using its installer.

Jenkins is a great product, but its frenetic and crazed (but cheerful and enthusiastic) development process often shows through.

Case in point: the default Mac installer, which you can download from the jenkins-ci.org website, sets up Jenkins to run as a Mac LaunchDaemon running as user daemon.

Now, there's nothing inherently wrong with this; indeed, it can be quite nice. You'll only have one instance of Jenkins running, no user needs to be logged on for it to do its thing, and if Jenkins ever got hacked you're running as a low-privilege user rather than as some kind of full-fledged user with the ability to ruin your day.

However, this caused some weird problems, nullifying the entire intent of a one-click installer. These problems manifest themselves the moment you try to run a Maven build, which suggests to me that this (simple) smoke test is simply not run before new versions of the installer are released. Oh well, time to roll up our sleeves and turn the one-click installation process into an exercise in Mac system administration. 🙂

What's wrong with daemon?

The first thing to know about user daemon is that his home directory is /var/root. That should start to give you a funny feeling.

The reason it should give you a funny feeling is that Maven looks for its settings.xml file in $HOME/.m2. Which of course does not exist in /var/root.

So when Jenkins launches, it appears to come up fine. But if you try to run a Maven build, you'll get a lovely stack trace about how the file /var/root/.m2 couldn't be created.

When I first encountered this error, I just wanted to get the stupid thing working, so I did:


sudo mkdir -p /var/root/.m2

…and:


sudo chmod a+rwx /var/root/.m2

So this gets Jenkins-running-as-daemon past this problem, but now it wants to create temporary files in /Users/Shared/Jenkins/Home, which it doesn't own, and can't write to.

At any rate, I now realized that I didn't want this thing running as user daemon anyway, because I didn't want him doing anything to /var/root. And even if I could somehow tell him to use a different home directory so that $HOME/.m2/settings.xml would be resolved somewhere else, it was clear that I was going to have to edit .plist files. So, so much for the installer. And as long as the installer wasn't going to work, I decided that I wanted to make Jenkins run as a different kind of daemon user anyway.

This turned out (for this rookie Mac system administrator) to be quite difficult.

The steps involved are:

  1. Create a daemon user (I called mine _jenkins)
  2. Create a daemon group (I called mine, surprise!, _jenkins)
  3. Put the daemon user in the newly-created daemon group
  4. Create the home directory for the new daemon user (/Users/_jenkins in my case)
  5. Chown the /Users/Shared/Jenkins directory so that its hierarchy is owned by your new user
  6. Edit /Library/LaunchDaemons/org.jenkins-ci.plist so that it reflects all this information

Creating the user is a task that should not be accomplished through the usual Mac GUI methods. You need to use dscl instead. This is because you want to create a daemon user. I snooped around for a bit and came up with this lovely tutorial: http://www.minecraftwiki.net/wiki/Tutorials/Create_a_Mac_OS_X_startup_daemon#The_hard_.28and_correct.29_way. It walked me through steps 1-4 above.

Then I did:


sudo chown -R _jenkins:_jenkins /Users/Shared/Jenkins

Finally, my /Library/LaunchDaemons/org.jenkins-ci.plist looks like this:


<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>EnvironmentVariables</key>
    <dict>
      <key>JENKINS_HOME</key>
      <string>/Users/Shared/Jenkins/Home</string>
      <key>_JAVA_OPTIONS</key>
      <string>-Dfile.encoding=UTF-8</string>
    </dict>
    <key>GroupName</key>
    <string>_jenkins</string>
    <key>KeepAlive</key>
    <true/>
    <key>Label</key>
    <string>org.jenkins-ci</string>
    <key>ProgramArguments</key>
    <array>
      <string>/bin/bash</string>
      <string>/Library/Application Support/Jenkins/jenkins-runner.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>UserName</key>
    <string>_jenkins</string>
  </dict>
</plist>

I added the _JAVA_OPTIONS environment variable to force UTF-8 encoding. This is because no matter what kind of encoding you might specify in your Java code, Java-on-the-Mac's character encoding for what gets put out to the terminal is MacRoman by default (?!). You have to get the file.encoding property passed into the JVM early enough that it is picked up by the rest of the JVM internals, and the only way to do that is to use the special _JAVA_OPTIONS environment variable picked up by all the Java tools in $JAVA_HOME/bin. The only unfortunate side effect of all this is that you get a warning printed to the screen on every JVM startup that says, effectively and incomprehensibly, that it is using the environment variable you told it to.

Once you've done all this, you can simply stop the launch daemon and it will automatically restart with the new values:


sudo launchctl stop org.jenkins-ci

I hope that helps other Jenkins Mac users out.

jpa-maven-plugin Released

I'm pleased to announce to my enormous reading audience (both of you) the jpa-maven-plugin project.

Please peruse the documentation, the Javadocs and then finally try using it and let me know what you think.

Running JPA tests, part 2

(This is part 2.  Have a look at part 1 before you continue, or this won’t make much sense.)

Now it turns out that all of the JPA providers except Hibernate (this is going to sound familiar after a while) really really really really want you to enhance or instrument or weave your entity classes.

First we’ll cover what this is, and then mention the different ways you might go about it.  Then I’ll pick one particular way and show you how to do it.

JPA entities need to be enhanced to enable things like lazy loading and other JPA-provider-specific features.  The JPA provider might, for example, need to know when a particular property of your entity has changed.  Unless the specification were to have mandated things like PropertyChangeListeners on all properties (which, thankfully, it didn’t), there isn’t any way for the provider to jump in and be notified when a given property changes.

Enter weaving or enhancement (I’ll call it weaving, following EclipseLink’s term).  Weaving is the process where–either at build time or runtime–the JPA provider gets into your classes, roots around, and transforms them using a bytecode processor like Javassist or CGLIB.  Effectively, the JPA provider rewrites some of your code in bytecode so that the end result is a class that can now magically inform the JPA provider when certain things happen to its properties.

Weaving can be done at build time, as I said, or at runtime.

Now, if you’re like most Java EE developers, the reason you’ve never had to deal with weaving is that the Java EE specification requires all JPA providers in a Java EE container to do weaving silently in the background during deployment (if it hasn’t been done already).  So in a JPA 2.0 container like Glassfish or the innards of JBoss, weaving happens automatically when your persistence unit is discovered and deployed.

But the specification does not mandate that such automatic weaving take place when you’re not in a Java EE container.  And if you’re a good unit testing citizen, you want to make sure that your unit test has absolutely no extra layers or dependencies in it other than what it absolutely requires.

So in unit test land, you have to set this up (unless you want to drag in the testing machinery yourself, which is, of course, a viable option, but here we’re focusing on keeping the number of layers to a minimum).

When you go to set up weaving, you have to choose whether you want to do it at build time or at runtime.  I’ve chosen in all three cases to focus on build time weaving.  This has some happy side effects: if you do build-time weaving correctly, then not only do you get faster, more accurate unit tests, but if you perform that weaving in the right place then you can have Maven also deploy JPA-provider-specific versions of your entity classes for you automatically.  That, in turn, means you can install those jar files in your Java EE container and skip the dynamic weaving that it would otherwise have to perform, thus shortening startup time.
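One way to do that last step (a sketch only; the id and classifier names here are illustrative) is to have the maven-jar-plugin produce an extra, classified jar out of the woven-classes area during packaging:

```xml
<plugin>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <id>Package EclipseLink-woven classes</id>
      <phase>package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
      <configuration>
        <!-- The directory the weaver wrote to, not target/classes. -->
        <classesDirectory>${project.build.directory}/eclipselink/classes</classesDirectory>
        <!-- Produces a second artifact alongside the main jar,
             suffixed with -eclipselink. -->
        <classifier>eclipselink</classifier>
      </configuration>
    </execution>
  </executions>
</plugin>
```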

Now, all three JPA providers approach build time weaving in a different way (of course).  All three providers provide Ant tasks, but EclipseLink and OpenJPA also provide command line tools.  So we’ll make use of them where we can to avoid the Ant overhead wherever possible.

Regardless of which provider we’re talking about, weaving at build time involves the same necessary inputs:

  • A persistence.xml file somewhere.  This usually lists the classes to be weaved, as well as provider specific properties.  It isn’t (for this purpose) used to connect to any database.
  • The raw classes to be weaved.

Now, wait a minute.  If weaving alters the bytecode of your classes, then what happens if you try to use a Hibernate-woven class in an EclipseLink persistence unit?

Things blow up, that’s what.

This is where things get regrettably quite complicated.

Before we start weaving, we’re going to need to set up areas for each persistence provider where the weaving may take place.  To set these up, we’re going to step in after compilation and copy the output of plain compilation to each provider’s area.

In Maven speak, anytime you hear the word “copy” you should be thinking about the maven-resources-plugin.  We’re going to have to add that to our pom.xml and configure it to take the classes that result from compiling and copy them to an area for EclipseLink, an area for Hibernate and an area for OpenJPA.

Here is the XML involved for the EclipseLink copy.  This goes in the <plugins> stanza as per usual:

<plugins>
  <plugin>
    <artifactId>maven-resources-plugin</artifactId>
    <executions>
      <execution>
        <id>Copy contents of build.outputDirectory to EclipseLink area</id>
        <goals>
          <goal>copy-resources</goal>
        </goals>
        <phase>process-classes</phase>
        <configuration>
          <resources>
            <resource>
              <filtering>false</filtering>
              <directory>${project.build.outputDirectory}</directory>
            </resource>
          </resources>
          <outputDirectory>${project.build.directory}/eclipselink/classes</outputDirectory>
          <overwrite>true</overwrite>
        </configuration>
      </execution>
      <!-- and so on -->
    </executions>
  </plugin>
</plugins>

So during the process-classes phase–which happens after compilation has taken place–we copy everything in ${project.build.outputDirectory}, without filtering, to ${project.build.directory}/eclipselink/classes.  Most commonly, this means copying the directory tree target/classes to the directory tree target/eclipselink/classes.  This area will hold EclipseLink-woven classes that we can later–if we choose–pack up into its own jar file and distribute (with an appropriate classifier).

We’ll repeat this later for the other providers, but for now let’s just stick with EclipseLink.

Before we get to the actual weaving, however, there’s (already) a problem.  Most of the time in any reasonably large project your JPA entities are split up across .jar files.  So it’s all fine and good to talk about weaving entities in a given project, but what about other entities that might get pulled in?  What happens when weaving only happens on some classes and not others?  Unpredictable things, that’s what, so we have to make sure that at unit test time all our entities that are involved in the test–whether they come from the current project or are referred to in other .jar files–somehow get weaved.  This gets tricky when you’re talking about .jar files–how do you weave something in a .jar file without affecting the .jar file?

The answer is you don’t.  You have Maven unpack all your (relevant) dependencies for you, then move the component classes into an area where they, too, can be weaved, just like the entity classes from the current project.  Let’s look at how we’ll set those pom.xml fragments up.  You want to be careful here that these dependencies are only woven for the purposes of unit testing.

The first thing is to make use of the maven-dependency-plugin, which, conveniently enough, features the unpack-dependencies goal.  We’ll configure this to unpack dependencies into ${project.build.directory}/dependency (its default output location):

<plugins>
  <plugin>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>2.2</version>
    <executions>
      <execution>
        <id>Unpack all dependencies so that weaving, instrumentation and enhancement may run on them prior to testing</id>
        <phase>generate-test-resources</phase>
        <goals>
          <goal>unpack-dependencies</goal>
        </goals>
        <configuration>
          <includeGroupIds>com.someotherpackage,${project.groupId}</includeGroupIds>
          <includes>**/*.class</includes>             
        </configuration>
      </execution>
    </executions>
  </plugin>
</plugins>

Here you can see we specify which “group ids” get pulled in–this is just a means of filtering the dependency list.  You can of course alter this any way you see fit.  You’re trying to pull in any JPA entities that are going to be involved in your tests and make sure they get woven, so choose your group ids accordingly, and see the unpack-dependencies documentation for more tweaking you can do here.

So if you were to run mvn clean generate-test-resources at this point, the following things would happen:

  • Your regular classes would get compiled into target/classes.
  • Your regular resources would get copied into target/classes.
  • The entire contents of that directory would then get copied into target/eclipselink/classes.
  • Classes from certain of your dependencies would get extracted into target/dependency, ready for further copying.

Now we’ll copy the unpacked dependency classes into the test weaving area.  This little configuration stanza goes in our prior plugin declaration for the maven-resources-plugin:

<executions>
  <!-- other executions -->
  <execution>
    <id>Copy dependencies into EclipseLink test area</id>
    <goals>
      <goal>copy-resources</goal>
    </goals>
    <phase>process-test-resources</phase>
    <configuration>
      <resources>
        <resource>
          <filtering>false</filtering>
          <directory>${project.build.directory}/dependency</directory>
        </resource>
      </resources>
      <outputDirectory>${project.build.directory}/eclipselink/test-classes</outputDirectory>
      <overwrite>true</overwrite>
    </configuration>
  </execution>
</executions>

This is so that the dependencies can be woven with everything else–remember that you’ve got to make sure that all the entities in your unit tests (whether they’re yours or come from another jar involved in the unit test)–are woven.

We have two more bits of copying to do to get our classes all in the right place.  Fortunately they can be combined into the same plugin execution.

The first bit is that we have to take the classes that will have been woven in target/eclipselink/classes and copy them unmolested into the test area so that they can reside there with all the unpacked dependency classes.  This is to preserve classpath semantics.  That is, we’ve already laid down the dependencies inside target/eclipselink/test-classes, so now we need to overlay them with our woven entity classes (obviously once they’ve already been woven) to make sure that in the event of any naming collisions the same semantics apply as would apply with a normal classpath in a normal environment.  At the end of this we’ll have a target/eclipselink/classes directory full of our entity classes that are waiting to be woven, and a target/eclipselink/test-classes directory that will ultimately contain our woven classes as well as those from our dependencies.

The second bit is that since sometimes unit tests define their own entities, we have to make sure that the regular old target/test-classes directory gets copied into the EclipseLink test weaving area as well, and, moreover, we have to make sure this happens last so that any test entities “shadow” any “real” entities with the same name.

As I mentioned, we can accomplish both of these goals with one more execution in maven-resources-plugin:

<execution>
  <id>Copy contents of testOutputDirectory and contents of EclipseLink area to EclipseLink test area</id>
  <phase>process-test-classes</phase>
  <goals>
    <goal>copy-resources</goal>
  </goals>

  <configuration>
    <resources>
      <resource>
        <filtering>false</filtering>
        <directory>${project.build.directory}/eclipselink/classes</directory>
      </resource>
      <resource>
        <filtering>false</filtering>
        <directory>${project.build.testOutputDirectory}</directory>
      </resource>
    </resources>
    <outputDirectory>${project.build.directory}/eclipselink/test-classes</outputDirectory>
    <overwrite>true</overwrite>
  </configuration>
</execution>

Finally, you’ll recall that I said that there are two inputs needed for weaving:

  1. A persistence.xml file somewhere
  2. The raw classes to be woven

We’ve abused the maven-dependency-plugin and the maven-resources-plugin to get (2).  Now let’s look at (1).

The persistence.xml file that is needed by the EclipseLink weaver is really just used for its <class> elements and its <property> elements.  Pretty much everything else is ignored.  This makes a certain amount of sense: EclipseLink will use it as the definitive source for what classes need to be woven if you don’t tell it anything else, and a particular property (eclipselink.weaving) will instruct EclipseLink that indeed, weaving is to be done and is to be done at build time.

So we’ll put one of these together, and store it in src/eclipselink/resources/META-INF/persistence.xml:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
  <persistence-unit name="instrumentation" transaction-type="RESOURCE_LOCAL">
    <class>com.foobar.SomeClass1</class>
    <class>com.foobar.SomeClass2</class>
    <class>com.foobar.SomeClassFromSomeDependency</class>
    <properties>
      <property name="eclipselink.weaving" value="static" />
    </properties>
  </persistence-unit>
</persistence>

…and, back in our monstrous maven-resources-plugin stanza, we’ll arrange to have it copied:

<execution>
  <id>Copy EclipseLink persistence.xml used to set up static weaving</id>
  <goals>
    <goal>copy-resources</goal>
  </goals>
  <phase>process-classes</phase>
  <configuration>
    <outputDirectory>${project.build.directory}/eclipselink/META-INF</outputDirectory>
    <overwrite>true</overwrite>
    <resources>
      <resource>
        <filtering>true</filtering>
        <directory>src/eclipselink/resources/META-INF</directory>
      </resource>
    </resources>
  </configuration>
</execution>

It’s finally time to configure the weaving.  For EclipseLink, we’ll use the exec-maven-plugin, and we’ll go ahead and run the StaticWeave class in the same Maven process.  We will run it so that it operates in-place on all the classes in the EclipseLink test area.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <configuration>
    <includePluginDependencies>true</includePluginDependencies>
    <includeProjectDependencies>true</includeProjectDependencies>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>org.eclipse.persistence</groupId>
      <artifactId>org.eclipse.persistence.jpa</artifactId>
      <version>2.2.0</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>Statically weave this project’s entities for EclipseLink</id>
      <phase>process-classes</phase>
      <goals>
        <goal>java</goal>
      </goals>
      <configuration>
        <arguments>
          <argument>-persistenceinfo</argument>
          <argument>${project.build.directory}/eclipselink</argument>
          <argument>${project.build.directory}/eclipselink/classes</argument>
          <argument>${project.build.directory}/eclipselink/classes</argument>
        </arguments>
        <classpathScope>compile</classpathScope>
        <mainClass>org.eclipse.persistence.tools.weaving.jpa.StaticWeave</mainClass>
      </configuration>
    </execution>
    <!-- there will be other executions -->
  </executions>
</plugin>


This stanza simply runs the StaticWeave class, supplies it with (effectively) -persistenceinfo target/eclipselink as its first effective argument, and then tells it to work in place on the target/eclipselink/classes directory.

In part 3, we’ll put all of this together.

Running JPA tests

I’ve been trying to get to a place where I can achieve all the following goals:

  • Have my domain entities be pure JPA @Entity instances.
  • Run JUnit tests against those entities using the “big three” JPA providers (Hibernate, EclipseLink and OpenJPA).
  • Set up and tear down an in-memory database in a predictable way
  • Run the whole mess with Maven without any special JUnit code

I’m going to talk a bit about the second and last points.

To run a JPA test, you’re going to need an EntityManager.  And to get an EntityManager in a non-EJB environment, you’re going to need a JPA provider on the classpath.

You basically have three JPA providers to choose from: EclipseLink, Hibernate and OpenJPA.  These are the ones that are in wide use today, so they’re the ones I’m going to focus on.  You want to be able to back up any claim you make that your JPA entities will run under these big three providers, so to do that you need to make sure you’re unit testing them.

We’d like our tests to exercise our entities using each of these in turn.  Further, we’d like our tests to be run in their own process, with an environment as similar to the end environment as possible, while not including any extra crap.

So to begin with, we’re going to have to get Surefire (Maven’s test runner plugin) to run three times in a row, with a different JPA provider each time.

The first time I attempted this, I thought I’d use the @Parameterized annotation that comes with JUnit.  This annotation lets you set up a test class so that JUnit will run it multiple times with different input data.  I had set it up so that the input data was a String that identified the persistence unit and the JPA provider.  This worked fine, but the multiple times your test runs are not each in their own process.  As a result, you end up having all three JPA providers on the classpath at the same time, and various problems can result.  The whole solution was rather brittle.

Instead, we want to have Maven control the test execution, not some looping construct inside JUnit.

The first insight I had was that you can set up a plugin in a pom.xml file to run several times in a row.  This is, of course, blindingly obvious once you see it (if you’re used to staring at Maven’s XML soup), but it took me a while to realize it’s possible.

Here, for example, is a way to configure the maven-surefire-plugin to run three times (with no other configuration):

<build>
  <plugins>
  <plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.7.2</version>
    <configuration>
      <skip>true</skip>
    </configuration>
    <executions>
      <execution>
        <id>First Surefire run</id>
        <goals>
          <goal>test</goal>
        </goals>
        <phase>test</phase>
        <configuration>
          <skip>false</skip>
        </configuration>
      </execution>
      <execution>
        <id>Second Surefire run</id>
        <goals>
          <goal>test</goal>
        </goals>
        <phase>test</phase>
        <configuration>
          <skip>false</skip>
        </configuration>
      </execution>
      <execution>
        <id>Third Surefire run</id>
        <goals>
          <goal>test</goal>
        </goals>
        <phase>test</phase>
        <configuration>
          <skip>false</skip>
        </configuration>
      </execution>
    </executions>
  </plugin>
  <!-- other plugins here -->
  </plugins>
<!-- other build info -->
</build>

Run mvn test and you’ll see Surefire run three times.  (There’s actually a MUCH shorter way to accomplish this trivial three-run configuration, but it won’t aid our ultimate cause, so I’ve opted to stay verbose here.)

We’ve told Surefire that by default it should skip running.  Then we’ve provided it with three executions (identified with <id> elements so we can tell them apart).  Each execution is structured to run the test goal during the test phase, and tells Surefire that it should not skip.

It’s important to note that the <configuration> element, when contained in an <execution>, applies only to that <execution>, and overrides any related settings in the default <configuration> (housed immediately under the <plugin> element), if there is one.  We’re going to make heavy, heavy use of this fact.

So hopefully you can start to see that the looping of the test cases can be controlled by Maven.  We know that we’re going to run Surefire three times–one time for each JPA provider–and now we want to make sure that each iteration is in its own process.  Let’s enhance the default configuration of this skeletal block a bit:

<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <skip>true</skip>
    <forkMode>always</forkMode>
    <useFile>false</useFile>
  </configuration>

We’ve told Surefire to always fork, and (purely for convenience) told it to spit any errors to the screen, not to a separate file.  So this solves the test looping problem.  We have our skeleton.  On to part 2.
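To give each forked run a way to know which provider it is exercising, one approach (a sketch only; the jpa.provider property name is made up for illustration, and the real wiring will differ) is to hand each execution a different system property that the tests can read:

```xml
<execution>
  <id>EclipseLink Surefire run</id>
  <goals>
    <goal>test</goal>
  </goals>
  <phase>test</phase>
  <configuration>
    <skip>false</skip>
    <systemPropertyVariables>
      <!-- Hypothetical property a test could use to select its
           persistence unit; each execution sets a different value. -->
      <jpa.provider>eclipselink</jpa.provider>
    </systemPropertyVariables>
  </configuration>
</execution>
```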