4 September 2009

Running JUnit in Parallel

Back in 2008 I had to speed up our daily build. (I should have posted about it long ago, but I never got around to it. Recently a post on a similar topic stirred my bad conscience.) The first thing was to get a faster machine, something with four 3 GHz cores. It worked excellently: all file-based operations like compiling ran three times faster out of the box, thanks to the included RAID 0+1 disk array. As our automated tests took half of the total build time, I dealt with them first and applied the usual optimisations described in the tuning section of my talk about practical JUnit testing. That halved the JUnit execution time.

Good, but still not fast enough. The problem was how to utilise all the shiny new cores during one build to speed it up as much as possible, so test execution needed to run in parallel. Some commercial build servers promised to spread build targets over several agents. Unfortunately I had no opportunity to check them out; they cost well beyond my budget. The only free distributed JUnit runner I found used ComputeFarm JINI in a research project, which did not look mature enough for production use. Worth mentioning is GridGain's JunitSuiteAdapter, which is able to distribute JUnit tests across a cluster of nodes. GridGain is a free cloud implementation and really hot stuff, but it's not a build solution, so integrating it into the existing build would have been difficult.

As I did not find anything useful, I had to come up with a minimalist home-grown solution. I started with a plain JUnit target junitSequential which ran all tests in sequence:
<target name="junitSequential">

<junit fork="yes" failureproperty="failed"
haltonfailure="false" forkmode="perBatch">

<classpath>
<fileset dir="${lib.dir}" includes="*.jar" />
<pathelement location="${classes.dir}" />
</classpath>

<batchtest>
<fileset dir="${classes.dir}"
includes="**/*Test.class" />
</batchtest>
</junit>

<fail message="JUnit test FAILED" if="failed" />

</target>
I used haltonfailure="false" to execute all tests regardless of whether some of them failed. Otherwise <batchtest> would stop after the first broken test. With failureproperty="failed" and <fail if="failed" /> the build still failed if necessary. There is nothing special here.

Ant is able to run tasks in parallel using the <parallel> tag. (See my related post about forking several Ant calls in parallel.) A parallel-running target would look like this:
<target name="junitParallelIdea">
<parallel>
<antcall target="testSomeJUnit" />
<antcall target="testOtherJUnit" />
</parallel>
</target>
Good, but how to split the set of tests into Some and Other? My first idea was to separate them by their names, i.e. by the first letter of the test's class name, using the inclusion pattern **/${junit.letter}*Test.class in the <batchtest>'s fileset. So I got 26 groups of tests running in parallel.
<target name="junitParallelNamedGroups">
<parallel>

<antcall target="-junitForLetter">
<param name="junit.letter" value="A" />
</antcall>
<antcall target="-junitForLetter">
<param name="junit.letter" value="B" />
</antcall>
<antcall target="-junitForLetter">
<param name="junit.letter" value="C" />
</antcall>
<!-- continue with D until Z -->

</parallel>
</target>

<target name="-junitForLetter">

<junit fork="yes" forkmode="perBatch">

<!-- classpath as above -->

<batchtest>
<fileset dir="${classes.dir}"
includes="**/${junit.letter}*Test.class" />
</batchtest>
</junit>

</target>
forkmode="perBatch" created a new JVM for each group. Without forking each test class would get it's own class loader, filling up the perm space. Setting reloading="false" made things even worse. All those singletons started clashing even without considering race conditions. So I took the overhead of creating additional Java processes.

Unfortunately the grouping-by-letter approach had some problems. First, the number of threads needed to be limited with <parallel>'s threadsperprocessor or threadcount attribute, or else there would be 26 parallel processes competing for four cores. My experiments showed that two threads per processor performed best for the given set of JUnit tests. (Those JUnit tests were not "strictly unit": some tests called the database or web services, freeing the CPU while blocking. For tests with very little IO the numbers might have looked different.)
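
Just to illustrate, capping the letter groups would look roughly like this (a sketch; the target name junitParallelNamedGroupsCapped is made up for this example):
    <target name="junitParallelNamedGroupsCapped">
        <!-- at most two threads per processor instead of 26 parallel calls -->
        <parallel threadsperprocessor="2">
            <antcall target="-junitForLetter">
                <param name="junit.letter" value="A" />
            </antcall>
            <!-- B to Z as above -->
        </parallel>
    </target>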

Also my haltonfailure approach did not work, because <antcall> does not return any properties set inside the called -junitForLetter target, and no built-in Ant task supported that. But AntCallBack of the Antelope Ant extensions did the trick: after registering the custom task with name="antcallback", I replaced the plain <antcall>s with <antcallback target="..." return="failed">.
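
The registration and the call looked roughly like this. This is only a sketch: the Antelope class name ise.antelope.tasks.AntCallBack and the jar name are assumptions about the concrete setup.
    <!-- assumption: the Antelope tasks jar is available in ${lib.dir} -->
    <taskdef name="antcallback"
             classname="ise.antelope.tasks.AntCallBack"
             classpath="${lib.dir}/antelopetasks.jar" />

    <antcallback target="-junitForLetter" return="failed">
        <param name="junit.letter" value="A" />
    </antcallback>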

Separating JUnit test cases by their names produced unbalanced and therefore unpredictable results regarding overall execution time: depending on naming conventions, some groups would run much longer than others. Ant's Custom Selectors are a much better way to split a fileset into a given number of parts, producing balanced filesets with roughly the same number of test classes.
import java.io.File;

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.types.Parameter;
import org.apache.tools.ant.types.selectors.BaseExtendSelector;

public class DividingSelector extends BaseExtendSelector {

    private int counter;
    /** Number of total parts to split into. */
    private int divisor;
    /** Current part to accept. */
    private int part;

    public void setParameters(Parameter[] pParameters) {
        super.setParameters(pParameters);
        for (int j = 0; j < pParameters.length; j++) {
            Parameter p = pParameters[j];
            if ("divisor".equalsIgnoreCase(p.getName())) {
                divisor = Integer.parseInt(p.getValue());
            }
            else if ("part".equalsIgnoreCase(p.getName())) {
                part = Integer.parseInt(p.getValue());
            }
            else {
                throw new BuildException("unknown " + p.getName());
            }
        }
    }

    public void verifySettings() {
        super.verifySettings();
        if (divisor <= 0 || part <= 0) {
            throw new BuildException("part or divisor not set");
        }
        if (part > divisor) {
            throw new BuildException("part must be <= divisor");
        }
    }

    /** Distribute the files round-robin: accept only every file that falls into this part. */
    public boolean isSelected(File dir, String name, File path) {
        counter = counter % divisor + 1;
        return counter == part;
    }
}
One of the four available cores was used for static code analysis, which was very CPU intensive, and one was used for integration testing. The remaining two cores were dedicated to unit tests. Using four balanced groups of tests executing in parallel, the time spent on JUnit tests was halved again. Yippee!
<target name="junitParallel4Groups">
<parallel threadcount="4">
<antcallback target="-junitForDivided" return="failed">
<param name="junit.division.total" value="4" />
<param name="junit.division.num" value="1" />
</antcallback>
<antcallback target="-junitForDivided" return="failed">
<param name="junit.division.total" value="4" />
<param name="junit.division.num" value="2" />
</antcallback>
<antcallback target="-junitForDivided" return="failed">
<param name="junit.division.total" value="4" />
<param name="junit.division.num" value="3" />
</antcallback>
<antcallback target="-junitForDivided" return="failed">
<param name="junit.division.total" value="4" />
<param name="junit.division.num" value="4" />
</antcallback>
</parallel>
<fail message="JUnit test FAILED" if="failed" />
</target>

<target name="-junitForDivided">

<junit fork="true" failureproperty="failed"
haltonfailure="false" forkmode="perBatch">

<!-- classpath as above -->

<batchtest>
<fileset dir="${classes.dir}">
<include name="**/*Test.class" />
<custom classname="DividingSelector" classpath="classes">
<param name="divisor" value="${junit.division.total}" />
<param name="part" value="${junit.division.num}" />
</custom>
</fileset>
</batchtest>

</junit>

</target>
(Download source code of DividingSelector.)

Epilogue
Using this approach I kept the option to execute the tests one after another, with num=1 of total=1, providing an easy way to switch between normal and parallel execution. This was useful when debugging the build script...
<target name="junitSequential">
<antcallback target="-junitForDivided" return="failed">
<param name="junit.division.total" value="1" />
<param name="junit.division.num" value="1" />
</antcallback>
<fail message="JUnit test FAILED" if="failed" />
</target>

5 comments:

Eishay Smith said...

Pretty cool. A bit similar to my solution:
http://eng.kaching.com/2010/02/parallelizing-junit-test-runs.html
But still not fast enough. I'm looking for ways to make the build run in a distributed cluster.

Peter Kofler said...

You are right, distribution would be even faster. If you are looking for test distribution, you might try GridGain, which is able to do that (http://www.gridgainsystems.com/wiki/display/GG15UG/Distributed+JUnit+Overview).

I don't know of any free tools that would run your build in a cluster, but Hans Dockter of Gradle (http://www.gradle.org/) promised me that Gradle will have parallel and distributed builds in the near future.

ed said...

I had an even better idea: using "foreach". Now spamming everywhere to make the world a better place :)

http://solutionsdaily.blogspot.com/2010/06/making-junit-tests-run-parallely.html

Peter Kofler said...

I agree, your solution is quite elegant. Nice idea. I like it.

But what about forking? If you need tests forked, you can't use forkmode="once" because each test is a separate batch test.

Forking might be necessary to avoid PermSpace issues if you have many tests. If you disable class reloading for each test (and thus keep the PermSpace from filling up), you might still run into problems with the much-hated singletons.

ed said...

Yes, singletons are a problem. I have written a more detailed answer there as well.

I am not sure it is the best you can do, but I liked very much how the processes are automatically balanced between threads without any additional work on my side.