6 December 2009

First Eclipse DemoCamp Vienna

It all started when my buddy and Eclipse Xtext committer Michael Clay told me about the Eclipse DemoCamp idea with so much enthusiasm that I agreed to help him organise the first Eclipse DemoCamp in Vienna.

DemoCamp 2009 Title We joined forces with the local Java Student User Group and started inviting speakers. To our surprise there was absolutely no problem finding speakers and even sponsors. 200 emails later we ended up with 10 presentations and 8 sponsors (besides the Eclipse Foundation, which sponsors all demo camps).

When Jeff and Chris of EclipseSource offered a presentation, we were delighted but had to drop our own talks about Free Quality and Code Metric Plugins and Xtext NG to make room for these well-known Eclipse veterans. (So, as usual, there was no time for code quality topics ;-) Interest in attending as well as in speaking was quite high. I'm sure we could have added a second track but did not want to, because of Bjorn Freeman-Benson's recommendations for Eclipse X Days.

JSUG had organised the room and infrastructure for us. The enthusiasts of the core team helped the speakers prepare their stuff while I was handing out our Eclipse DemoCamp shirts. When the presentations started there were more than 80 people in the room. I had the pleasure of welcoming them and presenting the agenda. Although I only had ten slides I was very nervous; it was the first time I had spoken in front of that many people.

Werner Keil on STEM (© Markus Musil) Werner Keil started with an introduction to the Spatio-Temporal Epidemiological Modeler (STEM). He talked about its general application and importance. At the end he played an interesting video showing simulated infection rates throughout the world.

Robert Baumgartner on Lixto Visual Developer (© Markus Musil) Next the Lixto development team presented some features of their RCP-based screen scraper and showed how they used JFace components to render the different GUIs of the Lixto Visual Developer application.

Christoph Mayerhofer on ReviewClipse (© Markus Musil) The Eclipse plugin ReviewClipse was presented by Christoph Mayerhofer. It's a useful tool to make code reviews easier and thus more likely to be done. Obviously a code cop has to love it. Currently I'm reading all diffs from junior developers every morning. So I'll give it a try.

Chris Aniszczyk and Jeff McAffer on Toast (© Markus Musil) Chris Aniszczyk and Jeff McAffer are both seasoned speakers and furthermore good entertainers. In a whirlwind tour of the Toast demo application they showed what you can do with EclipseRT technologies. Believe me, it's cool stuff.

Tom Schindl on e4 (© Markus Musil) The last presentation of the first block was about e4, the future platform of Eclipse, given by well known Eclipse committer Tom Schindl. Tom is probably one of the most motivated Eclipse enthusiasts. You can feel the fire burning in him when he's talking about his work, very stimulating. And he hates singletons as much as I do. Good boy.

The planned half-hour break was way too short to eat 400 sandwiches along with water, Red Bull and smoothies. Michael even went to fetch some beer because Chris had written that he was looking forward to Austria and the Austrian Stiegl Bier. People were enjoying the sandwiches, standing together in small groups and chatting away. Unfortunately I didn't have time to talk to everybody I knew.

Robert Handschmann on Serapis (© Markus Musil) Our "modelling track" was opened by Robert Handschmann's demo of the Serapis language workbench. It looked mature and made model driven development quite easy. What I liked most was that Robert could answer every question about additional features by simply showing the feature in the workbench.

Maximilian Weißböck on Xtext (© Markus Musil) After that Maximilian Weißböck explained the basics of Model Driven Software Development and showed how easy it is to add features when you have a working modelling solution. He finished with the advice to always use a modelling approach, because the tools (read: Xtext) are mature enough to pay off even for small projects.

Karl Hönninger on OpenXMA (© Markus Musil) Next Karl Hönninger gave a short demo of openXMA, a RIA technology based on EMF and SWT. Now XMA is not your 'new and sexy' technology, but it's improving. The new openXMA DSL is based on Xtext and combines domain and presentation layer modelling. Karl used a series of one-minute screencasts to demo the XMA Eclipse tool chain. That's a good way to make sure the demos work while staying flexible enough to skip parts if needed.

Florian Pirchner on Redview (© Markus Musil) Then Florian Pirchner showed how they had created dynamic RCP Views enriched with Riena Ridgets that interpret their EMF model and update in real-time: Riena-EMF-Dynamic-Views. Redview seemed like magic to me (either that or I was already getting tired ;-).

Philip Langer on Model Refactorings (© Markus Musil) Unfortunately I was not able to attend the last presentation given by Philip Langer about model refactoring by example. Michael and I were busy preparing to move on to the chosen "beer place". In the end approx. 30 people made it there. In the nice atmosphere of our own room we had some beer and vivid discussions. For example I have some (blurred) memories of Michael showing slides of some Xtext presentation on his notebook. The evening ended when the waiter threw us out half an hour past closing time (midnight).

Organising a demo camp is work. But it's also fun. And obviously it paid off. It was great. Thank you everybody for making it such a nice evening. Maybe we'll see each other again next year.

9 November 2009

Java is So Old-School

This is going to earn me some flames - but wait for my explanation. The demise of Java has been discussed again and again for some time, and here is the proof: Yesterday I visited a jumble sale organised by the local Scout group. They offered many things for charity and had them well sorted. When browsing through their stock of cups, I found this little one:

Java Cup bought at jumble sale
Well, Java really has to be old-school if its cups are sold at jumble sales ;-)

3 November 2009

I am 1337!

Yesterday Andreas mailed me this screen shot from my Stack Overflow profile:

together with this explanation "elite => eleet => e1337 => 1337". Aha, this proves that I'm elite. Well - maybe - I don't know. But nevertheless thank you Andreas for feeding my delusions of grandeur. ;-)

6 October 2009

Coder's Dread

Bug of the Day
Yesterday I found a bug.
I tried to fix it but got stuck.
A pointer had gone completely mad,
had lost its reference, really bad.

Such problems are hard to track,
so many things one has to check.
I read the code, each single line.
A cold shiver ran up my spine.

A particular piece of code was really old,
not changed for years, truth (to) be told.
Its layout was a mess indeed.
"Code Format" was what I need(ed).

I looked a little bit around
and finally the bug was found:
Inside a method called "has Lock"
- an evil empty catch block.

Having found the cause of trouble,
I removed this piece of rubble,
extinguished every tainted line.
It had no chance, victory was mine.

30 September 2009

Slacking Off

... or why automatic checks are necessary.

Human Factors
I must confess, I'm a slacker. For example, I've been writing this post for three months and still haven't finished. I skip my workouts again and again. More important things just pop up all the time. Concepts like interesting or important are subjective, and priorities differ between individuals and change over time. So everybody has his or her sweet spot of slacking. It's impossible (and probably also unwise) to work hard on all aspects of life. When everything runs smoothly, people get sloppy. (Again, that might even be good for boring, repetitive tasks - except when a surgeon performs his 1,000th appendix extraction.) When things work out great, we might even get delusions of grandeur and bathe in the glow of our own greatness. Everybody does it, you do it, I do it. Only Chuck Norris does not.

Hmm. I'm mixing different behaviours here: slacking, sloppiness, laziness, lack of motivation, doing things half-heartedly, leaving things unfinished. I use all these words synonymously. I know that's not entirely correct. (Probably that's the reason I can't get this post into proper shape. I've already rewritten it five times. I know that I must not ship shit, but I'm getting tired. So I will have to live with it. I'm sloppy myself ;-)

Sloth There are several causes for this, e.g. lack of interest (I don't care), boredom (I'm doing it for the hundredth time), distraction (I'm not able to concentrate on it - I just love cubicle spaces), lack of background information (why do I do this crap?), fear of wasted effort (I might not need it later) and time pressure (I have no time to do it properly).

Oh My!
What implications do these factors have for code quality? (By code quality I mean the internal code quality, maintained by the developer day after day.) Consider a product 'A'. Features have been added to it for the last five years. The natural laziness of all developers has taken its toll. The code is a mess. Maintenance costs go up. Suddenly code quality gets important. Suddenly management is interested in coding conventions and development processes. Suddenly people are aware of the need for an architecture. Suddenly people want to stop slackerism. But when the product is in trouble, it's too late. Not really too late, as software is soft and can be changed at any time; it's just much more expensive. None of this is new. It's well known that software erodes over time. Slacking developers may just be one of the causes.

Check What?
After this lengthy introduction, here is my point: the need for automatic checks. Checks are good for you. (Like daily sit-ups.) Do them. Even better, set them up so you don't have to do them yourself. (Somebody does all the sit-ups for you. Every day. Isn't it great ;-) Remember: if it's not checked, it's not there. Paper is patient, automatic checks are not. Really, make your checks and reviews automatic. It's important, like your daily vitamins.

Automated testing is only one aspect of checking your code, albeit the most popular one. The test infected community already knows that if it's not tested, it's broken. So next to testing you need to check other aspects of your code, like coding conventions. Usually these include whitespace policy, formatting, naming and other design idioms. Coding conventions cover a much broader area than most people think. They are not only about naming. They are also about higher level boilerplate code, e.g. how to handle transactions, how to access the database, how to log, how to handle exceptions, etc. These things are project specific and depend on the overall architecture.

Slacker Vandalism? End Work Check It!
All projects have some sort of coding conventions. But are they complete? Are they documented? Do developers comply with them? Unlikely. They need to be documented and, even more, they need to be checked automatically. Probably most of your rules are not checked. It's time to write them down and define some concrete checks for them. Most tools and even some IDEs ship with basic rules for simple things like whitespace, naming or common coding idioms. These are perfect for a start. Start small. Use a few rules. You can always add more later. What you can check is limited only by your determination: design rules, layering, modularity, architecture, code coverage, documentation and much more.

The problem is that rule enforcement provokes opposition. People don't want to leave their cosy comfort zone. Discussing and agreeing on a new coding convention is not a problem. But adding a new rule to an already checked coding convention might be a fight. You have to convince developers to accept it. You have to argue with management for time to remove rule violations in legacy code. You have to struggle through, especially when you're only a grunt. Small steps are crucial. Don't press on it too much. If there is opposition, offer to drop the new rule. Make it look like there is the option of not having it. This enables discussion. (Of course that's not an option and you are not really offering it, but people like to have options to discuss.) As soon as some rules have shown their value, developers will vote for them even if you oppose them; so play the devil's advocate.

So let's finish this rant about human nature. I'm a slacker. Most likely there are some more in our trade. We must accept that. We are lazy. We make mistakes. Sometimes we are weak. That's normal. We just have to be aware of it. So be paranoid. Don't trust anyone. Automate anything that you might screw up. (Robustness #2) Automatic checks are your safety net. They help you avoid making the same mistake twice. If there is a bug in your code, create a unit test to ensure the bug stays fixed. If you have inconsistent formatting, add format checks to your daily build. If you notice wrong usage of a design idiom during a review, create a custom rule to enforce proper usage. If ... well, you get the point.
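To make the idea concrete, here is a minimal sketch of such a format check that could run in a daily build. It is not taken from any real build: the class name, the default source folder and the two rules (no tabs, no trailing whitespace) are my own assumptions, and a real project would rather use Checkstyle or PMD. It scans Java sources and exits with a non-zero code on any violation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WhitespaceCheck {

    // report tabs and trailing whitespace; returns the number of violations
    public static int countViolations(String fileName, List<String> lines) {
        int violations = 0;
        for (int i = 0; i < lines.size(); i++) {
            String line = lines.get(i);
            if (line.contains("\t")) {
                System.out.println(fileName + ":" + (i + 1) + " tab found");
                violations++;
            }
            if (!line.equals(line.replaceAll("\\s+$", ""))) {
                System.out.println(fileName + ":" + (i + 1) + " trailing whitespace");
                violations++;
            }
        }
        return violations;
    }

    public static void main(String[] args) throws IOException {
        // check every Java file under the given root (default: src)
        Path root = Paths.get(args.length > 0 ? args[0] : "src");
        int violations = 0;
        try (Stream<Path> files = Files.walk(root)) {
            List<Path> sources = files
                    .filter(f -> f.toString().endsWith(".java"))
                    .collect(Collectors.toList());
            for (Path source : sources) {
                violations += countViolations(
                        source.toString(), Files.readAllLines(source));
            }
        }
        if (violations > 0) {
            System.exit(1); // break the build
        }
    }
}
```

Hooked into the build, e.g. via an Ant <java> task with failonerror="true", the non-zero exit code fails the build automatically, which is exactly the point: nobody has to remember to run the check.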

All this leads to the 2nd Law of Code Quality - Automatic Checks to fight slackerism.

4 September 2009

Running JUnit in Parallel

Back in 2008 I had to speed up our daily build. (I should have posted about it long ago, but I just didn't get around to it. Recently, when I saw a related post on a similar topic, my bad conscience overwhelmed me.) The first thing was to get a faster machine, something with four 3 GHz cores. It worked excellently: all file based operations like compilation performed 3 times faster out of the box, thanks to the included RAID 0+1 disk array. As our automated tests took half of the total build time, I dealt with them first: I applied the usual optimisations described in my talk about practical JUnit testing (tuning section) and so managed to halve JUnit execution time.

Fork Good, but still not fast enough. The problem was how to utilise all the shiny new cores during one build to speed it up as much as possible. So test execution needed to run in parallel. Some commercial build servers promised to be able to spread build targets over several agents. Unfortunately I had no opportunity to check them out; they cost well beyond my budget. The only free distributed JUnit runner I found was using ComputeFarm JINI in a research project, which did not look mature enough for production usage. Worth mentioning is GridGain's JunitSuiteAdapter, which is able to distribute JUnit tests across a cluster of nodes. GridGain is a free cloud implementation; it's really hot stuff. But it's not a build solution, so integrating it into the existing build would have been difficult.

As I did not find anything useful, I had to come up with a minimalist home-grown solution. I started with a plain JUnit target junitSequential which ran all tests in sequence:
<target name="junitSequential">
    <junit fork="yes" failureproperty="failed"
           haltonfailure="false" forkmode="perBatch">
        <classpath>
            <fileset dir="${lib.dir}" includes="*.jar" />
            <pathelement location="${classes.dir}" />
        </classpath>
        <batchtest>
            <fileset dir="${classes.dir}"
                     includes="**/*Test.class" />
        </batchtest>
    </junit>
    <fail message="JUnit test FAILED" if="failed" />
</target>

I used haltonfailure="false" to execute all tests regardless of whether some failed or not. Otherwise <batchtest> would stop after the first broken test. With failureproperty="failed" and <fail if="failed" /> the build still failed if necessary. There is nothing special here.

Ant is able to run tasks in parallel using the <parallel> tag. (See my related post about forking several Ant calls in parallel.) A parallel running target would look like this:
<target name="junitParallelIdea">
    <parallel>
        <antcall target="testSomeJUnit" />
        <antcall target="testOtherJUnit" />
    </parallel>
</target>
Good, but how to split the set of tests into Some and Other? My first idea was to separate them by their names, i.e. by the first letter of the test's class name, using the inclusion pattern **/${junit.letter}*Test.class in the <batchtest>'s fileset. So I got 26 groups of tests running in parallel.
<target name="junitParallelNamedGroups">
    <parallel>
        <antcall target="-junitForLetter">
            <param name="junit.letter" value="A" />
        </antcall>
        <antcall target="-junitForLetter">
            <param name="junit.letter" value="B" />
        </antcall>
        <antcall target="-junitForLetter">
            <param name="junit.letter" value="C" />
        </antcall>
        <!-- continue with D until Z -->
    </parallel>
</target>

<target name="-junitForLetter">
    <junit fork="yes" forkmode="perBatch">
        <!-- classpath as above -->
        <batchtest>
            <fileset dir="${classes.dir}"
                     includes="**/${junit.letter}*Test.class" />
        </batchtest>
    </junit>
</target>

forkmode="perBatch" created a new JVM for each group. Without forking, each test class would get its own class loader, filling up the perm space. Setting reloading="false" made things even worse: all those singletons started clashing, even without considering race conditions. So I accepted the overhead of creating additional Java processes.

Streets of Split Unfortunately the grouping-by-letter approach had some problems. First, the number of threads needed to be specified with <parallel>'s threadsperprocessor or threadcount attribute, otherwise there would be 26 parallel processes competing for four cores. My experiments showed that two threads per processor performed best for the given set of JUnit tests. (Those JUnit tests were not "strictly unit"; some tests called the database or web services, freeing the CPU while blocking. For tests with very little IO it might have looked different.)

Also my haltonfailure approach did not work because <antcall> does not return any properties set inside the called -junitForLetter target. There was no built-in Ant task that supported this. But AntCallBack of the Antelope Ant extensions was able to do the trick: after registering the custom task with name="antcallback" I replaced the plain <antcall>s with <antcallback target="..." return="failed">.

Separating JUnit test cases by their names produced unbalanced and therefore unpredictable results regarding overall execution time. Depending on naming conventions some groups would run much longer than others. Ant's Custom Selectors are a much better way to split a fileset into a given number of parts producing a few balanced filesets with roughly the same number of test classes.
import java.io.File;

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.types.Parameter;
import org.apache.tools.ant.types.selectors.BaseExtendSelector;

public class DividingSelector extends BaseExtendSelector {

private int counter;
/** Number of total parts to split. */
private int divisor;
/** Current part to accept. */
private int part;

    public void setParameters(Parameter[] pParameters) {
        for (int j = 0; j < pParameters.length; j++) {
            Parameter p = pParameters[j];
            if ("divisor".equalsIgnoreCase(p.getName())) {
                divisor = Integer.parseInt(p.getValue());
            } else if ("part".equalsIgnoreCase(p.getName())) {
                part = Integer.parseInt(p.getValue());
            } else {
                throw new BuildException("unknown " + p.getName());
            }
        }
    }

    public void verifySettings() {
        if (divisor <= 0 || part <= 0) {
            throw new BuildException("part or divisor not set");
        }
        if (part > divisor) {
            throw new BuildException("part must be <= divisor");
        }
    }

    public boolean isSelected(File dir, String name, File path) {
        counter = counter % divisor + 1;
        return counter == part;
    }
}
One of the four available cores was used for static code analysis, which was very CPU intensive, and one was used for integration testing. The remaining two cores were dedicated to unit tests. Using four balanced groups of tests executing in parallel, the time spent on JUnit tests was halved again. Yippee!
<target name="junitParallel4Groups">
    <parallel threadcount="4">
        <antcallback target="-junitForDivided" return="failed">
            <param name="junit.division.total" value="4" />
            <param name="junit.division.num" value="1" />
        </antcallback>
        <antcallback target="-junitForDivided" return="failed">
            <param name="junit.division.total" value="4" />
            <param name="junit.division.num" value="2" />
        </antcallback>
        <antcallback target="-junitForDivided" return="failed">
            <param name="junit.division.total" value="4" />
            <param name="junit.division.num" value="3" />
        </antcallback>
        <antcallback target="-junitForDivided" return="failed">
            <param name="junit.division.total" value="4" />
            <param name="junit.division.num" value="4" />
        </antcallback>
    </parallel>
    <fail message="JUnit test FAILED" if="failed" />
</target>

<target name="-junitForDivided">
    <junit fork="true" failureproperty="failed"
           haltonfailure="false" forkmode="perBatch">
        <!-- classpath as above -->
        <batchtest>
            <fileset dir="${classes.dir}">
                <include name="**/*Test.class" />
                <custom classname="DividingSelector" classpath="classes">
                    <param name="divisor" value="${junit.division.total}" />
                    <param name="part" value="${junit.division.num}" />
                </custom>
            </fileset>
        </batchtest>
    </junit>
</target>

(Download source code of DividingSelector.)
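To illustrate why this selector produces balanced groups, here is a small standalone sketch of the same modulo scheme. The class and the test names are made up for the demonstration: files are dealt out round-robin like playing cards, so the part sizes differ by at most one.

```java
import java.util.ArrayList;
import java.util.List;

public class RoundRobinDemo {

    // the same scheme as DividingSelector's isSelected: the counter
    // cycles 1, 2, ..., divisor, 1, 2, ... over the incoming files
    public static List<List<String>> divide(List<String> files, int divisor) {
        List<List<String>> parts = new ArrayList<>();
        for (int i = 0; i < divisor; i++) {
            parts.add(new ArrayList<>());
        }
        int counter = 0;
        for (String file : files) {
            counter = counter % divisor + 1;
            parts.get(counter - 1).add(file);
        }
        return parts;
    }

    public static void main(String[] args) {
        List<String> tests = List.of("ATest", "BTest", "CTest",
                "DTest", "ETest", "FTest", "GTest");
        List<List<String>> parts = divide(tests, 4);
        for (int i = 0; i < parts.size(); i++) {
            System.out.println("part " + (i + 1) + ": " + parts.get(i));
        }
        // the four part sizes are 2, 2, 2 and 1 - balanced regardless
        // of how the test classes are named
    }
}
```

Splitting by name, in contrast, puts every class starting with the same letter into the same group, so group sizes depend entirely on naming conventions.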

Using this approach I kept the option to execute the tests one after another with num=1 and total=1, providing an easy way to switch between normal and parallel execution. This was useful when debugging the build script...
<target name="junitSequential">
    <antcallback target="-junitForDivided" return="failed">
        <param name="junit.division.total" value="1" />
        <param name="junit.division.num" value="1" />
    </antcallback>
    <fail message="JUnit test FAILED" if="failed" />
</target>

15 August 2009

Type Parameters and Reflection

Last week I had an opportunity to participate in the development of XMAdsl, an Xtext based model driven development extension of openXMA. In the new release we want to use Value Objects for all relevant values. E.g. the birth date of a person should not be an arbitrary java.util.Date, but a specific BirthDateValue instance. The Value Object hierarchy looks like this:
abstract class ValueObject<T> {
    private final T value;
    public ValueObject(T pValue) {
        value = pValue;
    }
    // shortened for brevity...
}

class ValueObjectDate extends ValueObject<Date> {
    public ValueObjectDate(Date pDate) {
        super((Date) pDate.clone()); // Date is mutable
    }
    // shortened for brevity...
}

class BirthDateValue extends ValueObjectDate {
    public BirthDateValue(Date pValue) {
        super(pValue);
    }
}
ValueObject, ValueObjectDate and all other Value Object base classes are packaged in the platform used by the generator. BirthDateValue and all other concrete subclasses are generated depending on the model. Such method signatures are more readable, and passing a date of payment as a birth date by accident becomes impossible. (Value Objects have other advantages, e.g. being read only, but that's not the point here.)
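As a self-contained sketch of that point (PaymentDateValue is a hypothetical sibling of BirthDateValue, invented here for the example; the real generated classes may look different): with distinct Value Object types, the compiler rejects swapped arguments that plain java.util.Date parameters would silently accept.

```java
import java.util.Date;

public class ValueObjectDemo {

    public static class ValueObject<T> {
        private final T value;
        public ValueObject(T pValue) { value = pValue; }
        public T getValue() { return value; }
    }

    public static class BirthDateValue extends ValueObject<Date> {
        public BirthDateValue(Date pValue) { super(pValue); }
    }

    // hypothetical sibling class, made up for this demonstration
    public static class PaymentDateValue extends ValueObject<Date> {
        public PaymentDateValue(Date pValue) { super(pValue); }
    }

    // with two plain Date parameters, swapped arguments would still
    // compile; distinct Value Object types make the compiler reject it
    static void register(BirthDateValue birth, PaymentDateValue payment) {
        System.out.println("birth=" + birth.getValue()
                + ", payment=" + payment.getValue());
    }

    public static void main(String[] args) {
        Date today = new Date();
        register(new BirthDateValue(today), new PaymentDateValue(today));
        // register(new PaymentDateValue(today), new BirthDateValue(today));
        // would not compile: incompatible types
    }
}
```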

Reflection Pool All Value Objects use the ValueObjectType Hibernate User Type for persistence. Obviously the User Type has to know the type of the Value Object to create new instances, and the type of the value inside it to map it onto the database column. But how to get the inner type? Our first approach was to configure it. As the Hibernate configuration is generated as well, that's no big deal. Nevertheless it's a bit lame and definitely not DRY.

So the question is how to find the type parameter of a generic super class. In the given case the inner value is passed as constructor argument, so we can easily do better using reflection on the constructor parameter:
for (Constructor<?> c : pValue.getConstructors()) {
    if (c.getParameterTypes().length == 1) {
        return c.getParameterTypes()[0];
    }
}
// throw an exception
Nice. That fixed our problem but made me think further. What if there were no method or field with the concrete type of the type parameter, e.g. when using collections as super classes without defining any new methods? It's also possible to find the type using reflection:
public Class<?> findType(Class<?> pValue) {
    if (pValue == Object.class) {
        // throw an exception
    }

    // is it generic itself?
    if (pValue.getTypeParameters().length > 0) {
        return (Class<?>) pValue.getTypeParameters()[0].getBounds()[0];
    }

    // is the super class typed?
    if (pValue.getSuperclass().getTypeParameters().length > 0) {
        Type superClass = pValue.getGenericSuperclass();
        ParameterizedType type = (ParameterizedType) superClass;
        return (Class<?>) type.getActualTypeArguments()[0];
    }

    // go up to super class
    return findType(pValue.getSuperclass());
}
A class knows whether its super class had type parameters, using getGenericSuperclass(). If the class has a type parameter itself, we don't know the actual type due to erasure, but at least we get its upper bound. In all other cases we go up the class hierarchy. Finally, a little test to see if it works as expected...
public void testFindType() {
    assertEquals(Date.class, findType(BirthDateValue.class));
}
18 July 2009

Holiday Blues

Holidays in Greece: Chilling out in the sun, spending time with the family, having some rest and much free time. Time to think about my job, my life and the whole universe ;-) However too much thinking makes me sad...

Greek Beach It has been more than half a year now since I left Herold for good. In the end work there got quite boring and some things really sucked. But after staying there for five years the development team had become my family. My bond with my colleagues kept me from leaving earlier and made it a really hard decision when the time for something new had finally come. I still remember them, and from time to time I suffer moments of painful memories. I do not know how it came to be like that. There were no activities together, no common hobbies to talk about, not even geek interests to compete in. We did not meet after work or go out for a beer together, at least not more than once or twice a year.

Nevertheless I am thinking of them: Alex making his acrid remarks; Andreas refreshing me with new ideas, that always seemed to be much too revolutionary; Anton, although we hardly talked to each other; Ben making us play network games after work; Claudia my "soul-sister"; Claudio, exchanging stories about our misbehaving sons during lunch break; Dominik, always calm and relaxed; Martina having much too much energy; Petra; Richard testing a new database tool each week; Ronnie; Sylvi; Tim making me laugh about all his carpenter jokes; and Vero, the caring soul of the team, although she could really be mean sometimes.

My dear colleagues I miss you.

3 June 2009

Practical Unit Testing

At the beginning of June I gave a presentation about practical unit testing with JUnit at the Java Student User Group in Vienna. JSUG is a small group of dedicated students that formed last year. The scope was practical as well as pragmatic - in fact just bits of information I considered useful for daily development. It was good to see young developers who are eager to write tests. I hope that when they are thrown into legacy code later in their careers, their principles will not just crumble.

Download the Practical Unit Testing presentation slides.

Projector: Capitol Theatre in Westbank Resources used in the slides:
This section contains links to topics I talked about. I give a list of all my sources. Hopefully Google does not punish me for creating this link-farm like page ;-)

JUnit basics used sources from Unit testing (Wikipedia), JUnit.org & JUnit, Early look at JUnit 4, Design to Unit Test, Test-driven design, Part 1 & Part 2; Checkstyle, FindBugs and PMD.

The mocking chapter used sources from jMock, EasyMock (Easier testing with EasyMock), Initializing bean using EasyMock, Mocking & Spring, Oh no, we're testing the Mock! and the Law Of Demeter.

Singletons are a pain for testing: Patterns I Hate, Singletons Are Evil and Why Singletons are Evil, Refactor singleton, Test flexibly with AspectJ and mock objects with AspectJ and the Google Singleton Detector.

There are some tools to help testing J2EE apps: HtmlUnit/HttpUnit, HttpClient, Testing Servlets and ServletUnit, Jetty, Cactus, Simple-JNDI, MockEJB and ActiveMQ.

Tuning the tests means often tuning the database: DbUnit, H2 and HSQLDB.

The Code Coverage chapter talks about EMMA (EclEmma), Cobertura, Agitar but Don't be fooled by the coverage report, Crap4j, Testability Explorer.

A new trend is testing with scripts: Unit test your Java code faster with Groovy or Using JRuby for Java testing, RSpec, JRuby, JtestR and ScalaCheck/Specs.

Finally some cool tools for testing and the build are JUnitPerf, SWTBot, XmlUnit, Distributed JUnit and GridGain.

29 May 2009

Public Relations

It seems that I am particularly unlucky in finding support for my interest in code quality. I can't help ranting so I will tell you this little story.

Some years ago, shortly after my "career" as code cop began, I wrote a series of articles about code quality and daily build techniques for a Java magazine (something I enjoy doing from time to time). My boss knew about it but was not interested, so I wrote the articles in my free time. (As you might have noticed, I am not a gifted writer, and filling all the gaps between facts and source code took the best part of most weekends that year.) The first part was published soon after.

Depressing Day After some time the second part of the series was published. This time the head of the IT department (the boss of my boss) learned about it and was impressed. He thought I had done it during work hours, or at least should have. I was happy and thankful for his appreciation and of course did not object to being paid for the time already spent. But my boss did not like the idea. He kept saying that we (the company I was with back then) were not a company providing IT services and did not need to show any technical expertise or qualification to the outside world. However, as he was forced to pay me for those hours, a rather difficult time began. In the end I regretted having shown the article to anyone.

In fact the whole story depressed me a lot and made me accept an offer from another company half a year later. During my interview there I was promised support for writing technical articles. Later, when a new article was due, I wanted to call in that promise. It turned out that neither a budget nor a process was available for such activities, so the support melted down to being allowed to write in my free time. I was pissed, but in the end I couldn't blame my boss. It was entirely my fault: due to my lack of bargaining skills I hadn't pinned down the promised support to something concrete. (Technically my boss had not lied to me.) So from then on I neither asked for nor expected any support. If they didn't want it, they should have said so straight away in the first place.

But that was not the end...
Several months later I attended a department meeting on improving internal education, knowledge transfer and being more professional in general. It turned out that the head of the department (the boss of my boss) wanted more activities seen by the general public and that they had problems fulfilling the request of upper management to place an article in internal journals from time to time. Imagine my confusion when I heard that.

Yesterday a colleague brought a small, local conference on software quality to my attention (shame on me, I hadn't known about it before) and proposed that I submit a talk. Again I tried to get some support for it. I am able to prepare stuff at home (at the weekends) but I can't afford taking days off to give a talk. This time I contacted the boss of my boss directly, because I remembered his (feigned?) interest in public relations. Unfortunately he delegated the decision back to my boss. As we already know, she does not deem public activities necessary. :-( My colleague was surprised, too. We remembered the things discussed during the departmental meeting and both had thought that management wanted more public relations. (Otherwise I would have started questioning my senses.)

Depressed Wander
What is going on here? Are they all dreamers, talking about things they would like to have while knowing they can never afford them? Or is it just another form of the suppressing-code-quality-because-we-don't-have-any syndrome? (In full I call this the suppressing-code-quality-topics-because-we-know-that-we-do-not-have-any-quality syndrome, and I will write about it later.) I don't know. But I do know that it was the last time they tricked me. (They've already tricked me twice, shame on me.)

2 May 2009

Fragments of cool code

From time to time I stumble over a piece of code that just looks "cool". For example date literals in Ruby:
class Fixnum
  def /(part)
    [self, part]
  end
end

class Array
  def /(final_part)
    Time.mktime(final_part, self[1], self[0])
  end
end

13/11/2008 # => Thu Nov 13 00:00:00 +0100 2008
or a proper name for a throw:
catch (Exception up) {
  // log the error
  throw up;
}
The best name for a JUnit 4 test fixture (free to choose) I ever saw was in a presentation given by Peter Petrov:
@Before
public void doBeforeEachTest() {
  // ...
}
A good name for a Ruby binding was
module Regenerated
  Inside = binding
end

eval(script, Regenerated::Inside)
where the evaluation is done within the scope of the module Regenerated, which acts as a namespace.

22 April 2009

About the Code Cop

The Big Idea
Three weeks ago I attended the eJug Days in Vienna. It was a little conference with some nice presentations. I hadn't been to a conference for some time and was highly motivated. At the end of one particularly cool talk, when the speaker showed his last slide containing the address of his personal web site, a thought hit me. "Man", I thought, "I should have some nice web page with all my stuff put together in one place." This should save my job from going to India ;-) This was the beginning of Code Cop dot org.

I am a senior developer and have been working with Java and internet-related technologies since 1999. And I am a completionist. I like my code to be in order: nicely formatted, readable, properly named, well designed, tested etc. In fact I am fanatical about it, sometimes even compelled to keep it neat. For example, a colleague calls me over to his place to discuss some problem and I spot some minor flaw in the code on the screen, e.g. he wrote static public instead of public static. It drives me mad. I am unable to listen to him until the flaw is fixed. I can tell you, it's a vice. (And rumour has it that there are colleagues who write such crap on purpose, just to tease me.) I don't know if it's the influence of my Virgo ascendant or the beginning of some weird mental disorder. But I do know that studying mathematics and doing research work did not help me become less freaky.

Code Cop T-shirt reading Hard KoR Code Cop
I started working on code quality in 2004. During my time at Herold Business Data I was responsible for the code quality of the Online Services, running the daily build with Apache Ant and loads of tools. There, after years of harassing my dear colleagues with code and daily-build issues, I was officially appointed "Code Cop" in 2006 (and even called a "Code Nazi" once). Later I published some articles about being a Code Cop :-). Now I have moved all my QA-related material from different sources here, and I will add everything I have not published yet.

Like Why the Lucky Stiff I'm an "aspiring author with no true achievements under my belt". I am interested in code quality tools, code generation techniques and recently in languages like Ruby and Scala. I have a PhD in mathematical computer science (a mixture of applied mathematics and computer science) from the Vienna University of Technology. I live in Austria.

Scott Hanselman advises licensing your blog: all the content provided on code-cop.org is licensed under Creative Commons Attribution 3.0 (CC 3.0 BY for short), except where noted otherwise. This means you can share or remix the work as long as you attribute the original work to me. Note that some images might have a different licence. For code I usually use the New BSD License.

It's time to thank some people for their help: Christoph Kober (Polychrom) for the cool Code Cop logo you see above on the right; back in grammar school he already made the fanciest drawings. Claudia Gillmeier (Aurian) for her help with the design and layout. Stefan Nestelbacher, who designed my first T-shirt back in 2006. Further thanks to Douglas Bowman (Minima Stretch template), Mike Samuel (Prettify), phydeaux3 (Tag Cloud) and all the folks at Flickr for releasing their images under the Creative Commons licence. I just love your images, keep going! Last but not least I want to thank my significant other Kasia for proofreading and for supporting me in setting up this site.

5 April 2009

Sony Ericsson Theme File Format

Recently I wanted to pimp my Sony Ericsson mobile and add some nice themes (.thm files). There were plenty of them around and I just downloaded a collection of several hundred free ones. Unfortunately most came without a preview image, and going through them on my mobile to find nice ones was painstaking and slow: I had to activate each theme, return to standby mode, check out the main image, return to the theme manager, and start over again. What I needed was some kind of theme browser. I did not trust the Sony Ericsson Themes Creator application to provide browsing and did not bother installing it.

Sony Ericsson G705 - Keypad
Being a hard-core developer, I had a look into the THM files. I STFW but could not find any resources explaining the file format. So I explored it a bit myself. Here is what I found, together with some Ruby code.

Sony Ericsson Theme File (.THM) Format
The file contains several files concatenated together, each with some kind of header before its data.
  • At 0x0000 a null-terminated string with the name of the original file. Remaining bytes are all null. This is read with name = data[ofs..ofs+0x63].strip.
  • At 0x0064 several null-or-blank-terminated strings containing octal numbers. The number fields may contain blanks which are treated as separators as well. Parse them with
    numbers = data[ofs+0x64..ofs+0x100].
      split(/[\0 ]+/).collect { |n| n.to_i(8) }
  • The only important number is the fourth (body_len = numbers[3]), which is the length of the data body in bytes. If this number is not positive then the file is corrupt.
  • At 0x0101 there is, most of the time, the string ustar, and at 0x0129 nogroup, but we do not need them.
  • At 0x0200 starts the data: body = data[ofs+0x200...ofs+0x200+body_len].
  • After 0x0200+body_len the space is filled with nulls up to the next entry. We just skip them until data[ofs] != 0.
  • Repeat these steps until EOF.
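Put together, the steps above can be sketched like this (a minimal reconstruction of my own, not the full program; the method name is made up):

```ruby
# Sketch of the THM parsing steps above; offsets follow the description.
def read_entries(data)
  entries = {}
  ofs = 0
  while ofs < data.size
    # 0x0000: null-terminated original file name
    name = data[ofs, 0x64].strip
    # 0x0064: octal numbers, null- or blank-separated
    numbers = data[ofs + 0x64, 0x9D].split(/[\0 ]+/).
      collect { |n| n.to_i(8) }
    # the fourth number is the length of the data body
    body_len = numbers[3]
    raise "corrupt entry #{name}" if body_len.nil? || body_len <= 0
    # 0x0200: the data body itself
    entries[name] = data[ofs + 0x200, body_len]
    # skip the null padding up to the next entry
    ofs += 0x200 + body_len
    ofs += 1 while ofs < data.size && data.getbyte(ofs) == 0
  end
  entries
end
```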
Usually the first item in the theme file is called Theme.xml. This is an XML configuration file and looks something like
<Sony_Ericsson_theme version="4.5">
  <Background Color="0xb5f8fd"/>
  <Background_image Source="desktop.png"/>
  <Desktop Color="0xb5f8fd"/>
Sometimes it is not named that way, but any XML file will do. The Color looks like a regular HTML colour code. The Source is a reference to an image stored inside the theme file. The important image types are
  • Standby_image - main image.
  • Desktop_image - background in the menu.
  • Popup_image - background of popup.
  • There are a lot more, but often these three images are reused in smaller themes.
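Finding the image name referenced in the XML needs only a one-line regex (a sketch of my own, assuming the attribute layout shown in the example above):

```ruby
# Sketch: fish the standby image name out of Theme.xml with a regex.
def standby_image_name(theme_xml)
  theme_xml[/<Standby_image[^>]*\sSource="([^"]+)"/, 1]
end
```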
Knowing the standby image name, it is possible to extract it to the file-system (File.open(filename,'wb') {|io| io.write body}). Then browsing these preview images with some image utility and deleting unwanted ones is a piece of cake. The whole Ruby program is here. Have fun!

27 March 2009

Soft Skills

In our department there was a small team of three people organising internal training, maintaining the library etc. One of them left, and later, in a departmental meeting, there was a discussion about replacing the missing member. The remaining two guys said that they already had difficulties managing and synchronising themselves, so a third person would not help and was therefore not needed. Well, that sounds reasonable so far.

Stop
Some days later my boss asked me if I would like to join the team, which I declined because, as I had understood it earlier, they did not want a third person, even if management thought they needed one. (Well, maybe that came from the management "idea" that having only two employees working on a topic is risky, because both might decide to leave the company - but then, so might three...)

Afterwards I talked to the head of this "education" team. I had declined not out of disrespect for their work but because I knew that they didn't want a third person. I was surprised to learn that they had already found a suitable replacement for the missing third member. They were so happy that someone they liked had volunteered. Huh?

Did I get something wrong in the first place, or did they communicate vaguely on purpose, so they could choose their third member themselves? The whole story is just strange. Obviously I lack some soft skills in communication...

23 March 2009

Resources to Start Scala

The Scala Programming Language has been around for some time and started getting popular in 2007. A good place to start are the tutorials included in the distribution, which are also available online: A Scala Tutorial (a very short 15 pages), Scala By Example (already 145 pages) and The Scala Language Specification (a full 180 pages).

Early multimedia resources that piqued my interest were Martin Odersky's talk The Scala Experience at JavaOne 2007 and the 62nd episode of Software Engineering Radio. Martin Odersky also gave further talks at JavaOne 2008 and JAX'08.

In January 2008 Ted Neward began his busy Java developer's guide to Scala, which started with material from the Scala Tutorial but went into greater detail later. Another blogger who is definitely worth mentioning is Daniel Spiewak, who wrote the nice Scala for Java Refugees series as well as posts on special topics like Integrating Scala into JRuby. Another piece worth recommending is Dean Wampler's blog The Seductions of Scala. James Iry has to be mentioned for exploring more theoretical stuff in nice little chunks.

After spending some time with Scala, I went for the only book available, Programming in Scala, a comprehensive step-by-step guide with a massive 754 pages. Unfortunately it did not ship for almost nine (!) months, and I do not like ebooks. (I know - I should have read the whole page when ordering: the book was not printed, not even finished, back then.) However, now it is in print. Since January it has been standing on my bookshelf, torturing my conscience.

I have to pull myself together and finally start reading it!

21 March 2009

Slimming VMware Player Installation

Going Virtual @ Fiddler's Green
Recently I had to play around with some VMware images. After installing the VMware Player (current version 2.5.1) I had problems connecting my image to the network. Finally I managed, but I learned it the hard way. IMHO the best way to run a virtual network is bridged, using a static IP address for each image. (Just make sure the addresses are all in the same subnet.) This description applies to Windows operating systems.

Network Connections
VMware Player installs two additional network adapters on the host computer.
  • Virtual switch VMnet1 is the default for Host Only networking. Using this network adapter, virtual machines can't access the outside network. - Disable it, we don't need it!
  • Virtual switch VMnet8 is the NAT switch. Here virtual machines can access the outside network using the host's IP address. - Disable it, we don't need it!
  • Just make sure the VMware Bridge Protocol is enabled for your main network adapter, so virtual machines can access the outside network using their own IP addresses.

VMware Bridge Protocol is enabled

I noticed that VMware Player also installs several services, which I don't like, especially as the purpose of some of them is unclear.
  • vmnat.exe is the NAT service. It is needed for VMnet8. Deactivate it, we don't need it!
  • vmnetdhcp.exe is a virtual DHCP service. It's only needed if you use DHCP in your images and do not have a real DHCP server set up and running. Deactivate it, we don't need it for static IP addresses.
  • vmware-authd.exe is the authorisation service and controls authentication and authorisation for access to virtual machines for non admin users. Probably you don't need it, so deactivate it.
  • vmware-ufad.exe is the host process for Ufa Services. It's not active by default, so leave it deactivated.
  • vmware-tray.exe is probably the same as hqtray.exe, but it was not installed on my computer.
  • hqtray.exe is the host network access status tray. Unless you want to see network traffic in the taskbar, deactivate it. (This is not a regular service, it is started at system start-up. You have to delete it from the registry; its key is below HKEY_LOCAL_MACHINE\ SOFTWARE\ Microsoft\ Windows\ CurrentVersion\ Run.)
If you have to use Host Only or NAT, leave the corresponding network and service settings alone.

And watch your back: you can't connect anywhere from the virtual image when your local firewall is dropping all unknown traffic ;-)

14 March 2009

Law of Code Quality: Consistency

Fall Mixture
Imagine you have some MDBs (Message Driven Beans). You want to get rid of them because they are still EJB 2 and they suck. You want to use Spring's JMS capabilities instead. Sounds quite good. But after some time, due to some budget problem or a pending deadline, you stop converting the old stuff "because it does not add value" (which is almost guaranteed to happen). So you end up with MDBs plus Spring JMS mixed throughout the code, and maintenance people have to know both. It's difficult enough to know one of them, but now you are stuck with both solutions messed up together in the long run.

Copy & Paste
In brownfield development you always have some code before you start. The look, quality and design of this code is very important for further development. Extending complex legacy stuff often involves a copy-and-paste style of coding. (Whether this is preferable is a story in its own right and will not be discussed here.) In the context of maintenance I consider copy and paste a good habit, because this way the existing conventions and patterns are obeyed, even if they are not documented. Of course the positive effects of copy and paste are only achieved if the "right" piece of the system is copied. Like reference solutions in generator development, aka templates, the copied piece has to be of the highest quality, i.e. conforming to the conventions and guidelines defined for the application.

Implicit Conventions and Uniformity
If the guidelines and conventions of some software are not written down properly, the only documentation is the code itself. Even with decent documentation, capturing all aspects of software development is very difficult at best, or can't be done at all. There are always some implicit conventions that are only available in the code. The more uniformly an application satisfies these conventions and designs, the more pieces of it serve as "safe" templates for further extensions and modifications. This helps new members of the project find their way around. And good (maintenance) developers see these implicit conventions in the surrounding style and patterns, adapt to them and work according to them. Uniform code makes it easier for them to adapt to the new code base and get up to speed.

Broken Window Above
Broken Windows
As the Pragmatic Programmers taught us in rule 4, broken windows cause problems in real as well as virtual life. If the conventions are visible and clear, who would dare to stand out by breaking them? On the other hand, if e.g. a piece of code is formatted in three different ways, there is no shame in introducing a new style. People tend to stick with the things they know, because it is faster and feels more secure. (And usually we consider what we know superior to things we do not know.) This leads to patchwork in the code. This mixture gets maintained (read: copied) from time to time, all the broken windows get spread throughout the code base, and the differences live on and grow.

So I postulate the 1st Law of Code Quality - Code Consistency. It applies to source layout, naming and other coding conventions, typical code fragments (also called idioms, most of the time some boilerplate code), design concerns, layering, architecture, used libraries, technologies etc. Consistency in the code is the most important issue. Failing to have a consistent code base will cause all the troubles known from mixed designs and mixed technologies, making it difficult to maintain the code and to get new people into the team.

Living with Changes
Of course we have to change things again and again. It's easy to keep consistency in simple conventions like formatting: just use Eclipse format on the whole source tree and put Checkstyle into your build. Short code idioms, like getting a database connection, are more difficult. These are rarely documented, but once you have them unified, a Ruby or Groovy script can do almost any syntactical change to your code using powerful regular expressions. More complex changes, e.g. replacing EJB 2 with something else, are more involved. However, just don't make the mistake of leaving the old stuff as it is. No excuses about small budgets, needed retesting, pending deadlines and such! If you can't convert the old stuff, then stay with it. If you are too "weak" to change it, you have earned sticking with it. Bringing in new technologies needs a strong plan, better a script, to convert everything existing to the new style. All remaining code has to be changed to use the new technologies as they are supposed to be used. Typical idioms and best practices of that technology should be obvious at the end. You don't want a Java program coded like old C: a few classes with lots of static, monolithic methods, static data etc. The same is true for any refactoring.
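For instance, a throwaway Ruby script for a simple idiom change might look like this (a toy sketch; the path and the modifier-order pattern are made-up examples):

```ruby
# Toy sketch: unify the modifier order "static public" to "public static"
# across a source tree. Path and pattern are invented for illustration.
Dir.glob('src/**/*.java').each do |file|
  source = File.read(file)
  fixed = source.gsub(/\bstatic\s+public\b/, 'public static')
  # only rewrite files that actually changed
  File.write(file, fixed) if fixed != source
end
```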

A Brighter Future?
To keep your code base consistent you need help. Especially in the beginning you need someone, or something, that flags new inconsistencies and reminds you of the conventions. Use static code analysis as soon as you have identified a consistency target. (That might be as simple as grep *.java.) The proper (consistent) way to do something should be enforced from day one; unfortunately documenting it is not enough. For layout use tools like Checkstyle. Other conventions and boilerplate code can be checked with tools that support custom rule definitions, e.g. FindBugs or PMD. Nowadays many tools come with a large number of base rules that cover common stuff, and most likely some are of use. Modularity and layering are enforced with reference checking, e.g. Macker or SonarJ. With some fantasy (and enough computing power on your build machine) you can create quite sophisticated checks.
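Such a check can start out embarrassingly simple, in the grep *.java spirit (a hypothetical rule of my own; the legacy-MDB pattern is just an example):

```ruby
# Hypothetical consistency check: list source files still implementing the
# legacy MessageDrivenBean interface. The rule itself is only an example.
offenders = Dir.glob('src/**/*.java').select do |file|
  File.read(file) =~ /implements\s+.*\bMessageDrivenBean\b/
end
puts offenders unless offenders.empty?
```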

24 February 2009

equals and hashCode Generation

Recently we discussed equals() and hashCode() implementations. A proper implementation is not trivial, as "father" Bloch showed us years ago. Back in 2004 I wrote a plugin for Eclipse 2 to generate these methods. (Unfortunately I never managed to publish it. I know, I am weak ;-) My home-grown solution would produce something like
public int hashCode() {
  long bits;
  int result = 17;
  result = 37 * result + (aBoolean ? 1231 : 1237);
  result = 37 * result + (int)
    ((bits = Double.doubleToLongBits(aDouble)) ^ (bits >> 32));
  result = 37 * result + (int) (aLong ^ (aLong >> 32));
  result = 37 * result + anInt;
  if (anObject != null) {
    result = 37 * result + anObject.hashCode();
  }
  if (anArray != null) {
    result = 37 * result + anArray.hashCode();
  }
  return result;
}

public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  else if (obj == null || getClass() != obj.getClass()) {
    return false;
  }
  final Homegrown o = (Homegrown) obj;
  return (aBoolean == o.aBoolean &&
         aDouble == o.aDouble &&
         aLong == o.aLong &&
         anInt == o.anInt &&
         (anObject == o.anObject ||
           (anObject != null && anObject.equals(o.anObject))) &&
         (anArray == o.anArray ||
           (anArray != null && anArray.equals(o.anArray))));
}
I know, I know, the implementation for arrays is most likely not what you want. (Did I say that I am weak? :-) Since Java 5 one could use the java.util.Arrays class to fix it. Nevertheless, it served me well for some years. There are several other plugins for Eclipse, and Scott McMaster wrote about them in 2006. Since version 3.3 (Europa) Eclipse can finally do it on its own:
public int hashCode() {
  final int prime = 31;
  int result = 1;
  result = prime * result + (aBoolean ? 1231 : 1237);
  long temp;
  temp = Double.doubleToLongBits(aDouble);
  result = prime * result + (int) (temp ^ (temp >>> 32));
  result = prime * result + (int) (aLong ^ (aLong >>> 32));
  result = prime * result + anInt;
  result = prime * result +
                   ((anObject == null) ? 0 : anObject.hashCode());
  result = prime * result + Arrays.hashCode(anArray);
  return result;
}

public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (obj == null) {
    return false;
  }
  if (getClass() != obj.getClass()) {
    return false;
  }
  final Eclipse33Java5 other = (Eclipse33Java5) obj;
  if (aBoolean != other.aBoolean) {
    return false;
  }
  if (Double.doubleToLongBits(aDouble) !=
      Double.doubleToLongBits(other.aDouble)) {
    return false;
  }
  if (aLong != other.aLong) {
    return false;
  }
  if (anInt != other.anInt) {
    return false;
  }
  if (anObject == null) {
    if (other.anObject != null) {
      return false;
    }
  }
  else if (!anObject.equals(other.anObject)) {
    return false;
  }
  if (!Arrays.equals(anArray, other.anArray)) {
    return false;
  }
  return true;
}
Using the ternary operator, the hashCode method gets quite compact, but equals is a bit too verbose for my liking. IntelliJ IDEA has always been able to generate these methods. IDEA 7.0 creates something like
public int hashCode() {
  int result;
  long temp;
  result = (aBoolean ? 1 : 0);
  temp = aDouble != +0.0d ? Double.doubleToLongBits(aDouble) : 0L;
  result = 31 * result + (int) (temp ^ (temp >>> 32));
  result = 31 * result + (int) (aLong ^ (aLong >>> 32));
  result = 31 * result + anInt;
  result = 31 * result +
                (anObject != null ? anObject.hashCode() : 0);
  result = 31 * result +
                (anArray != null ? Arrays.hashCode(anArray) : 0);
  return result;
}

public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  Idea70Java5 original = (Idea70Java5) o;
  if (aBoolean != original.aBoolean) {
    return false;
  }
  if (Double.compare(original.aDouble, aDouble) != 0) {
    return false;
  }
  if (aLong != original.aLong) {
    return false;
  }
  if (anInt != original.anInt) {
    return false;
  }
  if (anObject != null ? !anObject.equals(original.anObject) :
                         original.anObject != null) {
    return false;
  }
  // Probably incorrect - comparing Object[] with Arrays.equals
  if (!Arrays.equals(anArray, original.anArray)) {
    return false;
  }
  return true;
}
Typical IDEA: a little fix for +0.0/-0.0 and a warning concerning Arrays.equals, but otherwise much the same. In fact, all these implementations suck (including my own, which sucks most). All these result = prime * result ... and if ... return false; lines are definitely not DRY. I always favour the Apache Commons Lang builders. A hand-coded solution using them would look like
public int hashCode() {
  return new HashCodeBuilder().
         append(aBoolean).
         append(aDouble).
         append(aLong).
         append(anInt).
         append(anObject).
         append(anArray).toHashCode();
}

public boolean equals(Object other) {
  if (this == other) {
    return true;
  }
  if (other == null || getClass() != other.getClass()) {
    return false;
  }
  ApacheCommons o = (ApacheCommons) other;
  return new EqualsBuilder().
         append(aBoolean, o.aBoolean).
         append(aDouble, o.aDouble).
         append(aLong, o.aLong).
         append(anInt, o.anInt).
         append(anObject, o.anObject).
         append(anArray, o.anArray).isEquals();
}
Well, that's much shorter, isn't it?

20 January 2009

Generic Build Server Notification Tray

I like to be notified immediately when our build fails (or at least before someone notices and tells me that it's my fault :-). I really have to know. In fact I am kind of paranoid about it. Ideal would be some kind of system tray notifier, like the one TeamCity has.

Scituate Lighthouse
Some years ago at Herold we were using Anthill OS, which is nice but minimalistic and did not offer any notifications. So I made a tray notifier myself using the system tray functionality of Java SE 6. The notifier polled the configured build server's status page and used regular expressions from a property file to parse it. If a build changed to red, a little popup was shown. Later I was using CruiseControl, and the only thing I was able to find was the CruiseControl-Eclipse-Plugin on Google Code. That's quite cool stuff, but I needed something that popped up in my face when the build was red.

So here is my Generic Build Server System Tray Notifier. After unpacking the zip you have to create a startup script or link that executes the jar, looking like <path to java 6>\bin\javaw.exe -jar BuildServerSystemTray.jar <path to config>. (The source is included in the zip.)

The notifier is generic and needs a properties file. The zip contains sample configurations for Anthill OS 1.7, CruiseControl 2.3 and Hudson 1.2. You will have to customise the configuration; at least the build server URL (server.url) has to be set accordingly. It should be easy to create configurations for other build servers, just set a proper value for the status.pattern property. This property defines a regular expression matching the whole information about a build: the project name, success or failure, and the build time. The regex group indices status.name.group, status.value.group and status.date.group must be set accordingly.
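A configuration might then look something like this (the property names are from the description above, but the URL, the pattern and the group numbers are invented for illustration and will not match any real build server):

```properties
# hypothetical sample - adapt URL and pattern to your build server's status page
server.url=http://buildserver:8080/status.html
# regex matching one build row: project name, result and build time
status.pattern=<tr><td>(\\w+)</td><td>(passed|failed)</td><td>([^<]+)</td></tr>
status.name.group=1
status.value.group=2
status.date.group=3
```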

Q: Checking out the source, why is the package at.kugel used as the top-level namespace instead of org.codecop? A: Kugel was my coder pseudonym in the eighties when I started coding on the Commodore 64. I use it from time to time when I'm feeling retro. Kugel is German for ball or sphere. The name was coined by my first coding buddy because I was quite overweight.