Unfortunately, checking Java code for compliance with a given software architecture is not done on every project; as far as I know, only a few experts do it. This is not good, because "if it's not checked, it's not there". Free tool support for enforcing architectural rules has long been lacking, but recently things have changed. Well, maybe not recently, but recently I thought about it :-)
Macker
There have always been a few basic tools to check the references of classes. One of these tools is Macker. It's quite old and not actively maintained any more, but I've been using it for years and it has worked great for me. It's small and simple, yet immensely powerful. I love it and use it whenever possible. Everybody should use it. And don't turn it off!
A Shameless Plug
Macker and some other tools that can be used to enforce architecture rules are described in the fourth part of my 'Code Cop' series, published in the German magazine iX last summer: Automatisierte Architektur-Reviews (Automated Architecture Reviews) (iX 6/2010).
Architecture Reviews with Ant
The article describes several free tools (PMD, Macker and Classycle) and how to use them with Ant to verify different aspects of an architecture. Each of them has its advantages and disadvantages, and they are most useful when used together. The examples of the Ant integration and the PMD/Macker/Classycle architecture rules given in the article should help you get started with your own checks.
Macker and Maven
As I said above, Macker comes with proper Ant support, but Maven integration has been lacking. Last year I wanted to use Macker on a Maven project but was disappointed by the immaturity of the MackerMavenPlugin available on Codehaus. So I had to enhance it a bit. I submitted some patches which got accepted, but the MOJO still wouldn't get promoted out of its sandbox state. Fortunately I keep an unofficial release (0.9) in my own Maven repository, so proper Maven integration of Macker is finally available.
Other Tools
I guess there are other tools available to be used with Maven. For example there is Architecture Rules with its maven-architecture-rules-plugin, which uses JDepend under the hood. It looks promising, but I haven't used it in production yet. I hear that Sonar 2.4 is able to check architecture rules as well, but I haven't tried it yet.
(List of all my publications with abstracts.)
28 December 2010
16 December 2010
ASUS Eee PC and TRIM
Last year, after more than ten years of commuting, I got tired of reading on the train. I wanted to make better use of the time and got myself one of these small sub-notebooks. I chose an ASUS Eee PC S101. Although it's not very powerful, it is able to handle small Eclipse projects. It's a slick device and I love it.
The Problem with SSDs
It contains a ridiculously small SSD hard drive, an "ASUS-JM S41 SSD". Recently, after the drive had been full for the first time, disc performance degraded catastrophically. Whenever the disc was accessed the computer would freeze for one or two seconds. The whole device became totally unusable. I was already fearing the worst.
TRIM to the Rescue?
When searching the web I learned that all SSDs share the same problem of free space management, which is fixed by the TRIM command. TRIM is a standard command defined by the (S)ATA specification. Unfortunately Windows 7 is the first Windows version to make use of TRIM and there are no plans to port it back to earlier versions. (I also found references to WIPE, but I don't know if it's a command some drives implement or just another name for the process of trimming.)
Vendor Tools
Some SSD vendors have noticed the need for SSD TRIM and provide tools of their own. Some vendors provide firmware upgrades, like OCZ. Others offer special tools. For example there is a tool called wiper.exe provided for the G.SKILL Falcon drives and maybe some other drives by G.SKILL. Unfortunately wiper crashes when run on the Eee PC. Intel offers its own Intel SSD Toolbox, but the TRIM option is not available when run on the Eee PC. These two were the only tools supporting TRIM that I could find. Bad luck here too.

I could not believe it. I didn't own the first SSD on this planet. How was I going to fix it? Format the SSD and install the OS all over? Not if I could help it. Probably I had not searched the web long enough...
From 0x00 to 0xFF
One entry in a forum suggested that Piriform CCleaner's Secure Wipe trims the disc. Well, it doesn't, but it seems that some SSDs reclaim a block when it's filled with some data, and that's what Secure Wipe is doing: it overwrites all empty blocks. Someone has written a little program to do exactly that: "AS FreeSpaceCleaner with FF" (aka "AS Cleaner 0.5") is meant to work around not having TRIM and is a generic wiper that works on any drive. It creates a big file and uses it to write zeros into all empty blocks. It has one option, to use 0xFF instead of 0x00 to fill these blocks. Some forum entries suggested that people have successfully used the 0xFF option to trim their SSDs.
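The idea behind such a wiper is simple enough that a little sketch shows it. This is just my illustration of the approach in Java, not the actual tool (the file name and buffer size are arbitrary): keep appending a fill pattern to a temporary file until the drive reports it is full, then delete the file again.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class FillFreeSpace {

    public static void main(String[] args) throws IOException {
        byte[] block = new byte[1024 * 1024];   // 1 MiB fill buffer
        Arrays.fill(block, (byte) 0xFF);        // the "FF" option; 0x00 would work the same way
        File dummy = new File("C:\\wipe.tmp");  // temporary file on the drive to be cleaned
        FileOutputStream out = new FileOutputStream(dummy);
        try {
            while (true) {
                out.write(block);               // keep writing until the disc is full
            }
        } catch (IOException discFull) {
            // no space left - every previously free block has been overwritten once
        } finally {
            out.close();
            dummy.delete();                     // give the space back
        }
    }
}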
Finally Whole Again
The short story is that I managed to restore performance of the SSD in my Eee PC using FreeSpaceCleaner.exe. The long story is as follows:
- First I did a BIOS upgrade. Disc access might benefit from a newer BIOS. I'm not sure if it's part of the solution, but now that it's done, it's done.
- Then I reduced disc writes as much as possible. I turned off the index service, removed the swap file and disabled the recording of the last file access. This is not related to restoring SSD performance, but it's supposed to keep it performing a bit longer.
- After that I uninstalled all programs which I didn't need and used CCleaner to remove all temporary crap. I think it's vital to have as much free space as possible so the SSD doesn't run out of "clean" blocks too soon. Some forum entries suggested that it's beneficial to have at least 50% of the disc free.
- In the end I used FreeSpaceCleaner with the FF option to wipe the SSD and it worked! At least it did something, as SSD performance definitely improved after using it, but I doubt it was able to do a full TRIM on the disc because I have to use it quite often.

So with FreeSpaceCleaner the problem was solved. (Download FreeSpaceCleaner with FF)
15 December 2010
Code Quality Assurance v2
Last week I gave my presentation on code quality assurance, for the second time this year, and it looks like it will become an integral part of the lecture. That would be great. Maybe one or another soon-to-be developer will get interested in the ways of craftsmanship and not become a Duct Tape programmer.
I was quite nervous at the beginning of the presentation, as I had never spoken in front of so few people :-) Honestly, giving it for the second time helped me a lot, and I was much more relaxed and didn't hide behind my laptop most of the time. There were few students, but they asked clever questions at the end. Only one student complained about too much mathematics being needed to solve the prime factors kata. Well, it's not that complicated, but maybe I will try the word wrap kata next time.
Dear Students
There is a list of references at the end of the slides, only a few but you should read them. Go ahead, read them now! I will wait here. If you are desperate you might listen to the recording of last year's quality assurance presentation. It's missing the demos, but my explanations should give you a general idea of what's going on. Good luck and don't succumb to the dark side!
26 November 2010
Devoxx 2010
Devoxx took place in Antwerp from the 17th to the 19th of November 2010. Here are some random notes about the sessions I attended. I should have tweeted them right away, but I didn't, so a post will have to do.
Day 1
At the beginning Stephan gave us a warm welcome. He was as motivated as ever. :-) The following keynote on SE 7 by Mark Reinhold was nice, but he was telling the same old story about what would be shipped in Java 7. I'm tired of hearing it. Cut the crap! Ship it! Just ship anything!
The first session was held by Heinz Kabutz, about mad reflection. After years of reading his newsletters it was nice to see him in person. The title of his talk was very fitting: madness! Thank you Heinz for this "mad" talk. It was very informative.
Joshua Bloch's performance talk was so full that the organisers closed the room 15 minutes before the session started. I did not make it in :-( So I had to change my plans and went to the "new stuff in Scala 2.8" presentation instead. Bill Venners gave many real code examples, but I didn't comprehend most of them. That's probably my fault; it's been some time since I had a look into their book. (@Devoxx Team: Maybe you could organise some kind of dynamic overflow room for sessions of well known speakers next time, just to be safe?)
In the evening I wanted to attend the BOF about the state of Java SE 7, but the room was totally full and there was no fresh air. I started getting a headache and had to leave. Later that evening I took my first steps in Android development during the Visage BOF. Visage turned out to be cool stuff. Unfortunately the setup of the development environment took too much time. In the end Stephen Chin went through a working example in only a few minutes.
Day 2
The second day started with a presentation about the future of Java EE: "Bla bla cloud bla bla virtualisation bla bla service lookup bla bla more flexible bla bla specification bla bla". Hell, this talk was boring! Fortunately the following talk by Kito Mann about GTD made up for it.
Later George Reese talked about how to operate at the cloud scale. His talk was a bit abstract, but contained nice slides. Just my style of presentation :-)
During the lunch break I listened to a quickie by Costin Leau about @Inject. He made it very fast paced. It was a good overview for anyone who already knew dependency injection. I wish more presentations about technologies would be that compact. Well done!

After lunch Kirk Pepperdine showed how to extend VisualVM. It seemed easy. It would be great to use it for a combined view of all our monitored applications. Maybe I will give it a try.
Then there was the JavaPosse. It was horribly boring. I've never listened to them before; it's just not my style of humour. So I left the room to hear a presentation about HBase at Facebook by Jonathan Gray, who gave insights into a very large system. Especially the questions (in fact the answers) revealed interesting details: distribution using BitTorrent and deployment to a cluster of self-made app servers which are all monitored with JMX. Cool, cool, cool.
William Pugh, the creator of FindBugs, presented some valid points about defects, e.g. "finding defects in code is so easy". He told interesting war stories about bugs at Google, making it a practical presentation.
I had not made it into Joshua Bloch's first talk but at least managed to get a decent seat for his puzzler talk. And I got the first puzzler right! Oops, I hadn't known that EnumMap's entrySet() is that weird. Again, I hadn't anticipated the regular expression backtracking, although the pattern had looked weird to me. No - 1:3 for the puzzler brothers. Finally I got one right, well, most people did. And I got the last one right, too. Only few people noticed the lowercase el. Final score 3:3!

In the evening I attended the NoSQL BOF, my first real BOF. Everybody was very friendly and I enjoyed listening to real world users of NoSQL. But in my opinion the committers and heavy users were unfair in complaining about JVM memory management problems. It's obvious that data stores need a lot of memory. Thank you guys for sharing some thoughts with a NoSQL newbie like me.
Day 3
The third day started with a discussion panel about the future of Java. It was quite informative. The Paris JUG leader said that Java user groups are in fact JVM user groups. That is true, and it indicates the change in the Java ecosystem. Another interesting bit was the notion of comprehending <? super T> being the dividing line between journeymen and experts. This reminds me of Joel Spolsky's old post about the "pointers business". Are bounded wildcards Java's "pointers"?

Then we had a very fast paced vanilla Adam Bien. He demoed all the good things one is able to do with Glassfish in Java EE. He did some "no risk no fun"-style live coding with a cinematic version of the Java Pet Store including aliens and predators. He had almost no slides. It was hilarious :-).
Last but not least, my favourite session was the "boilerplate busters", i.e. Project Lombok. Thank you guys for your work to get rid of ugly code. I will definitely start using Lombok at once. And thank you for making it such an entertaining presentation. You guys are great! It's a pity that there are no pictures of this talk.
Conclusion
Devoxx 2010 was a great conference. 12 out of 17 talks were excellent and I managed to loot some t-shirts from the exhibition. It was just a bit too full, but Stephan promised that they would close registration sooner next year. So I will definitely visit Devoxx again in 2011.
Disclaimer
This post kinda sucks because I wrote it on my Android on the plane during the flight home, and on the train, and on the subway ...
23 October 2010
Concepts of Functional Programming
Last week I had the pleasure of giving a presentation at Javaabend in Vienna. Javaabend (German for "Java evening") is a local Java user group event organised by openForce Information Technology at irregular intervals.
A Little Rant
This presentation has a long history, so I will start with a little rant. Last year, when I started playing around with Scala, we (read: some enthusiastic employees) formed an informal study group to have a look at functional languages and Scala in particular. In the beginning we made good progress, had quite some fun and met biweekly. Unfortunately the organisation had a strange attitude towards training (as well as towards public relations) and we were disbanded. Being stubborn as I am, I managed to establish a budget from "another source" after some time and started preparing this presentation for the monthly "Developer Round Table". The presentation was postponed several times and in the end I left the company for good.
Scope of Presentation
Now let's come back to the presentation. Talking about the principles of functional programming is a bit off-topic for the Code Cop, and it just scratches the surface of the core principles: purity, higher order functions, closures, currying, continuations and (well, not really) monads. I'm no expert on functional programming, so feel free to comment with corrections or clarifications. Especially the concept of monads is still a bit mysterious to me.
Slides
Download the Concepts of Functional Programming slides. As usual the slides are not very useful without my explanations because they consist entirely of single words and/or images. This is my take on the current presentation style. I received some good feedback on the style and especially the images. One attendee even told me that the images were "too good" for him; he was distracted by them. (Thank you, Flickr community, for all these wonderful CC licensed images.)
Discussion
After the presentation there was an interesting discussion about the advantages of functional programming over the imperative style of, e.g., Java:
- Is it easier to get things done with many lines of simple, imperative code, compared to one line of functional code (that most definitely does not look simple when you are new to the area)?
- Are the functional paradigms more difficult to comprehend? Is this the reason that functional programming isn't as widespread as the imperative one? Would the average developer produce bad quality code when using functional languages?
Resources
- Slava Akhmechet explains functional programming for the rest of us. This is a nice overview and definitely worth reading.
- Tony Morris posted the slides of his talk on what functional programming means.
- Some code examples were taken from Ruby Blocks, Closures, and Continuations.
- A good resource for all concepts is Wikipedia.
Researching the core principles of functional programming was part of a System One Research Day.
17 October 2010
Android Browser Bookmarks
In my previous post I already mentioned my Android phone. Soon after I got it, I wanted to import my bookmarks from my desktop PC into its standard browser. So, being green, I copied my favourites to the sdcard and tried to open them. Well, it didn't work.
Working Around
If I lived in "Google-land", I could use Google Bookmarks. But I don't and had to look for alternatives. After some searching I found a possible way to import bookmarks:
- Upload your single html file of exported bookmarks to a document on Google Docs.
- In your browser navigate to said page.
- Manually click on each link and save as bookmark.
Another suggestion was to import the bookmarks directly into the browser's database. That would be cool. But I'm not bold enough to root my new phone and "play with that, probably break it a bit - and then cry about it later." (Ward from androidcommunity.com)
My Solution
In several places I read hints that some Android apps were able to import bookmarks, but I couldn't find any. Instead I found Matthieu Guenebaud's Bookmarks Manager. It's able to back up and restore the browser bookmarks and uses a plain zip file to store them.
Viewing .ZIP: Bookmarks_2010-09-10_14-11-24.zip

 Length  Method    Size  Ratio  Date        Time   Name
 ------  ------   -----  -----  ----        ----   ----
   2273  DeflatN    710  68.8%  09.10.2010  2:11p  bookmarks.xml
    526  DeflatN    531   0.0%  09.10.2010  2:11p  23.png
    764  DeflatN    769   0.0%  09.10.2010  2:11p  24.png
    326  DeflatN    331   0.0%  09.10.2010  2:11p  51.png
    684  DeflatN    689   0.0%  09.10.2010  2:11p  57.png
    239  DeflatN    238   0.5%  09.10.2010  2:11p  69.png
    541  DeflatN    546   0.0%  09.10.2010  2:11p  90.png
   1266  DeflatN   1271   0.0%  09.10.2010  2:11p  198.png
    490  DeflatN    495   0.0%  09.10.2010  2:11p  164.png
    304  DeflatN    309   0.0%  09.10.2010  2:11p  124.png
    408  DeflatN    413   0.0%  09.10.2010  2:11p  229.png
 ------           -----  -----                     ----
   7821            6302  19.5%                       11

The file bookmarks.xml has a simple XML structure of <bookmark>s inside a <bookmarks> element. Yes, that's something I can use.
- Backup the bookmarks, even if empty, to get an initial file.
- Unpack the archive.
- Insert bookmarks into the existing XML structure.
- Repack the archive.
- Restore from the modified zip file.
- Enjoy your new wealth of bookmarks.
To avoid editing bookmarks.xml by hand, here is some Scala code:

val bookmarkXml = scala.xml.XML.loadFile(targetFolder + "/bookmarks.xml")
val lastOrder = Integer.parseInt((bookmarkXml \\ "order").last.text)
val oldNodes = bookmarkXml \\ "bookmark"
val newNodes = to_xml_list(bookmarks, lastOrder + 1000)
val root = <bookmarks>{ oldNodes }{ newNodes }</bookmarks>
scala.xml.XML.save(targetFolder + "/bookmarks.xml", root, "UTF8", true, null)

Thanks to Scala's excellent XML support, reading and modifying the bookmarks file is easy. The method to_xml_list() iterates all favourites and creates an XML fragment for each one using the following method.

def to_xml(bm: Favorite, order: Int) = {
  <bookmark>
    <title>{ bm.name }</title>
    <url>{ bm.url }</url>
    <order>{ order }</order>
    <created>{ bm.fileDate.getTime }</created>
  </bookmark>
}
Favorite is a class representing an Internet Explorer favourite that I wrote long ago. (Yeah baby, code reuse!) The value order is the number Bookmark Explorer uses for sorting bookmarks. See the complete source of GuenmatBookmarks.scala.

14 October 2010
Digressions
Two months ago I got a brand new Android phone from my employer. I asked for it so I could play around with it and get used to Android development, and bingo - it was here :-) This is a major event compared to my prior experiences with employers. But I'm digressing.
So I've been using it for two months. I'm still not sure if it's good for people who just want to talk or text, but it's definitely great for accessing the internet on the go. So I started catching up with my backlog of articles in Google Reader. Currently I'm back in January 2010, so there is hope that the folder with articles to read will be empty some day. (Alas, it's just one of four, so it looks like I will stay busy for some more months.)
Today I came across Ted Neward's Predictions. I like to read Ted's posts because he doesn't take himself too seriously. I especially enjoyed his busy Java developer's guide to Scala two years ago, but I'm digressing again. In his predictions he talked about different technologies and some well known companies. I will not repost his post, but he made some quite sarcastic and funny remarks that I need to share.
- "Cloud" will become the next "ESB" or "SOA", in that it will be something that everybody will talk about, but few will understand and even fewer will do anything with. --- Well another one in the list of useless buzzwords that lacks a clear definition. Last year I was working on some enterprise integration project and the Czech bank was using AquaLogic ESB. I asked our rock star enterprise architects what exactly defines an ESB. They were not able to give a proper answer. According to the German Wikipedia the term ESB was defined by Gartner in 2002. So it's no wonder that ESB is not proper defined.
- Being "REST"ful will equate to "I did it myself!"
- Agile has become another adjective meaning "best practices", and as such, has essentially lost its meaning. --- What does best practice mean anyway? Most likely it is a cookbook, a collection of things that work. According to Andy Hunt, best practices are for people who need guidelines. But these guidelines never capture the whole context of the problem. And Ted really hates best practices.
- Try this: walk into a functional language forum, and ask what a monad is. Nobody yet has been able to produce an answer that doesn't involve math theory, or that does involve a practical domain-object-based example. In fact, nobody has really said why (or if) monads are even still useful. --- When diving into Scala two years ago, I tried to figure out monads. James Iry did a decent job explaining what they do, but I still have no idea what they are or why I should use them. (But probably that's different for pure languages like Haskell.)
5 September 2010
Maven Plugin Testing Tools
Obviously your MOJOs (read: Maven plugins) need to be tested as thoroughly as all your other classes. Here are some details on using the Maven Plugin Testing Tools and how I used them for the Macker Maven Plugin.
Unit Testing
To unit test a MOJO, create a JUnit test case and extend AbstractMojoTestCase. It comes with the maven-plugin-testing-harness:

<dependency>
    <groupId>org.apache.maven.plugin-testing</groupId>
    <artifactId>maven-plugin-testing-harness</artifactId>
    <version>1.2</version>
    <scope>test</scope>
</dependency>

Note that version 1.2 of the maven-plugin-testing-harness is still Maven 2.0 compatible, as will be version 1.3. The alpha release of version 2.0 is already based on Maven 3. In the test case the MOJO is initialised with a pom fragment and executed. The basic usage is well documented in the Maven Plugin Harness documentation. The usual folder structure for plugin unit tests is

.
|-- pom.xml
\-- src
    |-- main
    \-- test
        |-- java
        |   \-- MojoTest.java
        \-- resources
            \-- unit
                |-- test-configuration1
                |   \-- resources for this test if any
                |-- test-configuration1-plugin-config.xml
                \-- test-configuration2-plugin-config.xml

The class MojoTest contains the methods testConfiguration1() and testConfiguration2(). There are several Maven projects using this approach, just do a code search.
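To make this more concrete, here is a rough sketch of such a test. It is only an illustration: the goal name and the configuration file are made up, and lookupMojo() and getTestFile() are the harness methods as I remember them, so check the Maven Plugin Harness documentation for the details.

import java.io.File;
import org.apache.maven.plugin.Mojo;
import org.apache.maven.plugin.testing.AbstractMojoTestCase;

public class MyMojoTest extends AbstractMojoTestCase {

    protected void setUp() throws Exception {
        super.setUp(); // initialises the Plexus container used by the harness
    }

    public void testConfiguration1() throws Exception {
        // pom fragment that configures the MOJO for this test
        File pom = getTestFile("src/test/resources/unit/test-configuration1-plugin-config.xml");
        // look up the MOJO for the (made-up) goal "mygoal" as configured in that fragment
        Mojo mojo = lookupMojo("mygoal", pom);
        assertNotNull(mojo);
        // execute it and assert on whatever the MOJO is supposed to produce
        mojo.execute();
    }
}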
Stubs
If your MOJO needs more complex parameters, e.g. a MavenProject or an ArtifactRepository, these have to be provided as stubs. Stubs are simple implementations of real Maven objects, e.g.

public class org.apache.maven.plugin.testing.stubs.ArtifactStub
   implements org.apache.maven.artifact.Artifact

and they have to be configured in the pom fragment (test-configuration1-plugin-config.xml) as described in the Maven Plugin Testing Harness Cookbook. See also MackerMojoTest in the Macker Maven Plugin for an extensive example. There are several stubs to simulate Maven objects such as ArtifactHandler or ArtifactResolver. Creating stubs gets cumbersome when you need more of Maven's internals. Every object and every method has to be stubbed out.
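A custom stub is usually just a subclass of one of the stubs shipped with the harness, with a few getters overridden. A made-up sketch (the class name and the overridden method are only an example):

import java.util.Collections;
import java.util.List;
import org.apache.maven.plugin.testing.stubs.MavenProjectStub;

public class SimpleProjectStub extends MavenProjectStub {

    // return a fixed source folder instead of reading a real pom
    public List getCompileSourceRoots() {
        return Collections.singletonList(getBasedir() + "/src/main/java");
    }
}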
Integration Testing
Integration testing is done with the maven-plugin-testing-tools:

<dependency>
    <groupId>org.apache.maven.plugin-testing</groupId>
    <artifactId>maven-plugin-testing-harness</artifactId>
    <version>1.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.maven.plugin-testing</groupId>
    <artifactId>maven-plugin-testing-tools</artifactId>
    <version>1.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.codehaus.plexus</groupId>
    <artifactId>plexus-utils</artifactId>
    <version>1.5.6</version>
    <scope>test</scope>
</dependency>

Note that the testing tools 1.2 specifically need plexus-utils version 1.5.6. The usual folder structure for plugin integration tests is

.
|-- pom.xml
\-- src
    |-- main
    \-- test
        |-- java
        |   \-- MojoIT.java
        \-- resources
            \-- it
                |-- test-case1
                |   |-- pom.xml
                |   \-- main
                |       \-- classes and data for this test
                \-- test-case2

The class MojoIT contains the test methods testCase1() and testCase2(). Folder test-case1 contains a full Maven module that uses the MOJO under test in some way.
The test uses PluginTestTool, BuildTool and the other tools from maven-plugin-testing-tools to create a local repository, install the Maven module under test into it and execute the Maven build of test-case1/pom.xml. The Plugin Testing Tools site provides more detail about this process and the various tools.
Setup
The Plugin Testing Tools do not provide an abstract test case. Each test has to create its own AbstractPluginITCase. A good example is the AbstractEclipsePluginIT of the Eclipse Plugin. It contains methods to build the module, execute poms against it and verify created artefacts. As far as I know this is the only example available. AbstractOuncePluginITCase is a modified copy, as is AbstractMackerPluginITCase.
Execution
Integration tests should be executed in the integration-test phase.

<build>
    <plugins>
        <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <executions>
                <execution>
                    <phase>integration-test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                    <configuration>
                        <includes>
                            <include>**/*IT.java</include>
                        </includes>
                        <excludes>
                            <exclude>specified only to override config from default execution</exclude>
                        </excludes>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

The PluginTestTool modifies the pom.xml to skip all tests, so the tests are not invoked again when the module is built by the integration test.
Help Mojo Workaround
Unfortunately there is a problem with the above approach. It works fine for modules which do not contain Maven plugins, but it fails during integration test preparation when the help-mojo of the maven-plugin-plugin is executed. Fortunately the guys from the Eclipse Plugin found a workaround:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-plugin-plugin</artifactId>
    <!-- lock down to old version as newer version aborts build upon no mojos as required during ITs -->
    <version>2.4.3</version>
    <executions>
        <!-- disable execution, makes IT preparation using maven-plugin-testing-tools fail
             (see target/test-build-logs/setup.build.log) -->
        <execution>
            <id>help-mojo</id>
            <configuration>
                <extractors>
                    <extractor />
                </extractors>
            </configuration>
        </execution>
    </executions>
</plugin>

But now the plugin can't be built from scratch any more. So they used a profile to run the integration tests and disable the help-mojo execution.

<profiles>
    <profile>
        <id>run-its</id>
        <build>
            <plugins>
                <plugin>
                    <artifactId>maven-surefire-plugin</artifactId>
                    ...
                </plugin>
                <plugin>
                    <artifactId>maven-plugin-plugin</artifactId>
                    ...
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>
Wrap-up
So far so good. The BuildTool has to activate the profile run-its (as it has to skip the help-mojo execution). This could be done by setting a certain property, let's call it ProjectTool:packageProjectArtifact. Then the profile would only be activated during integration test preparation.

<profiles>
    <profile>
        <id>run-its</id>
        ...
        <activation>
            <property>
                <name>ProjectTool:packageProjectArtifact</name>
            </property>
        </activation>
    </profile>
</profiles>

I've submitted a patch for that, but in the meantime I had to copy the BuildTool into my own plugin, ugh. (I was working towards a clean solution throughout this post, but in the end it all gets messed up.) The whole plugin testing can be seen in action in the Macker Maven Plugin.

Acknowledgement
Experimenting with the Maven Plugin Testing Tools was part of a System One Research Day. Thank you System One for supporting Open Source :-)
Labels: integration testing, Java, Maven, MOJO, unit testing
29 August 2010
Productivity Tip: Folder Names
I want to be productive. I like the feeling of GTD. I believe that even small things make a difference in productivity, for example assigning a keyboard shortcut to the calculator application. I don't use the calculator very often, but when I do, I have this warm and cosy feeling that I saved one or two seconds opening it. I'm always assigning keyboard shortcuts. I have been doing it since the early days of Windows 3.1.
One trick I found recently is to name folders so that each begins with a different letter. For example, some time ago my main work folder contained the subfolders article, code, community, posts, presentation and resource. To speed up folder switching I renamed them to article, blog, community, develop, presentation and resource. Now all folders start with a different letter, and each folder is uniquely accessible by pressing a single key in Explorer or any navigator. The same is true for drives.

Finding names can be difficult. They should describe the files inside them. If I don't find a proper synonym or a word in a different language, I don't change the name.
18 June 2010
GeeCON 2010 in Poznan
I'm quite late with this post; most people have already commented about GeeCON. Well, my posts are never real time. The fact that I was working on my "Code-Cop's Codes" project, converting all my repositories to Mercurial and writing a presentation on knowledge management tools used in software development meant I couldn't get it done earlier. ;-) Still, I definitely want to share my experience of visiting GeeCON 2010, which took place in Poznan over a month ago.
The quality of the presentations was excellent. I really liked all of them. This was the most important thing to me because I went there to see the presentations. I didn't care whether the Wi-Fi was slow or if the food was not particularly tasty. I attend conferences to see new things. (Well the food was nothing special, but who cares?) I have attended other conferences before and there were usually a few presentations that were boring or not very good. However, this was not the case with GeeCON! I was pleasantly surprised. Kudos to the GeeCON team and all the speakers. You did a great job.
Presentations
I will not go into detail about the presentations, but there are some things that are definitely worth mentioning:
- Stephan Herrmann talked about Object Teams. It looked interesting, and seemed to be a mixture of anaemic service graphs, rich domain models and aspects. (Stephan, please forgive me for that noob explanation :-)) Fortunately, Stephan is able to attend our upcoming Eclipse DemoCamp. I am looking forward to hearing a more in-depth explanation and getting hands-on experience.
- Staffan Nöteberg explained the Pomodoro Technique. Despite the fact that it was about a technique that can be used to cope with interruptions, his presentation was hilarious. I was roaring with laughter when he pulled a Teletubby out of his rucksack to represent the project manager. Unfortunately I haven't been able to find a pomodoro kitchen timer...
- Dawid Weiss from Poznan University of Technology gave us insights into Java in high-performance computing. Instead of slides he used one huge image to show all his content and moved, panned and zoomed around the entire presentation. This was quite a dynamic way to do a presentation.
- Towards the end of GeeCON, Bruno Bossola talked about object orientation for non-believers. Why for non-believers? This was because he was mocking us (the audience) all the time, which earned him a few laughs. He was really funny, and he was right: Persistence and frameworks are not that important. However, requirements and domain models are. In a nutshell, proper object oriented analysis and design are relevant. We have forgotten what OO really stands for.
18 May 2010
Upcoming Eclipse DemoCamp
After all the work I put into last year's DemoCamp, I promised myself I wouldn't organise another one in the near future. Well, it didn't work out like that. Michael persuaded me to organise another one. So the second Vienna Eclipse DemoCamp will take place at the end of June.
Show Me Code - No Slides - No Suits ;-)
We plan to have a different style this year and want to create a more in depth and personal experience. Instead of fancy slides we want to do some live coding, guided by the presenter's vast knowledge of the topic. Hopefully things will run as smoothly as they did last year.
7 May 2010
Custom PMD Rules
Last month I submitted my fourth article in the 'Code Cop' series. After Daily Build, Daily Code Analysis and Automated Testing, I wrote about checking architecture rules. One approach described there uses PMD. In case you don't know PMD: it's a static code analysis tool for Java. It was first released in 2002 and is updated regularly. It checks the abstract syntax tree (AST) of Java code. (PMD is a great tool. You should definitely use it!)
PMD vs. AST (I just love TLAs)
PMD comes with many predefined rules, some of them real life-savers, e.g. EmptyCatchBlock. It's possible to define your own rules. The easiest way to do this is to use XPath expressions to match patterns in the AST. For example, the following trivial class

package org.codecop.myapp.db;

import java.sql.Connection;

public class DBSearch {
   ...
}

is represented inside PMD by the following AST:

+ PackageDeclaration
|  + Name:org.codecop.myapp.db
+ ImportDeclaration
|  + Name:java.sql.Connection
+ TypeDeclaration
   + ClassOrInterfaceDeclaration:DBSearch
      + ClassOrInterfaceBody
         + ...

An XPath expression to find imports from the java.sql package looks like

//ImportDeclaration[starts-with(Name/@Image, 'java.sql.')]

It is quite simple (at least when you are familiar with XPath ;-). For further details read the PMD XPath Rule Tutorial.
I've always been enthusiastic about static code analysis, and about PMD in particular, and have been using it successfully since 2004. (For example, see my very first presentation on static code analysis with PMD.) I love it; it's a great tool. I have created several custom rules to enforce consistent coding conventions and to prevent bad coding practices. In this post I am going to share some of these rules with you.
First Rule
One common bug is public boolean equals(MyClass o) instead of public boolean equals(Object o), which shows that the developer is trying (and failing) to override the equals(Object) method of Object in MyClass. It will work correctly when invoked directly with a MyClass instance, but will fail otherwise, e.g. when used inside collections. Such suspiciously close (but still different) equals methods are matched by

//ClassDeclaration//MethodDeclarator
   [@Image = 'equals']
   [count(FormalParameters/*)=1]
   [not (
      FormalParameters//Type/Name[
         @Image='Object' or @Image='java.lang.Object' ]
   )]

This rule, explained in plain English, matches all method declarations that are named equals and have one parameter whose type is neither Object nor java.lang.Object. (Object is mentioned twice because PMD analyses the source and therefore can't know about simple and fully qualified class names.) Later, Tom included this SuspiciousEqualsMethodName rule in PMD's default set of rules.
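To make the bug concrete, here is a small example of mine (not taken from PMD): the first method only overloads equals and is what the rule flags, the second one really overrides it.

public class MyClass {

    private final String name;

    public MyClass(String name) {
        this.name = name;
    }

    // suspicious: overloads equals instead of overriding it - this is what the rule flags
    public boolean equals(MyClass other) {
        return name.equals(other.name);
    }

    // correct: overrides equals(Object), so collections will use it
    @Override
    public boolean equals(Object other) {
        if (!(other instanceof MyClass)) {
            return false;
        }
        return name.equals(((MyClass) other).name);
    }

    @Override
    public int hashCode() {
        return name.hashCode();
    }
}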
Copy and Paste
The easiest way to define your own rule is to take an existing one and tweak it. Just as SuspiciousEqualsMethodName was derived from SuspiciousHashcodeMethodName, the next rule, JumbledIterator, is quite similar to JumbledIncrementer. (JumbledIterator was created by Richard Beitelmair, one of my colleagues, who appointed me "Code Cop" and presented me with my first Code Cop T-shirt.) So what's wrong with the following line?

for (Iterator it1 = iterator(); it2.hasNext(); ) { ... }

Most likely it2 should be it1, shouldn't it? Richard created the following rule to pick up these kinds of errors:

//ForStatement[
   (  ForInit//ClassOrInterfaceType/@Image='Iterator' or
      ForInit//ClassOrInterfaceType/@Image='Enumeration'
   ) and (
      ends-with(Expression//Name/@Image, '.hasNext') or
      ends-with(Expression//Name/@Image, '.hasMoreElements')
   ) and not (
      starts-with(Expression//Name/@Image,
         concat(ForInit//VariableDeclaratorId/@Image, '.'))
   )
]

A Real Environment
After the last two sections we are warmed up and finally ready for some real stuff. On several occasions I have met developers who wondered why their code Long.getLong(stringContainingANumber) would not work. Well, it worked, but it did not parse the String as they expected, because Long.getLong() is a shortcut to access System.getProperty(). What they really wanted was Long.parseLong().
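A tiny example of the difference (my own, not from the article):

public class GetLongVsParseLong {

    public static void main(String[] args) {
        // Long.getLong() looks up the *system property* named "42" - not defined, so this prints null
        System.out.println(Long.getLong("42"));
        // Long.parseLong() actually parses the string - this prints 42
        System.out.println(Long.parseLong("42"));
    }
}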
Here is the UnintendedEnvUsage rule:

//PrimaryExpression/PrimaryPrefix/Name[
   @Image='Boolean.getBoolean' or
   @Image='Integer.getInteger' or
   @Image='Long.getLong'
]

Care for Your Tests
PMD provides several rules to check JUnit tests. One of my favourite rules is JUnitTestsShouldIncludeAssert, which catches (the common) tests that do not assert anything. (Such tests just make sure that no Exception is thrown during their execution. This is fair enough, but why bother to write them and not add some assert statements to make sure the code behaves correctly?) Unfortunately, one "quick fix" for that problem is to add assertTrue(true). The rule UnnecessaryBooleanAssertion will protect your tests from such abominations.
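For illustration, a made-up example: both of the following tests would be flagged, the first because it asserts nothing at all, the second because it only asserts a constant.

import junit.framework.TestCase;

public class ParsingTest extends TestCase {

    public void testParseDoesNotBlowUp() {
        Integer.parseInt("42"); // no assertion - flagged by JUnitTestsShouldIncludeAssert
    }

    public void testParseDoesNotBlowUpEither() {
        Integer.parseInt("42");
        assertTrue(true); // asserts nothing useful - flagged by UnnecessaryBooleanAssertion
    }
}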
A mistake I keep finding in JUnit 3.x tests is not calling super in test fixtures. The framework methods setUp() and tearDown() of class TestCase must always call super.setUp() and super.tearDown(). This is similar to constructor chaining and enables the proper preparation and cleaning up of resources.
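In code, a correct JUnit 3.x fixture looks like this minimal, made-up example:

import junit.framework.TestCase;

public class MyResourceTest extends TestCase {

    private StringBuilder resource;

    protected void setUp() throws Exception {
        super.setUp();                  // let the superclass prepare its resources first
        resource = new StringBuilder("data");
    }

    protected void tearDown() throws Exception {
        resource = null;                // clean up our own fixture first
        super.tearDown();               // and call super last
    }

    public void testResourceIsPrepared() {
        assertEquals("data", resource.toString());
    }
}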
Whereas FindBugs defines a rule for that, PMD does not. So here is a simple XPath for a JunitSetupDoesNotCallSuper rule:

//MethodDeclarator[
   (  @Image='setUp' and
      count(FormalParameters/*)=0 and
      count(../Block//PrimaryPrefix[@Image='setUp'])=0
   ) or (
      @Image='tearDown' and
      count(FormalParameters/*)=0 and
      count(../Block//PrimaryPrefix[@Image='tearDown'])=0
   )
]

Obviously this expression catches more than is necessary: if you have a setUp() method outside of a test case, it will also be flagged. Also, it does not check the order of invocations, i.e. that super.setUp() must be the first and super.tearDown() the last statement. For Spring's AbstractSingleSpringContextTests the methods onSetUp() and onTearDown() would have to be checked instead. So it's far from perfect, but it has still found a lot of bugs for me.

That's all for now. There are more rules that I could share with you, but this blog entry is already far too long. I've set up a repository, pmd-rules, containing the source code of the rules described here (and many more).
6 May 2010
Umlaut Fail
I enjoy reading while commuting, so I am able to make good use of that time and read a lot. I save web pages to my phone, print articles or carry magazines around. Yesterday I read an issue of IEEE Computer (March 2009). It was quite good, but I spotted a mistake on the very first page.
Well, who is Gdel supposed to be? Come on IEEE, who is supposed to get this encoding stuff right if you guys can't! ;-)
5 April 2010
Evolution of Java Platform Classes
Back in 2006 I started creating a list of all Java classes available in the Java runtime. I wrote about the new classes of Java 5 and Java 6 in the past. Such lists of classes come in handy if you want to know all the classes of the JDK/Standard Edition or if you want to be a Java Champion like Heinz Kabutz. I mention Heinz because I mailed him my lists some time ago and he confessed that he had not known that
Short and Void were added with Java 1.1. Shame on you, Heinz ;-) He said it was difficult to get these really old Java versions nowadays.
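(In case you wonder how to create such a list: the basic approach is to open the runtime's rt.jar and collect the names of all class entries. The following is only a sketch of the idea, not the exact code I used back then.)

import java.io.File;
import java.util.Enumeration;
import java.util.Set;
import java.util.TreeSet;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ListRuntimeClasses {

    public static void main(String[] args) throws Exception {
        // rt.jar of the JRE this program is running on
        File rtJar = new File(System.getProperty("java.home"), "lib/rt.jar");
        Set classNames = new TreeSet();
        ZipFile zip = new ZipFile(rtJar);
        for (Enumeration entries = zip.entries(); entries.hasMoreElements();) {
            ZipEntry entry = (ZipEntry) entries.nextElement();
            String name = entry.getName();
            if (name.endsWith(".class")) {
                // turn folder/Name.class into package.Name
                classNames.add(name.substring(0, name.length() - 6).replace('/', '.'));
            }
        }
        zip.close();
        System.out.println(classNames.size() + " classes found");
    }
}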
And there were really interesting things happening. Sometimes private (package access) classes were promoted to public top level ones. This happened in almost all major upgrades:
- from JDK 1.1 to J2SE 1.2
java.io.ObjectStreamConstants java.io.ObjectStreamField java.net.SocketOptions
- from J2SE 1.2 to J2SE 1.3
java.awt.datatransfer.MimeTypeParseException java.awt.font.TextMeasurer javax.swing.plaf.basic.BasicHTML javax.swing.plaf.metal.MetalInternalFrameTitlePane
- from J2SE 1.3 to J2SE 1.4
java.awt.ScrollPaneAdjustable javax.swing.Popup javax.swing.PopupFactory javax.swing.text.html.ImageView
- from Java 5 to Java 6
java.awt.GridBagLayoutInfo javax.management.loading.MLetContent
ObjectStreamConstants states a (for this list) correct @since JDK 1.1, whereas the others show the Java version they became public in their @since tag. I guess their JavaDoc was added when they were made public, or maybe they have been rewritten? This must be true for java.text.Normalizer, which was available as a package access class in Java 1.3, was then dropped in Java 1.4 and came back as a public class in Java 1.6.

The opposite change, from public to private class, is rare. I only found one class, javax.management.timer.TimerAlarmClockNotification, which was removed from the public API in Java 6. It was added with Java 5 but marked as deprecated. I guess it came in with JMX 1.1, where it had been deprecated for some time.

According to my list there are only two classes that ever disappeared completely from the JDK: java.text.resources.DateFormatZoneData and javax.swing.beaninfo.SwingBeanInfo. DateFormatZoneData was inside i18n.jar, which was removed in 1.4, and SwingBeanInfo even lived inside lib/dt.jar, far away from jre/lib/rt.jar, which indicates it had been created to be used by development tools only.

21 March 2010
Horror of Time Accounting
Recently Uncle Bob mentioned that there is so much time wasted by developers on "fiddling around with time accounting tools". That's so true. During my 11 years of professional software development I had the opportunity to witness different approaches used by companies. I understand the need to monitor and control the time spent, but I hate wasting my own time on time accounting stuff. So I always try to minimise the time needed by automating the process of filling in the forms (if possible). Remember Terence John Parr's quote: "Why program by hand in five days what you can spend five years of your life automating?" :-)
Spread Yourself
Of course, if you have to track times of your tasks, you have to write them down. If you work on different tasks during the day then spreadsheets work best for that, especially if you spend some time automating them further. For example you may use VBA to write some Excel Macros. (Or even use some Ruby OLE magic.) Early in my career I was lucky to meet Andreas, who provided me with his detailed Excel sheet.
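To give an idea of the OLE route, here is a minimal Ruby sketch, nothing to do with Andreas' sheet or my macros: it assumes a hypothetical times.xls with date, project and hours columns and simply appends one booking line.

require 'win32ole'

excel = WIN32OLE.new('Excel.Application')
excel.visible = false
book = excel.Workbooks.Open('C:\\times.xls')  # hypothetical sheet
sheet = book.Worksheets(1)
# append below the last used row
row = sheet.UsedRange.Rows.Count + 1
sheet.Cells(row, 1).value = Time.now.strftime('%d.%m.%Y')
sheet.Cells(row, 2).value = 'PROJ-4711'       # hypothetical project number
sheet.Cells(row, 3).value = 1.5               # hours
book.Save
excel.Quit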
Since then I have been using it. It's just great. The most useful feature is adding a new line with one key shortcut (CTRL-T). The new line is filled with the current date and the current (rounded) time or the ending time of the last task. This makes entering new lines extremely fast: CTRL-T, fill project and subproject number, add description. Andreas' sheet worked so well for everybody that it became the official time accounting tool for all employees later. (Thank you Andreas for your great spreadsheet!)

Another company I worked with did not have any kind of time accounting. That was nice for a change, but I did not believe in it and kept using my macro infested Excel sheet. Later, when they started using their own tool based on Oracle Forms, I wrote a little application that imported the lines from my sheet into the database. It was quite simple (less than 100 lines) to read a CSV, do some translations and insert the data into a database. Unfortunately companies usually do not allow employees to write directly into their time accounting database (for obvious reasons).
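A minimal sketch of such an import could look like the following. This is not the original tool; it assumes the ruby-oci8 gem and a hypothetical BOOKINGS table with date, project and hours columns.

require 'csv'
require 'oci8'

conn = OCI8.new('user', 'secret', '//dbhost/ORCL')  # hypothetical connection
CSV.foreach('times.csv') do |date, project, hours|
  # translate the spreadsheet format, e.g. decimal comma to point
  hours = hours.sub(',', '.').to_f
  conn.exec("INSERT INTO bookings (booking_date, project, hours) " +
            "VALUES (TO_DATE(:1, 'DD.MM.YYYY'), :2, :3)",
            date, project, hours)
end
conn.commit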
Timed Overkill
Last year, while working for a big bank, I experienced the overkill of time accounting. There were three different systems. Project management used XPlanner to track task progress. XPlanner is a web application. I could have "remote controlled" it using plain HTTP GETs and POSTs, but that would have been cumbersome. So I used Mechanize to create a small API to access XPlanner. The API is outlined by the following code, which was written for XPlanner version 0.6.2. To save space I removed all error handling. (The current XPlanner version features a SOAP interface, so remote controlling it gets even simpler.)
class XplannerHttp
PLANNER = "#{HOST}/xplanner/do"
def initialize
@agent = WWW::Mechanize.new
@agent.redirect_ok = true
@agent.follow_meta_refresh = true
end
# Login to XPlanner web application.
def login(user, pass)
login_page = @agent.get("#{PLANNER}/login")
authenticate = login_page.form_with(:name => 'login/authenticate')
authenticate['userId'] = user
authenticate['password'] = pass
authenticate['remember'] = 'Y' # set cookie to remain logged in.
authenticate.click_button
end
# Book for a task with _taskid_ (5 digits).
# _date_ is the date in "YYYY-MM-DD" format.
# _hours_ is the time in decimal hours in format "0,0".
def book_time(taskid, date, hours)
task_page = @agent.get("#{PLANNER}/view/task?oid=#{taskid}")
# open edit time
add_link = task_page.links.find do |l|
l.href =~ /\/xplanner\/do\/edit\/time\?/
end
edit_page = add_link.click
# add new times and submit
timelog = edit_page.form_with(:name => 'timelog')
c = timelog['rowcount'].to_i-1
timelog["reportDate[#{c}]"] = date
timelog["duration[#{c}]"] = hours.to_s
timelog["person1Id[#{c}]"] = '65071' # id of my XPlanner user
timelog.click_button
end
end

Once the API was in place, a simple script extracted the values from the spreadsheet (in fact from an FX input field) and booked it. I just used XPlanner ids as subprojects and copied the cumulated sums of my Excel into the input field.
bot = XplannerHttp.new
bot.login(USER, PASS)
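# text and SUM_LINE are defined elsewhere: text holds the copied spreadsheet
# lines, SUM_LINE is a regexp capturing date, hours and XPlanner task id.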
text.split(/\n/).each do |line|
parts = line.scan(SUM_LINE).flatten
parts[1] = parts[1].sub(/,/, '.').to_f
bot.book_time(parts[2], parts[0], parts[1])
# => bot.book_time(65142, '2009-01-22', '1,5')
end

The second application was used by the HR department to monitor working and extra hours. This was a web application using frame sets and JavaScript. Mechanize does not (yet?) support JavaScript, so I had to look somewhere else. I started remote controlling my browser by OLE automation, which turned out to be too difficult (for me). Fortunately other people have done a great job creating Watir, which does exactly that. So here we go (again without any error handling):
class StimeIe
def initialize
@browser = Watir::Browser.new
end
def login(user, pass)
@browser.goto("#{STIME}")
@browser.wait
login_frame = @browser.frame(:index, 4)
login_frame.text_field(:name, 'Num').set(user)
login_frame.text_field(:name, 'Password').set(pass)
login_frame.button(:name, 'Done').click
@browser.wait
end
# Book for a day with _date_ in format "DD.MM.YYYY".
# _from_ is the time of coming in "HH:MM" format.
# _to_ is the time of leaving in "HH:MM" format.
def book_time(date, from, to)
navigate_to 'Erfassung/Korrektur'
input_frame = @browser.frame(:index, 4)
if input_frame.text_field(:name, 'VonDatum').text != date
input_frame.text_field(:name, 'VonDatum').set(date)
@browser.wait
end
# add new times and submit
input_frame.text_field(:name, 'VonZeit').set(from)
input_frame.text_field(:name, 'BisZeit').set(to)
input_frame.button(:id, 'Change').click
input_frame.button(:id, 'Done').click
@browser.wait
end
# Logout and close the browser.
def close
navigate_to 'Abmelden'
@browser.close
end
def navigate_to(link_name)
nav_frame = @browser.frame(:index, 3)
nav_frame.link(:text, link_name).click
@browser.wait
end
end
A little script would extract the first and last time of a day from the spreadsheet and call the bot:
bot = StimeIe.new
bot.login(USER, PASS)
times = ...
times.keys.sort.each do |date|
from_until = times[date]
bot.book_time(date, from_until.from, from_until.until)
# => bot.book_time('09.03.2009', '09:00', '18:00')
end
bot.close

A Nut to Crack
The third application was the hardest to "crack": a proprietary .NET application used by controlling. I spent (read: wasted) some time reverse engineering server and database access. (.NET Reflector is a tool worth knowing. It is able to decompile .NET DLLs.) Then I started scripting the application with AutoIt. AutoIt is a freeware scripting language designed for automating Windows GUIs. Using win32ole, AutoIt can be scripted with Ruby:
class ApolloMite
APOLLO_EXE = 'Some.exe'
APOLLO_WIN_NAME = 'APOLLO'
CTID = '[CLASS:WindowsForms10.SysTreeView32.app.0.19c1610; INSTANCE:1]'
def initialize
@autoit = WIN32OLE.new('AutoItX3.Control')
end
# Open with local Windows credentials.
def login
@autoit.Run(APOLLO_EXE)
# Wait for the exe to become active.
@autoit.WinWaitActive(APOLLO_WIN_NAME)
end
# Book for an _apo_ (5 digits and decimals).
# _text_ is the name of the sub-item.
# _date_ is the date in "DD.MM.YYYY" format.
# _hours_ is the time in full hours.
# _mins_ is the time in remaining minutes.
def book_time(apo, text, date, hours, mins)
# select the tree control
@autoit.ControlFocus(APOLLO_WIN_NAME, '', CTID)
# select the APO number
@autoit.Send(apo)
@autoit.Send('{RIGHT 5}')
...
# continue with a lot of boring code
# selecting controls and sending keystrokes.
end
def close
@autoit.WinClose(APOLLO_WIN_NAME)
@autoit.WinWaitClose(APOLLO_WIN_NAME)
end
end
Recently I happened to use mite. It claims to be an "advanced, yet simple tool to help you get things done". Well, that's true. It's easy to use and has only the most basic features needed for time accounting. More important, there is an API for developers. Good. Even better, there is an official Ruby library for interacting with the RESTful mite.api: mite-rb. Excellent.

No need to change my good old spreadsheet. Using a new class MiteBot instead of XplannerHttp and friends, the driver script looked quite similar.
class MiteBot
# Mapping spreadsheet activities to mite project names.
CATEGORIES = {
:default => 'Main Product',
'Meeting' => 'Meeting',
...
}
def initialize
Mite.account = ACCOUNT # your mite. account
end
# Login to the mite web application with _key_
def login(key)
Mite.key = key
end
# Book a task in _category_ with detailed _comment_
# _date_ is the date in "DD.MM" format.
# _hours_ is the time in decimal hours in format "0,0".
def book_time(date, hours, comment, category='')
key = if category =~ /^.+$/ then category else :default end
proj_name = CATEGORIES[key]
# parse date
day, month = date.split(/\./).collect{ |i| i.to_i }.to_a
timestamp = Date.new(YEAR,month,day)
# parse hours
minutes = (hours.to_s.sub(',', '.').to_f * 60).to_i
# get project
proj = find_project_for(proj_name)
# create time entry
e = Mite::TimeEntry.new(:date_at => timestamp, :minutes => minutes,
:note => comment, :project_id => proj.id)
# add new times
e.save
end
# Find the project with _name_ and return it.
def find_project_for(name)
Mite::Project.all(:params => {:name => name}).first
end
end
# usage with data read from spreadsheet/FX input field
bot = MiteBot.new
bot.login(USER_KEY)
...
# => bot.book_time('15.12', '2,25', 'Introduction of Radar', '')
Conclusion
So what is the point of this blog post besides giving you some ideas on how to automate time accounting? I want you to stop moaning about it! Take the initiative. Do something about it. Trust me, developing little automation scripts is fun and rewarding on its own.
Labels:
automation,
Mite,
Ruby,
time accounting,
XPlanner
9 March 2010
Code Quality Assurance
Last week I had the opportunity to give a one-hour presentation on code quality assurance as part of the lecture on software testing to students of the Fachhochschule Technikum Wien. By "code quality assurance" I meant principles and techniques used by software developers to test their software and keep it free of bugs.
I believe that the most important ingredient of code quality is the mind-set of the developer. So I started with some slides about the Zero-Defect Mindset and Software Craftsmanship. Then I did a live demo performing the Prime Factors Code Kata to show the basics of unit testing, Test-Driven Development and regression testing. This was the main part of the presentation.
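In case you have never seen the kata: the finished code is tiny. Here is a minimal Ruby sketch, not the code from the demo, together with two example tests.

require 'test/unit'

# trial division, collecting each prime factor as often as it divides n
def prime_factors(n)
  factors = []
  candidate = 2
  while n > 1
    while n % candidate == 0
      factors << candidate
      n /= candidate
    end
    candidate += 1
  end
  factors
end

class PrimeFactorsTest < Test::Unit::TestCase
  def test_one
    assert_equal [], prime_factors(1)
  end

  def test_eight
    assert_equal [2, 2, 2], prime_factors(8)
  end
end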
After that I explained the principles of code coverage, continuous integration, static code analysis and code reviews to the students. I mixed the theory (slides) with hands-on examples on the newly created Java code using EclEmma, Hudson, PMD and ReviewClipse.
Doing the demo was fun and the whole presentation was a success. For the demo I tried to stick to Scott Hanselman's Tips for a Successful Technical Presentation, esp. font size (Lucida Console, 16pt). Here is my "BigFonty" checklist:
- Create a new, clean user profile for presentation only.
- Set icons to large and number of colours to maximum.
- Remove all icons from the desktop and choose a plain desktop background. I like to minimise all windows if I get lost between them.
- Disable any screen saver and turn off energy saving. Otherwise they will definitely activate at the most annoying moment.
- Set the command shell font to Lucida Console 16 point, bold, green on black. Have the default shell point to your main demo directory.
- Clean up the browser, remove unnecessary tool bars and symbols. Unfortunately, at least in Windows, new users always have tons of crap on the desktop and in the browser.
- Set the default browser page to empty or your main demo web-site.
- Set the font size in your browser to very large and enable override of font sizes in styles. This is done in some accessibility sub-menu.
- Use the browser in full screen mode (F11). You need all the space available for the large text.
- Set the main font in your IDE to Lucida Console 16. In Eclipse it's enough to change the Text Font (in the Basic category in the sub-menu Colours and Fonts in Appearance).
- Turn on line numbering in the IDE for quick reference of single lines.
- Maximise the IDE and use a full screen source window whenever possible. In Eclipse just press Ctrl-M to maximise a view.
- Start all applications like IDE or any server before the presentation. They may take some time.
Update 20 April 2010
Student Feedback
Today I got the feedback evaluation from FH Technikum Wien. Several students mentioned my presentation as exciting and full of practical experience. :-) One called my presentation idiosyncratic - I don't mind, it definitely was. Its only weak point was that students were not able to study using the slides alone. Next time I will prepare some handouts with more information.

Update 8 May 2010
German Podcast of Presentation
I finally managed to post-process the (German) audio stream of the talk and combine it with the slides. Watch the quality assurance podcast (in German). It's still missing the demos, but my explanations should give you a general idea what's going on.
Labels:
podcast,
presentation,
quality,
test-driven,
unit testing
14 February 2010
Turbo Pascal Prime Factors Kata
Recently I started performing the Prime Factors Kata. After playing with Java and Ruby I had the weird idea of performing it in every programming language I ever knew. Well, maybe not every - I don't plan to use 6502 or 80x86 assembler, that wouldn't be fun. But I already did it in BASIC. Going forward in time Turbo Pascal would be next. So it's time for another retro post ;-)
Where Is My TPUnit?
I couldn't find any unit testing framework for Turbo Pascal. The closest thing I could find was FPCUnit packaged with Free Pascal. Unfortunately it's not compatible with good old TP. So I had to roll my own. I started with some minimalist infrastructure.
TYPE TestCase = OBJECT
  PROCEDURE AssertEquals(msg:String; expect, act:Longint);
  PROCEDURE AssertNil(msg:String; act:Pointer);
  { other asserts ... }
  PROCEDURE Fail(msg:String);
  { TestCase }
  PROCEDURE SetUp; VIRTUAL;
  PROCEDURE TearDown; VIRTUAL;
END;

PROCEDURE TestCase.AssertEquals(msg:String; expect, act:Longint);
VAR ex, ac:String;
BEGIN
  IF expect <> act THEN BEGIN
    Str(expect, ex);
    Str(act, ac);
    Fail(Concat(msg,' expected ',ex,' but was ',ac));
  END;
END;

...

PROCEDURE TestCase.Fail(msg:String);
BEGIN
  TearDown;
  Writeln(' - FAILED');
  Writeln(msg);
  Halt(1);
END;

PROCEDURE TestCase.SetUp;
BEGIN
END;

PROCEDURE TestCase.TearDown;
BEGIN
END;

Subclasses may overwrite SetUp or TearDown, add test methods and call Asserts. To keep it simple the first failed assertion stops program execution. What's missing is some kind of procedure RunTest that would wrap a particular test method inside calls to SetUp and TearDown. Hmm - function variables might be handy here. In case you are not familiar with them, here is an example:

{$F+} {needs far calls for function variables}
TYPE FuncVar = PROCEDURE;

PROCEDURE FancyMethod(method:FuncVar);
BEGIN
  method; { invokes the method }
END;

PROCEDURE SomeMethod;
...

VAR v:FuncVar;
BEGIN
  v := SomeMethod;
  FancyMethod(v);
END.

Unfortunately Turbo Pascal does not allow class methods (e.g. TestCase.AssertEquals) to be used as function variables. (At least I couldn't figure.) Obviously self is an implicit parameter of all such methods. Well not obviously, but analysing the generated machine code helps ;-)

SomeMethod;      { -> 0E E8 D1 FB }
  push CS        { because of $F+ }
  call fbe1      { address of SomeMethod in CS }

cls.ClassProc;   { -> BF 70 00 1E 57 0E E8 ED FB }
  mov DI, #0070  { address of object cls in DS }
  push DS
  push DI        { first parameter is self }
  push CS
  call fbed      { address of ClassProc in CS }

Using this knowledge TestCase.RunTest is implemented a bit dirty using an untyped Pointer argument:

PROCEDURE CallClassPtr(pt:Pointer; VAR cls:TestCase);
VAR s,o:Word;
BEGIN
  s := Seg(cls);
  o := Ofs(cls);
  ASM
    mov DI, [o]
    mov AX, [s]
    push AX
    push DI
    call [pt.dword]
  END;
END;

PROCEDURE TestCase.RunTest(name:String; testMethod:Pointer);
BEGIN
  Write('TEST ', name);
  SetUp;
  CallClassPtr(testMethod, self);
  TearDown;
END;

The Prime Factors Kata
Having a simple TPUnit in place, it's time for the kata itself. The seven test methods of PrimeFactorsTest,

PROCEDURE PrimeFactorsTest.Run;
BEGIN
  RunTest('TestOne',   @PrimeFactorsTest.TestOne  );
  RunTest('TestTwo',   @PrimeFactorsTest.TestTwo  );
  RunTest('TestThree', @PrimeFactorsTest.TestThree);
  RunTest('TestFour',  @PrimeFactorsTest.TestFour );
  RunTest('TestSix',   @PrimeFactorsTest.TestSix  );
  RunTest('TestEight', @PrimeFactorsTest.TestEight);
  RunTest('TestNine',  @PrimeFactorsTest.TestNine );
END;

yield

FUNCTION TPrimeFactors.generate(i:Longint):ArrayListPtr;
VAR factors:ArrayListPtr;
    candidate:Longint;
BEGIN
  factors := new(ArrayListPtr, Init);
  FOR candidate := 2 TO i DO BEGIN
    WHILE i MOD candidate = 0 DO BEGIN
      factors^.Add(candidate);
      i := i DIV candidate;
    END;
  END;
  generate := factors;
END;

ArrayListPtr is a pointer to a variable sized, user defined list backed by an array of Longints, similar to Java's ArrayList<Integer>. I can't deny I'm a Java guy. Everything I code looks like Java :-) (Probably I would have used a linked list back then instead of a complex object. Something like

TYPE PrimeFactorPtr = ^PrimeFactor;
     PrimeFactor = RECORD
       value:LongInt;
       next:PrimeFactorPtr;
     END;

Still the procedure body looks the same and the kata does not change much.)
(Download full source)
Labels:
kata,
Prime Factors,
retro,
Turbo Pascal,
unit testing,
xUnit
13 February 2010
Testing For All One's Worth
At the end of last year, after a long break, the third part of the 'Code Cop' series was published in the well known German magazine iX:
Tägliche Builds mit automatisierten Tests (Daily Builds with Automated Testing) (iX 1/2010). [... Automated testing is vital for quality assurance. Unit tests are applied easily using JUnit. The same is true for functional testing thanks to a number of existing tools. By adding testing capabilities to the build, developers are more willing to write tests. In the end the analysis of the code coverage achieved by the tests reveals possible weak points. ...]
(Download source code of Ant/JUnit/HttpUnit and EMMA integration.)
References
- SourceForge.net, EMMA
- P. Duvall, Continuous testing, IBM developerWorks, 2007
- A. Glover, In pursuit of code quality: Don't be fooled by the coverage report, IBM developerWorks, 2006
- R. Gold, HttpUnit
- R. Miller, The right (XP) tool for the job, IBM developerWorks, 2003
(List of all my publications with abstracts.)
14 January 2010
Prime Factors Kata BASIC
The Prime Factors Kata is a small coding exercise first shown by Uncle Bob in Java several years ago. It has been done in C#, Ruby and probably some other languages. Recently Uncle Bob performed it as a katacast in Ruby. (Go watch it! It's really cool. I will wait here.) He was followed by others in C# and Groovy. Anyway I'm sure it has not been done in BASIC. (Since my last post about Scala and BASIC I've felt so "retro".) So here is my Prime Factors Kata done in BASIC V2 (Commodore 64). I couldn't use my Scala BASIC DSL because it's lacking necessary features like GOSUB. So I used the Versatile Commodore Emulator (VICE) instead.

The First Test.
1000 CLR
1010 PRINT "test one ";
1020 i=1
1030 GOSUB 9000
1040 IF pf(0)<>0 THEN PRINT "expected no factors" : STOP
1050 PRINT "green"

Number i is the value to be factored and the array pf is expected to contain the number of prime factors in pf(0) followed by the factors themselves on return of the "generate" function (line 9000).

test one
?UNDEF'D STATEMENT ERROR IN 1030

Add the function.

8990 END
9000 REM ----- function generate
9010 REM in ... i ... number
9020 REM out ... pf() ... factors
9030 RETURN

Run the test.

test one green

The Second Test.
1100 CLR
1110 PRINT "test two ";
1120 i=2
1130 GOSUB 9000
1140 IF pf(0)<>1 THEN PRINT "expected 1 factors:";pf(0) : STOP
1150 IF pf(1)<>2 THEN PRINT "expected factor 2:";pf(1) : STOP
1160 PRINT "green"

test one green
test two expected 1 factors: 0
BREAK IN 1140

9000 REM ----- function generate
9030 IF i=1 THEN RETURN
9040 pf(0)=1
9050 pf(1)=2
9060 RETURN

Unfortunately there is no such thing as "BUnit". So I have to create testing infrastructure along the way. Keeping the tests green helps during the extraction of an "assert" method (line 100).

80 GOTO 1000
90 REM ***** test infrastructure *****
100 REM ----- method assert equals(int)
110 REM in ... me$ ... message
120 REM in ... ex ... expected
130 REM in ... ac ... actual
140 IF ex=ac THEN RETURN
150 PRINT "red"
160 PRINT me$;" expected";ex;" but was";ac
170 STOP
180 RETURN
...
1100 CLR
1110 PRINT "test two ";
1120 i=2
1130 GOSUB 9000
1140 ex=1 : ac=pf(0) : me$="num factors" : GOSUB 100
1150 ex=2 : ac=pf(1) : me$="1st factor" : GOSUB 100
1160 PRINT "green"

The Third Test.
1220 i=3
1230 GOSUB 9000
1240 ex=1 : ac=pf(0) : me$="num factors" : GOSUB 100
1250 ex=3 : ac=pf(1) : me$="1st factor" : GOSUB 100

Run the test.

test one green
test two green
test three red
1st factor expected 3 but was 2
BREAK IN 170

Modify the function.

9050 pf(1)=i

Green again. After the third test it's getting boring. The tests should be refactored to be more DRY:

200 REM ----- method teardown
210 PRINT "green"
220 RETURN
300 REM ----- method setup
310 REM in ... me$ ... test name
320 PRINT "test ";me$;" ";
330 RETURN
400 REM ----- method assert prime factors
410 READ me$
420 GOSUB 300
430 READ i
440 GOSUB 9000
450 READ af
460 ex=af : ac=pf(0) : me$="num factors" : GOSUB 100
470 IF af=0 THEN GOTO 520
480 FOR j=1 TO af
490 READ ex
500 ac=pf(j) : me$=STR$(j)+". factor" : GOSUB 100
510 NEXT
520 GOSUB 200
530 RETURN
990 REM ***** test cases *****
1000 DATA "one", 1, 0
1010 GOSUB 400
1100 DATA "two", 2, 1, 2
1110 GOSUB 400
1200 DATA "three", 3, 1, 3
1210 GOSUB 400

The Fourth Test.
1300 DATA "four", 4, 2, 2, 2
1310 GOSUB 400

9000 REM ----- function generate
9010 REM in ... i ... number
9020 REM out ... pf() ... factors
9025 REM local ... nf ... number factors
9030 nf=0
9040 pf(0)=nf
9050 IF i=1 THEN RETURN
9060 IF INT(i/2)*2=i THEN nf=nf+1 : pf(nf)=2 : i=i/2 : GOTO 9040
9070 nf=nf+1 : pf(nf)=i : i=1 : GOTO 9040
9080 RETURN

Line 9070 is more than needed to get the fourth test green, but it's the first thing that came to my mind.
The Fifth Test.
1400 DATA "five", 6, 2, 2, 3

Works as well, no changes needed.
The Sixth Test.
1500 DATA "six", 8, 3, 2, 2, 2

Again this still works, I "cheated" a bit.
The Seventh Test.
1600 DATA "seven", 9, 2, 3, 3

Now I really need the nested loops.
9000 REM ----- function generate
9010 REM in ... i ... number
9020 REM out ... pf() ... factors
9030 REM mod ... ca ... pf candidate
9040 pf(0)=0 : REM special case
9050 IF i=1 THEN RETURN
9060 IF INT(i/2)*2<>i THEN GOTO 9110
9070 pf(0)=pf(0)+1
9080 pf(pf(0))=2
9090 i=i/2
9100 GOTO 9050
9110 FOR ca=3 TO INT(SQR(i)) STEP 2
9120 IF i=1 THEN RETURN
9130 IF INT(i/ca)*ca<>i THEN GOTO 9180
9140 pf(0)=pf(0)+1
9150 pf(pf(0))=ca
9160 i=i/ca
9170 GOTO 9120
9180 NEXT
9190 RETURN

This already includes two performance optimisations: handling the special case 2 up front to be able to use STEP 2 and skip every second prime factor candidate, and using the square root of i as upper bound for the loop. But there is a bug not covered by the tests. Can you spot it? Yet another refactoring removes the duplicate code of adding prime factors to pf.

9040 pf(0)=0 : ca=2 : REM special case
9050 IF i=1 THEN RETURN
9060 IF INT(i/ca)*ca=i THEN GOSUB 9200 : GOTO 9050
9070 FOR ca=3 TO INT(SQR(i)) STEP 2
9080 IF i=1 THEN RETURN
9090 IF INT(i/ca)*ca=i THEN GOSUB 9200 : GOTO 9080
9100 NEXT
9110 RETURN
9200 pf(0)=pf(0)+1
9210 pf(pf(0))=ca
9220 i=i/ca
9230 RETURN

I could still get rid of the duplicate lines 9060 and 9090...
Another Test.
1700 DATA "eight", 10, 2, 2, 5

This reveals the bug in line 9070. If the value contains different prime factors then the last factor is lost. We need another check in line 9110.

9100 NEXT
9110 IF i>1 THEN ca=i : GOSUB 9200
9120 RETURN

END.
One last refactoring to add colours and time measurements to methods "setup" (line 300) and "teardown" (line 200). Two more tests to see the performance of larger (32719) values of i. The function even works for 2^31-1 (huge), but takes quite some time, although the loop is already optimised.

(Download the source.)

13 January 2010
T-shirts, T-shirts, T-shirts
My Personal Branding
Here it is - finally - the official Code Cop T-shirt. In fact it's not just a shirt, but a whole series of variations: There is a black on blue version that has proper contrast and one with a navy-coloured logo which is a bit more discreet.
My favourite one is the cosy version of the blue 'Code Cop' shirt. The large logo on the front and the URL on the back are produced using Flock Print. There is a female version - I just don't know when to stop! They all look great and I had each of them produced and sent to me to inspect.
More Freaky
Who needs such T-shirts? Well, geeks need cool shirts and Jeff has one, too. If you like freaky shirts as much as I do, then you will probably love my coding related quotes shirts. I like to use (more or less) ingenious quotes to bother my colleagues with either small advice or mild criticism. One of my favourite quotes for crappy but fast developed code is Uncle Bob's "rushingtogetawholebunchofshitcodedandsortofrunning". Wearing this shirt you don't even have to say it out loud. ;-)
Talking of coding: What's your favourite design pattern? Mine is the Singleton pattern. No, I'm joking, I hate it. Alex Miller hates it too and explained some time ago why it's so hateful. (At my previous working place one third of all classes was dependant on singletons. It was awful.) I can't stand them. I'm getting sick when I see one. Therefore I'm sure that Singletons Are Evil.
(Buy Code Cop shirts)