30 August 2019

Visualising Architecture: Neo4j vs. Module Dependencies

Last year I wrote about some work I did for a client using NATURAL (an application development and deployment environment using a proprietary language), for example using NATstyle, adding custom rules and creating custom reports. Today I will share some things I used to visualise the architecture. Usually I want to get the bigger picture of the architecture before I change it.

Industrial Legacy
Dependencies are killing us
Let's start with some context: This is banking with some serious legacy. Groups of "solutions" are bundled together as "domains". Each solution contains 5,000 to 10,000 modules (files), which are either top level applications (executable modules) or subroutines (internal modules). Some modules call code of other solutions. There are some system "libraries" which bundle commonly used modules, similar to solutions. A recent cross check lists more than 160,000 calls crossing solution boundaries. Nobody knows which modules outside of one's own solution are calling in, and changes are difficult because all APIs are potentially public. As usual - dependencies are killing us.

Graph Database
To get an idea of what was going on, I wanted to visualise the dependencies. If the data could be converted and imported into standard tools, things would be easier. But there were way too many data points. I needed a database, a graph database to be precise, able to deal with hundreds of thousands of nodes, i.e. the modules, and their edges, i.e. the directed dependencies (call or include).

Extract, Transform, Load (ETL)
While ETL is a concept from data warehousing, it was exactly what we needed: to "copy data from one or more sources into a destination system which represents the data differently from the source(s) or in a different context than the source(s)." The first step was to extract the relevant information, i.e. the call site modules and destination modules, together with more architectural information like "solution" and "domain". I got this data as CSV from a system administrator. The data needed to be transformed into a format which could be easily loaded. Use your favourite scripting language or some sed&awk-fu. I used a little Ruby script,
ZipFile.new("CrossReference.zip").
  read('CrossReference.csv').
  split(/\n/).
  map { |line| line.chomp }.
  map { |csv_line| csv_line.split(/;\s*/, 9) }.
  map { |values| values[0..7] }. # drop irrelevant columns
  map { |values| values.join(',') }. # use default field terminator ,
  each { |line| puts line }
to uncompress the file, drop irrelevant data and replace the column separator.
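Presumably the script's output was then redirected into the file loaded below, e.g. (transform.rb is a name I made up for the script above, only references.csv is fixed by the Cypher script):
ruby transform.rb > references.csv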

Loading into Neo4j
Neo4j is a well-known graph platform with a large community. I had never used it and this was the perfect excuse to start playing with it ;-) It took me around three hours to understand the basics and load the data into a prototype. It was easier than I thought. I followed Neo4j's Tutorial on Importing Relational Data. A word of warning: I had no idea how to use Neo4j. Likely I used it wrongly and this is not a good example to copy.
CREATE CONSTRAINT ON (m:Module) ASSERT m.name IS UNIQUE;
CREATE INDEX ON :Module(solution);
CREATE INDEX ON :Module(domain);

// left column
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///references.csv" AS row
MERGE (ms:Module {name:row.source_module})
ON CREATE SET ms.domain = row.source_domain, ms.solution = row.source_solution;

// right column
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///references.csv" AS row
MERGE (mt:Module {name:row.target_module})
ON CREATE SET mt.domain = row.target_domain, mt.solution = row.target_solution;

// relation
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///references.csv" AS row
MATCH (ms:Module {name:row.source_module})
MATCH (mt:Module {name:row.target_module})
MERGE (ms)-[r:CALLS]->(mt)
ON CREATE SET r.count = toInt(1);
This was my Cypher script. Cypher is Neo4j's declarative query language used for querying and updating the graph. I loaded the list of references and created all source modules, then all target modules. Then I loaded the references again, adding the relation between source and target modules. Sure, loading the huge CSV three times was wasteful, but it did the job. Remember, I had no idea what I was doing ;-)
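A single pass over the CSV would probably have worked as well, merging both modules and the relation in one statement. The following is only an untested sketch of that idea, reusing the column names from above:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///references.csv" AS row
// merge source and target module in the same statement
MERGE (ms:Module {name:row.source_module})
ON CREATE SET ms.domain = row.source_domain, ms.solution = row.source_solution
MERGE (mt:Module {name:row.target_module})
ON CREATE SET mt.domain = row.target_domain, mt.solution = row.target_solution
// then connect them
MERGE (ms)-[r:CALLS]->(mt)
ON CREATE SET r.count = toInt(1);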

Querying and Visualising
Cypher is a query language after all. For example, which modules had the most cross-solution dependencies? The query MATCH (mf:Module)-[c:CALLS]->() RETURN mf.name, count(distinct c) as outgoing ORDER BY outgoing DESC LIMIT 25 returned
YYI19N00 36
YXI19N00 36
YRWBAN01 34
XGHLEP10 34
YWI19N00 32
XCNBMK40 31
and so on. (By the way, I just loved the names. Modules in NATURAL can only be named using eight characters. Such fun, isn't it?) Now it got interesting. Usually visualisation is a major issue; with Neo4j it was a no-brainer. The Neo4j Browser comes out of the box, runs Cypher queries and displays the results neatly. For example, here are the 36 (external) dependencies of module YYI19N00:

Called by module YYI19N00
As I said, the whole thing was a prototype. It got me started. For an in-depth analysis I would need to traverse the graph interactively or scripted, e.g. from a Jupyter notebook. In addition, there are several visual tools for Neo4j to make sense of and see how the data is connected - exactly what I would want to know about the dependencies.
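For example, two queries I would want to run next: who is calling into a given module, and how many calls cross each pair of solutions. These are untested sketches, with YYI19N00 standing in for any module of interest:
// modules outside my own solution calling into module YYI19N00
MATCH (caller:Module)-[:CALLS]->(m:Module {name:'YYI19N00'})
WHERE caller.solution <> m.solution
RETURN caller.solution, caller.name;

// number of calls crossing each pair of solutions
MATCH (ms:Module)-[:CALLS]->(mt:Module)
WHERE ms.solution <> mt.solution
RETURN ms.solution, mt.solution, count(*) AS calls
ORDER BY calls DESC;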

20 August 2019

Who should go on journeyman tour

Travelling journeyman in the mountains
In our recent, bi-weekly software engineering podcast we talked about my Journeyman Tour. In particular Christian wanted to know who should go on such a tour. We discussed it for a while, in fact I was talking most of the time and interrupting everybody ;-) It was kind of a hard question which I had not thought about before, and some new ideas came up.

Listen to Developer Melange: Who should do a journeyman tour?

19 August 2019

Y U NO TDD

Y U No TDD
During this year's GeeCON the crew organised an Open Space evening. (An Open Space is a self-organising meeting where the agenda is created by the people attending.) I participated and ran a session on the question why we are not doing Test Driven Development. (Y U No Do TDD?) I run TDD trainings from time to time and wanted to get more insight into where people are stuck with TDD.

Context
As I said, an Open Space is self-organising, and only people interested in TDD attended my session. This is a typical problem of communities of practice - only people interested in the topic attend - which results in us living in a bubble. For example, long-time TDD practitioner Thomas Sundberg and Shirish Padalkar, lead consultant at ThoughtWorks, participated in the discussion. Depending on the background of each individual participant, my original question was understood as:
  • Why are you not doing TDD on production work at all?
  • Why are you not doing TDD most of the time?
  • Why are you not doing TDD all the time?
During the session I collected the reasons not to do TDD, which I want to share here. Text inside quotation marks, e.g. "hi", quotes exactly what people said. While the previous three questions are slightly different, the reasons seem to be similar, so I grouped the answers. I do not want to contradict or debunk these answers, although I have to hold back not to do so ;-)

Prototyping
I am "experimenting with something", the "expected outcome is unclear" and "it's only a prototype". Obviously these are valid reasons as Spikes are outside of TDD. These answers usually coming up quickly makes me wonder if they are kind of excuses sometimes. Experimenting with new libraries and APIs is covered further down, so what are we experimenting with? I worked with many developers who would agree that the expected outcome of their current ticket was unclear - because they did not take the time to analyse the story and understand the solution they were supposed to build? Additionally most of our prototypes go to production after all, don't they ;-)

Time Pressure
Another reason - given by some of my clients too - is the need to go fast: "I need to go very fast", there is "no time for that" and I "believe to be faster without it". While they might be wrong in the long term, I understand the effects of pressure. One person made it more explicit: while he has no strong deadline, he said "I have a huge backlog, I am stressed". Indeed, when I am extremely stressed, I find it hard to maintain a structured approach, especially if a lot of task switching is involved. Besides the skill needed to apply TDD under high load, much discipline is required to endure pressure. In such situations Strong Opinions and Dogma might help.

Missing the Bigger Picture
I am just "writing a script for myself". Maybe there is no need for automated tests when writing a one time script for myself. TDD has a testing aspect - and it has many other aspects like designing software, fast feedback and working incrementally. TDD is not only about testing. Some people miss that or have only partial understanding of the benefits or do not care for these benefits at the moment. The opposite reason is "because I know how the class will look like". Yes TDD is about software design as I said before, and I would like my class to work, too. Some people only want the fast feedback, e.g. using REPL based development and "looking at UI is faster".

Missing Priority on Testing
When starting with TDD, the testing aspect is most visible. After all we have to write a reasonable test first. For teams and organisations with low or missing priority on testing, people are "looking down on testing" and I got answers like "testing is a culture thing", "testing is not a first class activity" and "I am not asked to create a test by my project manager". Indeed it is hard to keep following TDD if it is looked down upon and if there is no time for quality work.

Avoiding Context Switches
There is a certain amount of context switching involved in TDD. Similar to Edward de Bono's Six Thinking Hats, we have different states which we have to be mindful of and which call for different actions. George Dinwiddie created a TDD Hat to show that. Maybe this switching is "not natural for some people": "I don't want to interrupt creative design with verification" and "I prefer staying in building hat and not change to testing hat". Similarly, one participant said that it is "easy to write code, harder to write tests, so I do it afterwards". I understand, and there certainly is an urge to jump into the code and get hacking. I rarely feel that urge and I enjoy pair programming using the Ping Pong style because it enforces the separation of states without any (inner) discussion.

Missing TDD Skills
This is obviously the largest area and there is nothing wrong with not knowing how to apply Test Driven Development: Honest people just say "I can't do it". Many are aware of this problem and seem to be disappointed with the existing material and/or look for more material to study TDD: "It is not taught at universities", "there are no good books" and "I am missing real examples". I know from my own experience that TDD is not easy to learn and some people are "scarred for life after a bad experience" with it. Now the best way to learn TDD is to have someone show you while pairing with you. Even if there is no pair programming in your workplace, you can still experience it during a Coding Dojo or Coderetreat. Short of that, I recommend Kent Beck's Test Driven Development by Example, which is a short and excellent introduction.

New Language or Library
When discussing TDD and unit testing with a client, he said "I don't know the target technology" and "React is a new technology for us". I had to laugh. To me this sounded like "I got a car and know how to drive forward, but am not able to drive backwards." On the other hand, I live at the dead end of a road and I see drivers working really hard to avoid driving backwards. Are they not able to do it? So maybe stopping halfway in the game (of skill acquisition) is natural after all. When working with a new language or an unknown API, I especially rely on tests to support me - these are Learning Tests.

It's too hard to test
I agree that some things are harder to test than others. "Android is hard to test", "Vaadin is hard to test" and "some libraries are hard to test". (I have not worked with Android or Vaadin, I am just quoting people.) We might need to know more about design to decouple things. This is definitely true for legacy code, as "existing code is usually hard to test". Some people see the root cause, like in "I don't know how to manage boundaries". In such situations we need (to know) more tooling. We definitely "need more tooling to test the UI" as UI is traditionally considered hard to test from a TDD perspective. Still, Steve Freeman and Nat Pryce, authors of Growing Object-Oriented Software Guided by Tests, always start their TDD (outer) loop with a UI test. GOOS is a great book and I recommend reading it if you want to go deeper into TDD.

It's too simple to test
If there are things which are too hard to test, there must also be things which are too simple to test, right? It is "useless to test, it is so simple" and it "makes no sense to test it". Maybe a better description is that it is "unclear what is important to test". From a TDD perspective no such things exist, and I guess these reasons arise from the test-after process, when looking at each public method and thinking how to test it. Further, excessive test isolation (see Solitary vs. Sociable Unit Tests) will cause that.

Barriers to TDD adoption
Here is Matt Wynne's summary of Barriers to TDD adoption from a session during Lean Agile Scotland 2016. I recommend checking out the Twitter thread as Matt added detailed discussions on the temptation of fast reward, permission and safety to learn, "the egotist" and other reasons not covered by me.

Barriers to TDD adoption #lascot16 (C) Matt Wynne
What about test-induced design damage?
Maybe the only real reason not to do TDD is to keep the design integrity of your system. This idea was started back in 2014 by David Heinemeier Hansson, also known as DHH, and led to the whole Is TDD Dead? debate. DHH said that when using TDD, code sometimes suffers tremendous design damage to achieve two testing goals, faster tests and easy-to-mock unit tested controllers, and that the design integrity of the system is far more important than being able to test it at any particular layer. It is ironic that this reason never comes up during any group discussion or team interview - probably because it is an expert level reason. If you followed the debate, you know that DHH knew TDD: he had used it for some time and liked it. And then, only then, did he know when not to apply it.

17 August 2019

In Memory of Ruby

Meet Ruby, our dog. Ruby was a Great Dane. If you are not into dogs, Great Danes are known for their size, being one of the largest dog breeds in the world. Great Danes are also known for their friendly nature and are often referred to as "gentle giants".

Great Dane Ruby
Ruby lived well beyond her breed's average lifespan of six to eight years and passed away last month at the age of ten. She died of old age in our arms. This was - and still is - a sad and painful situation which only dog lovers can understand.

(And of course, Ruby was named after the Ruby programming language, my favourite programming language of that time.)