The Nature of Software Development

September 21st, 2014

Great book on a difficult topic: what software development should be like for the best possible outcome for developers, the company, and customers alike.

When you have tasted freedom, and received the respect of being trusted to know what you're doing – when you have experienced truly autonomous teams – it is all the more detrimental to have to work within a micromanaging and secretive project-management hierarchy.

But what are the odds that such companies will actually read, understand, and implement the insights in the book?

Hopefully this book will instill trust in the rigid companies that insist a waterfall process with a single release is the only way. In that sense software remains as invisible as the clothes in The Emperor’s New Clothes – or The Emperor’s New Software.

ISBN: 978-1-94122-237-9

This review is based on the Beta 1.0 release of the e-book, released on 2014-09-08 (in ISO 8601 format).

The odds of getting it right

August 30th, 2014

While it is easy to point out when people are getting things wrong – in my humble opinion or in yours – it may serve a greater purpose to examine why things so often go as utterly wrong as they do, especially when we’re speaking about software development.

Software development is mostly about communication. Whether it is communicating with a programmer to make what you want, or it is telling a project manager to get them to tell a programmer what you want – it is in any case a matter of communicating vision to understanding.

So let us try to map out the different possibilities when facing a decision – or what may seem clear to you, but isn’t for at least one of the links in the development chain.

[Figure: a binary tree depicting a chain of right/wrong decisions]

I have chosen a binary tree to depict the decision “right” or “wrong”. While the normal interpretation of such a tree is a 50/50 split, let us not make such a hasty assumption – at the very least we, as developers, should be better than a 50% guess at understanding customer requirements.

In the binary tree above there are only 4 decisions which have to be right. If we simplify the model to a fixed probability, p, of making the right decision, we can use Bernoulli’s binomial distribution to determine the odds of making s successes in as many trials. In this case the binomial distribution degenerates into a simple power function, p^s.

Given either p or s we can calculate the other if we want at least a 50% chance of ending with a right solution.

Let us try that with a 6-sigma probability – p = 0.9999966.

s · log(p) = log(0.50) <=> s = log(0.50) / log(p)

s = log(0.50) / log(0.9999966) ≈ 203,867

That is, even with an almost unheard-of quality of understanding customer communication, at a bit more than 200,000 decisions the solution has only a 50/50 chance of hitting the anticipated solution.

If we want to be 90% sure, then we cannot make more than 30,988 decisions with 6-sigma understanding.

So, let us try the other way around: we would like to know, with sufficiently high confidence (say 90%), that our project meets our expectations. We have identified 10,000 key decisions. How good must the communication then be?

s · log(p) = log(x) <=> p = exp(log(x) / s)

p = exp(log(0.90) / 10000) ≈ 0.999989

Which means we need 6-sigma communication to achieve this goal.
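The two calculations above are easy to verify in a few lines of Java (a sketch; the 6-sigma value 0.9999966 and the confidence thresholds are those used in the text):

```java
public class DecisionOdds {

    // Maximum number of decisions s such that p^s >= confidence,
    // i.e. s = log(confidence) / log(p).
    static long maxDecisions(double p, double confidence) {
        return (long) Math.floor(Math.log(confidence) / Math.log(p));
    }

    // Required per-decision probability p such that p^s >= confidence,
    // i.e. p = exp(log(confidence) / s).
    static double requiredProbability(long s, double confidence) {
        return Math.exp(Math.log(confidence) / s);
    }

    public static void main(String[] args) {
        double sixSigma = 0.9999966;
        System.out.println(maxDecisions(sixSigma, 0.50));        // a bit over 200,000
        System.out.println(maxDecisions(sixSigma, 0.90));        // ≈ 30,988
        System.out.println(requiredProbability(10_000, 0.90));   // ≈ 0.999989
    }
}
```

Flooring the logarithm ratio gives the largest whole number of decisions that still keeps us above the chosen confidence.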

On top of all this, the calculations assume that the customer knows and communicates exactly what he or she wants, and that all decision points are uncovered and communicated at the same high level.

The only immediately sane way to improve the odds is to reduce the scope drastically. It may sound silly, but having to get more than 10 things right becomes daunting for most of us. In the binary tree above, we would need 2^(10+1) - 1 = 2047 nodes – the sheer size of such a tree should be enough to deter anyone wanting more than 10 decisions.

Reduce scope. Improve communication by shortening the feedback loop.

Naturally, we could reduce scope right down to a single decision – but that would quickly throw us off balance, as a single point makes it impossible to determine direction.

Waiting on IoT

July 31st, 2014

Sometimes I just can’t wait for the world to catch up, for everyday items to adopt the Internet of Things (IoT).

From 2015 you’re obliged by Danish law to secure your car’s license plates with 2 screws – not that it will prevent theft, just make it more difficult, because, as we all know, using a screwdriver is beyond most people’s capabilities. Sorry, but stupid laws really deserve sarcasm and contempt.

If you don’t abide by the law, you can be fined an estimated 1,000 DKK ≈ 180 USD.

Let’s see which other options might have been available.

We want the license plate to make it possible to easily identify the car for purposes of ownership, theft, debt, and insurance. The license plate will be tied to the car’s model, registration number, and owner.

If the car’s on-board system could know these things, then it could fill out an e-ink license plate, and it could color code the plate based on debt, reported theft, warrants, or other such interests. Sure, an e-ink license plate would be hacked, and you’d see people driving around with Pong being played on their plates.

If the car had a GPS it could even report its location when reported stolen.

If the car could only be started by a smartphone app, then it would be possible to tell whose phone was used to start the car. As an added bonus you could lease a key to someone else and revoke it when they weren’t allowed to drive anymore, e.g. a valet key.
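The leasable, revocable key described above could be sketched like this (all names are hypothetical; a real system would involve cryptographic signing rather than a plain lookup):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a revocable "valet key" registry for a car.
public class CarKeyRegistry {

    // Maps an active key id to the person it was leased to.
    private final Map<String, String> activeKeys = new HashMap<>();

    public void lease(String keyId, String holder) {
        activeKeys.put(keyId, holder);
    }

    public void revoke(String keyId) {
        activeKeys.remove(keyId);
    }

    // The car's on-board system would call this before starting the engine.
    public boolean mayStart(String keyId) {
        return activeKeys.containsKey(keyId);
    }
}
```

Leasing a key to a valet and taking it back is then simply a `lease` followed by a `revoke`, and the registry also records whose key started the car.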

Toll roads could be billed automatically, without the need to stop or to have another device in the car.

I know – you could have Big Brother monitor every car, fine everyone speeding, track every ghost driver, every freak occurrence, but then, how bad would that be?

I know – you can’t push technology into older cars. I’m just waiting for some meaningful adoption of – IMHO – sensible things.

Bad Teaching

June 29th, 2014

Someone is wrong on the internet!


Making too many mistakes while trying to teach a concept is worse than not teaching at all.

Don’t get me wrong – I admire the people making an effort to teach, especially when it is about how to program. But just as enthusiastic as I am about those who can, I am frustrated and angry with those who really can’t. Unfortunately, plenty of those who can’t still try – most likely due to the Dunning-Kruger effect.

For some odd reason I stumbled upon one of these bad teaching resources, and I became furious with the content. A teacher should know better. A teacher should do better.

In an effort to teach inheritance in Object-Oriented Programming, specifically for Java, the author takes a simple example and in doing so violates principles of design and good practice.

Concrete vs. Abstract

One of the benefits of inheritance is the ability to use the interface – or, in the absence of an interface, the superclass – to abstract away the concrete implementation.

In the code below (MethodOverridingMain, lines 9-12 of the original), the declared objects (left-hand side) should be Employee.

package org.arpit.javapostsforlearning;

public class MethodOverridingMain {

    public static void main(String[] args) {
        Employee d1 = new Developer(1, "Arpit", 20000);
        Employee d2 = new Developer(2, "John", 15000);
        Employee m1 = new Manager(1, "Amit", 30000);
        Employee m2 = new Manager(2, "Ashwin", 50000);

        System.out.println("Name of Employee:" + d1.getEmployeeName() + "---"
                + "Salary:" + d1.getSalary());
        System.out.println("Name of Employee:" + d2.getEmployeeName() + "---"
                + "Salary:" + d2.getSalary());
        System.out.println("Name of Employee:" + m1.getEmployeeName() + "---"
                + "Salary:" + m1.getSalary());
        System.out.println("Name of Employee:" + m2.getEmployeeName() + "---"
                + "Salary:" + m2.getSalary());
    }
}

This is particularly useful as the subclasses don’t provide additional methods. Now every employee can be treated as an Employee.

Violation of Liskov Substitution Principle

When working with hierarchies – which is a natural part of inheritance – then it is important to adhere to best practices such as Liskov Substitution Principle (LSP), which states that if a program module is using a Base class, then the reference to the Base class can be replaced with a Derived class without affecting the functionality of the program module.

Why is this important? It allows developers using your source code as a library to reduce their cognitive load to only the base class, which is another reason why you should program to an interface and not a concrete implementation (cf. the Dependency Inversion Principle).

The violation is in the getSalary method of Manager and Developer. For the base class Employee, what you set is what you get; not so for the others.

Let us say that we have a policy of dividing the surplus every month with equal shares to every employee. The code to set the new salary for the employees would look something like this:

    public static void divideSurplus(double surplus, List<Employee> employees) {
        if (employees != null && employees.size() > 0) {
            double share = surplus / employees.size();
            for (Employee employee : employees) {
                employee.setSalary(employee.getSalary() + share);
            }
        }
    }

Yes, this is ugly mutating code but let us not be concerned with this yet.

If every employee were created as an Employee, this would work; that is, if version 1 of the library only had Employee, this would have been a reasonable implementation.

When employees are created as Developer and Manager as well as Employee, the code doesn’t break, but the business logic does. You end up paying out more than you have made. This is an extremely ugly side effect of not adhering to LSP.

import java.util.ArrayList;
import java.util.List;

import org.arpit.javapostsforlearning.Developer;
import org.arpit.javapostsforlearning.Employee;
import org.arpit.javapostsforlearning.Manager;

public class SurplusDivision {

    public static void divideSurplus(double surplus, List<Employee> employees) {
        if (employees != null && employees.size() > 0) {
            double share = surplus / employees.size();
            for (Employee employee : employees) {
                employee.setSalary(employee.getSalary() + share);
            }
        }
    }

    // cannot be used if salaries are 0
    public static void divideSurplus2(double surplus, List<Employee> employees) {
        if (employees != null && employees.size() > 0) {
            double share = surplus / totalSalaries(employees);
            for (Employee employee : employees) {
                employee.setSalary(employee.getSalary() * (1 + share));
            }
        }
    }

    public static double totalSalaries(List<Employee> employees) {
        double total = 0;
        for (Employee employee : employees) {
            total += employee.getSalary();
        }
        return total;
    }

    public static double calculateSalary(Employee employee) {
        return employee.getSalary();
    }
    public static void main(String[] args) {
        double revenue = 90000.0;
        List<Employee> employees = new ArrayList<>();
        employees.add(new Employee(1, "name1", 10000.0));
        employees.add(new Employee(2, "name2", 20000.0));
        employees.add(new Employee(3, "name3", 30000.0));
        double surplus = revenue - totalSalaries(employees); 

        divideSurplus(surplus, employees);
        System.out.println(totalSalaries(employees)); // prints 90000.0

        employees = new ArrayList<>();
        employees.add(new Employee (1, "name1", 10000.0));
        employees.add(new Developer(2, "name2", 20000.0));
        employees.add(new Manager  (3, "name3", 30000.0));
        surplus = revenue - totalSalaries(employees); 

        divideSurplus(surplus, employees);
        System.out.println(totalSalaries(employees)); // prints 101600.0

        divideSurplus(0, employees);
        System.out.println(totalSalaries(employees)); // prints 115226.66666666666

        // surplus 2
        employees = new ArrayList<>();
        employees.add(new Employee(1, "name1", 10000.0));
        employees.add(new Employee(2, "name2", 20000.0));
        employees.add(new Employee(3, "name3", 30000.0));
        surplus = revenue - totalSalaries(employees); 

        divideSurplus2(surplus, employees);
        System.out.println(totalSalaries(employees)); // prints 90000.0

        employees = new ArrayList<>();
        employees.add(new Employee (1, "name1", 10000.0));
        employees.add(new Developer(2, "name2", 20000.0));
        employees.add(new Manager  (3, "name3", 30000.0));
        surplus = revenue - totalSalaries(employees);

        divideSurplus2(surplus, employees);
        System.out.println(totalSalaries(employees)); // prints 102441.17647058822

        divideSurplus2(0, employees);
        System.out.println(totalSalaries(employees)); // prints 117079.41176470587
    }
}



The main method should print 90000.0 in every case, but it doesn’t.

Not only has the hierarchy broken the business case – it has also made it quite impossible to get the calculation right.

Double the money

This is a widespread mistake: having a decimal point in the string representation of a number does not make it a viable currency type. Please go and read What Every Computer Scientist Should Know About Floating-Point Arithmetic.

It is incredible that people over and over again seem to think that infinitely many elements can be stored in a finite machine.
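A minimal demonstration of why binary floating point is unsuitable for currency; `BigDecimal` (constructed from strings) is the usual Java alternative:

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no exact binary representation, so the sum drifts.
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false

        // BigDecimal built from Strings keeps exact decimal values.
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum.compareTo(new BigDecimal("0.3")) == 0); // true
    }
}
```

Note that `new BigDecimal(0.1)` (from a `double`) would inherit the very imprecision we are trying to avoid; the String constructor is the safe one.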

Public Fields are Bad

Well, there are no public fields in the code shown; they are protected fields. While that is technically true, the fact is that getters and setters galore are basically no better. This bean structure of promiscuous objects makes it easy to code in an imperative style and extremely hard to preserve the code’s maintainability, because you have violated the core tenet of Object-Oriented Programming: encapsulation. Read Why getter and setter methods are evil for more on this.

How often is it required to set the Employee name, Id, or salary?

Hardcoded Values

The BONUSPERCENT constants for Manager and Developer are hardcoded: if one manager is allowed a different bonus percentage, it is not possible. If every developer needs a different bonus, the code has to be recompiled.

Unclear or Misleading names

BONUSPERCENT holds a decimal fraction, not a percentage: 0.2 means 20%.

Bad Design

While I do understand the notion that we seemingly have a hierarchy because a Developer is an Employee and a Manager is an Employee, there really is no reason for this. They differ – in the provided example – only by title, which apparently isn’t part of the object, and by how their salaries are calculated. If an existing employee becomes a manager or developer, we cannot shift their role, but must create a new instance of the matching class – something the provided code has no support for.

So if the Employee had a Role associated, then something else could calculate the salary to be paid based upon the role. Naturally this wouldn’t help with explaining code inheritance, and it probably wouldn’t help with the surplus division.
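One way such a role-based design could look (a sketch of my suggestion, not the article’s code; all names and the bonus fractions are hypothetical):

```java
// Sketch: salary calculation moved out of the Employee hierarchy into a Role.
enum Role {
    EMPLOYEE(0.0), DEVELOPER(0.1), MANAGER(0.2);

    private final double bonusFraction; // 0.2 means 20%

    Role(double bonusFraction) { this.bonusFraction = bonusFraction; }

    double payableSalary(double baseSalary) {
        return baseSalary * (1 + bonusFraction);
    }
}

class EmployeeWithRole {
    private final String name;
    private final double baseSalary; // what you set is what you get
    private Role role;

    EmployeeWithRole(String name, double baseSalary, Role role) {
        this.name = name;
        this.baseSalary = baseSalary;
        this.role = role;
    }

    // Shifting roles no longer requires creating a new object.
    void promoteTo(Role newRole) { this.role = newRole; }

    String getName() { return name; }

    double getSalary() { return baseSalary; }

    double getPayableSalary() { return role.payableSalary(baseSalary); }
}
```

Here getSalary stays honest for every employee (so LSP holds and the surplus division works on base salaries), while the role computes what is actually paid out.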

The Emperor’s New Software

June 22nd, 2013

Many years ago there lived an Emperor. He was so fond of new clothes that he spent all his time and all his money in order to be well dressed.

Apparently, today we have a lot of ’emperors’ so fond of new software that they spend all their time and all their money in pursuit of software solutions.

Visitors arrived at court every day, and one day there came two men who called themselves weavers, but who were in fact clever robbers.

They pretended that they knew how to weave cloth of the most beautiful colors and magnificent patterns. Moreover, they said, the clothes woven from this magic cloth could not be seen by anyone who was unfit for the office he held or who was very stupid.

The Emperor thought: “If I had a suit made of this magic cloth, I could find out at once what men in my kingdom are not good enough for the positions they hold, and I should be able to tell who are wise and who are foolish. This stuff must be woven for me immediately.”

And he ordered large sums of money to be given to both the weavers in order that they might begin their work at once.

The hype curve of a product’s potential uses is as clear in software today as it is in this story.

The Emperor sends his old minister to check up on the weavers’ progress. The minister can’t see any product, but will not attest to the possibility that he is unfit for his job or very stupid, thus he expresses the wonders of the cloth.

The story repeats itself with other officials all claiming to see the wonderful product. Finally the Emperor is presented with the ‘cloth’ – and he too is too proud to admit that there is nothing there.

Getting dressed in the make-believe clothes, the Emperor sets off on a procession through the fair city. Everyone he passes speaks wonders of the cloth, until a little child says: “But he hasn’t anything on” – and it resounds throughout the crowd.

I’m not saying that software developers are swindlers – far from it, though there are less than adequate developers for some tasks. What I am trying to say is that people would rather try to keep up a facade of understanding than ask questions.

As Groucho Marx said: ‘It is better to remain silent and be thought a fool, than to open your mouth and remove all doubt.’

Silence is golden – unfortunately it is the price of silence, not the reward.

Software is not incomprehensible magic. If the solution you get is nothing like the solution you wanted, then most likely there have been communication issues.

If there is no executive support, no user involvement in the process, then you – the customer – will suffer. Whether this is due to failed projects, cumbersome work processes, or brittle solutions, you are partly responsible.

While H.C. Andersen might have had other reasons for writing The Emperor’s New Clothes, my parallel is the IT-illiterate decision makers out there. I’m not saying that everyone must speak IT, I’m saying that you should know your limits, and if you don’t know stuff you have to do, you should ally yourself with someone who can bridge the gap. But you should not be any less engaged in the production.

If you order a steak, medium-rare, at a restaurant, you would complain if you got a boiled steak, a well-done one, or a bleu one. And rightly so. In software, it seems, you would not complain; you would just assume you misunderstood the term, and that the production facility – the kitchen and the waiter – performed their magic par excellence.

But this is only when things don’t go too badly. Quite often the parallel would be ordering lemon sole (the fish) and getting the sole of a boot with a lemon on top – paying the restaurant for its services, leaving the establishment still hungry, and returning the next day for another order of misconceptions.

Example: USAF wasting $1 billion on failed ERP project

“The Emperor’s New Clothes” is a short tale by Hans Christian Andersen.

The Essence of Software Engineering: Applying the SEMAT Kernel

June 4th, 2013

The book, “The Essence of Software Engineering: Applying the SEMAT Kernel”, is written by Ivar Jacobson, Pan-Wei Ng, Paul E. McMahon, Ian Spence, and Svante Lidman. Published by Addison-Wesley, January 2013, ISBN: 978-0-321-88595-1

I am a bit confused. I had the notion that this was about improving software quality through the application of a rigorous set of rules. Section 1.1, “Why is developing good software so challenging?”, suggests we’re on the right track. But I find that the book has not much to do with software, nor with engineering. It seems to be a method for applying rules and visual indications of task progress, by people, for people. Which means that, to me at least, this book is more about general project management than anything near software or engineering.

I found the tone of the book rather preachy, praising this newfound holy grail, the Kernel: all praise the Kernel, apply it to anything and everything. A common phrase throughout is: “How can the kernel help you.” The Kernel consists of about 57 cards, but can be extended. Seems like the marketing manager’s choice for Yu-Gi-Oh!-style gamification.

I’m all for simple and concise ways of working. I do believe that visual aids can support collaboration and communication, as well as provide a quick overview, and that these are needed for projects to succeed – but they are not all that is needed.

I believe in “as simple as possible, but no simpler”, as Einstein put it, as well as Antoine de Saint-Exupéry’s “You have achieved perfection not when there is nothing left to add, but when there is nothing left to take away.”

Throughout the book the Kernel will simply help do everything; it’s a veritable Swiss Army knife. I don’t think I’ve ever seen a professional choose the Swiss Army knife over their own set of tools. And while the Kernel is lightweight – at least compared to RUP – there are alternatives that are even more lightweight, e.g. Impact Mapping.

To me, applying the Kernel looks a bit like Kanban with a “work in progress” board. But it also looks a bit like a concurrent waterfall, which could be because I read RUP and UML into what Ivar writes – both of which are praised, in my opinion wrongly so. UML is great for back-of-the-napkin illustrations of concepts – a variant worked wonders in the Design Patterns book – but in its latest incarnation UML seems overly verbose as a modelling language, under the notion that a model is a simplified abstraction of the real thing.

Perhaps Ivar is biased by electronic engineering with all its symbols and glyphs (try looking up IEC 60617 on Google). But what he fails to realize is that those symbols really are their programming language. For software development we have our own programming languages, and we don’t need a modelling language to go into minute detail – at least not as a document. If you generate the model from the source code you can apply as much detail as you want, but believing that a change will be applied both to the model and to the source code is betting against the DRY principle: Don’t Repeat Yourself.

Why do I have a sense of a concurrent waterfall? Well, the cards follow 7 aspects, called alphas: Opportunity, Stakeholders, Requirements, Software System, Team, Work, and Way of Working. While I agree with these, and with their connections as noted in the graphs, e.g. Figure 2-1 on page 15 (not shown here), there is a notion of 5 or 6 states, and seemingly you can only progress, e.g. from Opportunity :: Identified to Opportunity :: Solution Needed. And while that might be true for opportunities, I don’t see why Way of Working :: Working Well should stay there until the project is done.

Some of the praise in the book comes from academia, and while a rigorous system is easier to teach, that does not make it the right thing to do – at least adding UML, RUP, etc. to the curriculum hasn’t helped.

In the praise section, Ed Seymour notes that: “This book represents a significant milestone in the progression of software engineering.” I’m sure that any book is a milestone in its domain; I just feel that this book is a milestone along a different road, going in a not-quite-right, not-quite-wrong direction.

Uncle Bob – one of the three to write a foreword – wrote: “After reading the book, I found myself wanting to get my hands on a deck of cards so that I could look through them and play with them.” I felt the same at the beginning of the book, but now I’m thinking more about which game to play, and how many expansion packs will be published in the future.

All in all I’m quite disappointed with the contents of the book, though I’m sure it will get wide adoption, and we will be off course for another 10 years. Some of the content is true and solid; the rest – apart from the intentionally-left-blank pages (all 34 of them, approximately 10% of the book) – seems to me more of an academic solution to something which is only half the problem. It is easy to prove me wrong, though: apply the Kernel to 12 or more different, average teams and have them develop successful software solutions on time and on budget for projects with budgets around $5-10 million. Public projects seem to fare really poorly; those would be interesting cases to follow. If more than 1 project fails, then the Kernel is not the holy grail, and depending on the success rate we could argue whether the method is helpful at all.

I’m more disappointed with this book than I was reading Impact Mapping, which at 86 pages is about 25% the size of The Essence of Software Engineering, but contains more information about applying its method – which is far easier if you can remember the correct order: Why, Who, How, What.


The IT Industry's Missing Experts

June 1st, 2013

“Every fourth IT company cannot find employees” (Pressemeddelelser, 2013)

That is a sad headline when you, as an IT worker, cannot simply find employment at an IT company.

Now, I am sure the problem is not just a matter of there being as many unemployed people as there are positions to fill. On that level an IT worker is not a resource, but an independent individual with knowledge within a specific area, or perhaps several distinct areas.

One of the problems is matching what the company wants with what the potential candidates offer. It is usually clear to the professional that the customer cannot quite explain what they want, because they do not have full knowledge of the domain they want something in. In the same way, it is hard for the candidates to profile precisely what they can do, why that is a gain for the company, and what they want.

That said, it would be interesting to know how many unemployed people there actually are in the industry, how many open positions there are, and which areas they concern.

“According to the EU Commission, the EU countries will be short of up to 900,000 IT experts by 2015”

900,000 is a lot indeed. Let us look at some figures to get an overview.

According to the industry association It-branchen (It-branchen i tal), there were 82,649 full-time employees in the IT sector in 2010. Looking instead at Statistikbanken (RASA11: Beskæftigede (arbejdssted) efter område, branche (DB07), socio-økonomisk status, herkomst, alder og køn), the figure is 47,704. That gives cause to wonder who is right, but that belongs in an entirely different discussion.

To extrapolate to the rest of the EU, we loosely assume that the same percentage of the workforce works in the IT industry.

Statistikbanken tells us that Denmark had a workforce of 2.7 million in 2010; WolframAlpha says 2.9 million in 2011 (Labor Force European Union). Summing the figures for the 27 nations gives a total EU workforce of 245 million.

A little arithmetic tells us that IT-sector employees made up 47,704 / 2,700,000 = 1.8% of the total Danish workforce in 2010.

Fortunately, the IT industry is growing, at least in terms of employment; looking at Statistikbanken’s figures, there appears to be a year-on-year growth rate of 2% from 2010 onward.

Using these figures, in 2011 there were roughly 102% * 1.8% * 245 million = 4.5 million IT workers in the EU.

That means that in 2015 – 4 years from 2011 – there will be around: 102%^4 * 4.5 million = 4.9 million.

The 900,000 experts are assumed to be contained within these 4.9 million, which means the experts make up 900,000 / 4,900,000 = 18%, i.e. somewhere between every 5th and every 6th position will lack an expert in 2015.

Using It-branchen’s figure, 82,649, instead, we arrive at 11%, or every 9th position in the EU.
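The extrapolation can be sketched directly in code (the inputs are the figures quoted above; the 2% growth rate is the assumption from the text):

```java
public class ExpertShare {
    public static void main(String[] args) {
        double dkItWorkers = 47_704;       // Statistikbanken, 2010
        double dkWorkforce = 2_700_000;    // Danish workforce, 2010
        double euWorkforce = 245_000_000;  // EU-27 total
        double growth = 1.02;              // assumed 2% year-on-year growth

        double itShare = dkItWorkers / dkWorkforce;        // ≈ 1.8%
        double euIt2011 = growth * itShare * euWorkforce;  // ≈ 4.5 million
        double euIt2015 = euIt2011 * Math.pow(growth, 4);  // ≈ 4.8-4.9 million

        System.out.printf("IT share of workforce: %.1f%%%n", itShare * 100);
        System.out.printf("Expert share in 2015: %.0f%%%n",
                900_000 / euIt2015 * 100);                 // ≈ 18-19%
    }
}
```

Using unrounded intermediate values gives a slightly lower 2015 figure than the text’s 4.9 million, but the conclusion – roughly every 5th to 6th position lacking an expert – is unchanged.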

What is an expert?

If experts are needed, what then is an expert, and is the domain of IT perhaps a bit too broad to form a good picture of where an effort is required and what that effort should focus on?

Niels Bohr

Niels Bohr once said: “An expert is a person who has made all the mistakes that can be made within a narrow field.” (Bohr) I assume Bohr meant that the person had also learned from those mistakes and felt no urge to repeat them. In principle, you could say that such a person had brute-force tested every combination in the domain.

Given this definition, there are no IT experts. Perhaps IT is too large a domain, and it grows rather quickly.

10,000 hours

Malcolm Gladwell proposes the 10,000-hour rule in his book Outliers (Gladwell), which some find a reasonable rule (Antonio, 2009) and others do not (Why Gladwell’s 10,000-hour rule is wrong, 2012).

If we apply this rule and combine it with a man-year of 1,920 hours – let us use 2,000 hours/year for simplicity – then an expert is created by 5 years of dedicated work in the domain in question.

This means that a 5-year university education cannot teach broadly within a narrow field. In computer science, for instance, the students learn about more general things: abstract data structures, algorithms, analysis, design, and databases. Producing narrow experts would instead require pressing the students into an unambiguous career choice on their very first day of study.

Furthermore, it would take the teachers 5 years to become experts in a single new area, and only then would they be ready to teach, giving a lead time of 10 years before the first students become experts. That is, we would see the first iPhone experts in 2017, had the teachers started at the launch in 2007.

Since the IT industry produces new technologies and concepts considerably faster, it does not seem we should count on the universities to educate experts – at least not within a narrow domain. Instead, they should produce experts in learning and in thinking independently, focused on various general domains, so that the students are equipped for lifelong independent learning.

Perhaps this is not the definition of expert being used. If nothing else, 2015 is not 5 years away but 1½, which would mean starting today and spending the next 78 weeks working dedicatedly on one subject for 128 hours a week. That is, around 5 hours of sleep all 7 days of the week, and virtually all remaining time focused on the subject.

Other definitions

Wikipedia suggests several meanings of the word expert.

Wise from experience

This one closely resembles Niels Bohr’s version, except you need not have made every mistake. Conversely, it means that if you have merely worked with IT and gained some experience, you are an expert. I do not believe I become a tax expert by gaining experience with my own tax return, so offhand this definition seems very vague.

Although it resembles Niels Bohr’s definition, the outcome is the exact opposite: now everyone is an expert.


A specialist can solve tasks, an expert knows of solved tasks, a layman knows no solutions, and a technician knows some solutions.

Offhand, I find that we then belong to all 4 groups at once when IT is the domain. Being able to solve a task in the best Storm P. fashion does not make you a specialist, and certainly not within another branch of IT.

Arthur Mellen Wellington

Arthur Mellen Wellington expressed something along the lines of: “the art of doing well with one dollar what any bungler can do with two after a fashion.” (Wellington) That was, however, about engineering.

Offhand, such a definition looks good for the business, but it is probably quite hard to measure, and given real-life experience it seems that we lack many experts and have lacked them for the past 20 years.


Jeg spurgte en 13-årig dreng: ”Hvad er en ekspert?” Til hvilket han svarede: ”Det er en, der ved alt om noget, kender alle reglerne, og kan alt (med den ting).” Hvilket måske er den bedste definition, jeg har hørt.

Relativ ekspertise

Egentlig kan alle disse forklaringer på, hvad en ekspert er, være lige gode, hvis man antager at det er relative termer frem for en universel uddybende term. Altså at man kan påtage sig ekspert titlen, hvis man blot ved alt det, de andre i nærheden ved, og lidt mere til. Givet Google og andre søgemaskiner, så er ”i nærheden” efterhånden blevet meget bredt, men nu er det jo ikke alt på nettet, der er lige sandt.


It can be quite fine to philosophize about what an expert is and how it relates to the IT industry as a whole. But at the technological forefront, how do things look – what is likely to happen in the near future that we should pay attention to?

McKinsey & Company recently presented Ten IT-enabled business trends for the decade ahead (Ten IT-enabled business trends for the decade ahead, 2013), covering:

  1. Joining the social matrix
  2. Competing with ‘Big Data’ and advanced analytics
  3. Deploying the Internet of All Things
  4. Offering anything as a service
  5. Automating knowledge work
  6. Engaging the next 3 billion digital citizens
  7. Charting experiences where digital meets physical
  8. ‘Freeing’ your business model through Internet-inspired personalization and simplification
  9. Buying and selling as digital commerce leaps ahead
  10. Transforming government, health care, and education

Gartner's predictions are, curiously enough, quite close (Top 10 Strategic Technology Trends for 2013):

  1. Mobile Devices Battles
  2. Mobile Apps and HTML5
  3. Personal Cloud
  4. The Internet of Things
  5. Hybrid IT and Cloud Computing
  6. Strategic Big Data
  7. Actionable Analytics
  8. Mainstream in-memory computing
  9. Integrated Ecosystems
  10. Enterprise app stores

The job postings I have seen do not come particularly close to even a single one of these future trends. That could of course mean that these positions are already filled by experts and specialists. My guess, however, is that we are still anchored in the 2001 mindset, where everything revolves around the individual business, customers simply have to accept one-size-fits-all, and integration options/mash-ups are all but non-existent.

Seen in that light, the 900,000 missing experts is probably a rather good estimate – implying that by 2015 up to 20% of the IT industry will be working within areas they have not touched before, including the 10 McKinsey visions. Whether they will be Bohr experts or merely relative experts, time will tell.

Looking at the products we in the IT industry deliver today, it would seem that we lack more than 900,000 experts already. See, for instance, the Danish National Audit Office's report on POLSAG (Beretning til Statsrevisorerne om politiets it-system POLSAG, 2013), the Bonnerup report (Erfaringer fra statslige IT-projekter – hvordan gør man det bedre?, 2001), and the 10-year retrospective (Bonnerup-rapportens ophavsmænd 10 år efter: Ingen er blevet klogere, 2011).

The people who today probably have the most contact with the municipalities, the elderly, and the vulnerable will next year be hit by the Act on Digital Post (Lov om digital post). It is true that they can be exempted, but shouldn't we guess, already now, that this requires a digital application? We will probably be wiser when companies are moved onto that solution from 1 September this year.


When you are unemployed, as I am, and have insight into the above, it is sad to see an announcement that every fourth IT company cannot obtain qualified labor. I wonder how many IT experts there are who cannot find qualified employment.

The near future should bring an enormous upheaval, in which neither COBOL, C#, Java, SOAP, XML, RUP, etc. are the dominant terms within serious user-facing solutions. Where Apache and IIS disappear from the market, and NGINX and similar 10k web servers are regarded as the slow but broadly accepted ones. A future in which we work with the applied technology rather than against it, i.e. one where PDF and Word documents are not used as web communication media. A world where the data transmission rates read Gb, not Mb.

Do I then consider myself an expert? No, I would rather twist the English saying 'Jack of all trades, master of none' into 'Jack of many trades, master of some' – and forever learning. But I know that in the eyes of some laymen I am considered an IT expert. That merely tells me that we place great demands on the rest of the population when we say they must all be web literate by tomorrow.

Despite that, I assume I am still useful in the IT industry, where my strengths can be combined with those of others to build something greater than we could have managed alone.


Erfaringer fra statslige IT-projekter – hvordan gør man det bedre? (March 2001). Retrieved 31 May 2013 from tekno:

Bonnerup-rapportens ophavsmænd 10 år efter: Ingen er blevet klogere. (8 December 2011). Retrieved 31 May 2013 from Version2:

Why Gladwell’s 10,000-hour rule is wrong. (14 November 2012). Retrieved 31 May 2013 from BBC Future:

Beretning til Statsrevisorerne om politiets it-system POLSAG. (March 2013). Retrieved 31 May 2013 from rigsrevisionen:

IT-Branchens Pressemeddelelser. (23 May 2013). Retrieved 31 May 2013 from IT-Branchen:

Ten IT-enabled business trends for the decade ahead. (May 2013). Retrieved 31 May 2013 from McKinsey & Company:

Antonio, V. (2009). Expert Level – The 10,000 Hour Rule. Retrieved 31 May 2013 from Victor Antonio:

Bohr, N. (n.d.). Retrieved 31 May 2013 from

Ekspert. (n.d.). Retrieved 31 May 2013 from Wikipedia:

Expert Contrasts and comparisons. (n.d.). Retrieved 31 May 2013 from Wikipedia:

Gladwell, M. (n.d.). The 10,000 Hour Rule. Retrieved 31 May 2013 from

It-branchen i tal. (n.d.). Retrieved 31 May 2013 from

Labor Force European Union. (n.d.). Retrieved 31 May 2013 from WolframAlpha:

Lov om digital post. (n.d.). Retrieved 31 May 2013 from

RASA11: Beskæftigede (arbejdssted) efter område, branche (DB07), socio-økonomisk status, herkomst, alder og køn. (n.d.). Retrieved 31 May 2013 from

Top 10 Strategic Technology Trends for 2013. (n.d.). Retrieved 31 May 2013 from Gartner:

Wellington, A. M. (n.d.). Arthur Mellen Wellington. Retrieved 31 May 2013 from Poem hunter:

OOP is dead and alive

may 23rd, 2013

Discussing Programming topics with non-programmers

The other day I was in good company with a business owner, who is also a programmer, and a business controller, who isn’t a programmer. We were discussing the issues of one of my favorite topics: Software Quality.

Naturally the software must do what it is intended to do, but the hidden issue, which to me is almost as important: Functionality must be placed in the right areas.

The analogy we landed on was looking for $10 in a person’s wallet.

To the programmer getting this task, the basic work – find the person, find the person’s wallet, check whether the wallet contains $10 – has to be done regardless of where the functionality is placed. To the CPU the steps will be loaded in sequence anyway, so the “correct” position of the code is another matter.

Now, in real life, if I ask someone if they have $10 in their wallet, they are able to check their own wallet and inform me. On the other hand, I could just take their wallet and check myself. Naturally this would be a violation of privacy – and I’d have to know where they keep their wallets.

In the same sense, a controller could implement the required functionality itself, leading to promiscuous objects whose encapsulation is violated. Or the objects themselves could have the functionality implemented. The first leads to a violation of the Law of Demeter, tight coupling, and too much knowledge, which in turn leads to a higher risk of introducing bugs, higher maintenance costs, intricate dependencies, and a big ball of mud.

Controller { // snippet 1: reaches through Person into Wallet
 hasAmount(Person person, Integer amount){
  return person.getWallet().hasAmount(amount)
 }
}

Controller { // snippet 2: only asks Person, which encapsulates its Wallet
 hasAmount(Person person, Integer amount){
  return person.hasAmount(amount)
 }
}
Person {
 hasAmount(Integer amount){
  return myWallet.hasAmount(amount)
 }
}
The example is a bit far-fetched, but it served the purpose of showing why it is important to have clean code in more than one sense.

In the first snippet the Controller has to know of both Person and Wallet. In the latter the Controller needs to know of Person, and Person needs to know of Wallet. Even though there is the same number of dependencies, the amount of context the Controller must carry is much higher in the first snippet.
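As an illustration, the two designs can be sketched as runnable code – here in Python, with names of my own invention (`Wallet`, `Person`, and the two controller functions are illustrative, not from any particular codebase):

```python
class Wallet:
    """Holds a balance; ideally only its owner looks inside."""
    def __init__(self, balance):
        self.balance = balance

    def has_amount(self, amount):
        return self.balance >= amount


class Person:
    def __init__(self, wallet):
        self._wallet = wallet  # the wallet stays private to the person

    def has_amount(self, amount):
        # Demeter-friendly: the person answers for their own wallet
        return self._wallet.has_amount(amount)


# Variant 1 - the controller digs through Person into Wallet (tight coupling)
def controller_knows_too_much(person, amount):
    return person._wallet.has_amount(amount)


# Variant 2 - the controller only talks to Person (Law of Demeter)
def controller_asks_politely(person, amount):
    return person.has_amount(amount)
```

Both variants give the same answer; the difference is that only the first forces the controller to know how a Person stores their money.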

OOP is dead

A while ago Pinterest suggested I read Object Oriented Programming is Dead, which is a bit dated but still relevant. I just think there are more reasons why OOP is dead – and yet still alive.

First off, we killed OOP by trying to fit a relational database as the persistence layer – this leads to data transfer objects, which are mostly grouped global variables or Java beans, none of which has anything to do with encapsulation.

Second, we killed OOP by placing logic in the wrong classes, classes in the wrong hierarchy, and generally forgetting what OOP is about.

Third blow, we apparently insist on imperative-style programming in OOP languages, which leads to the issues described above.

Fourth stab, we seem to be grounded in the snapshot state of databases. That is, an object in a database has a single state, without any prior history. This is similar to loading and overwriting registers, and is embodied in the UPDATE keyword. You actually have to twist, turn, and contort the default behavior of an RDBMS to get a history/audit trail for the values.

The final death blow was delivered by Martin Odersky – who also kindly revived OOP in conjunction with FP – in his presentation Object and functions, conflict without a cause. He has done so on other occasions as well, touting the Scala horn.

Rich Hickey – the Clojure guy – seems to at least back up the FP-and-OOP notion – primarily of stateless objects – and Datomic seems like a brilliant choice for a persistence layer.

OOP is alive

The Object Oriented notion is extremely important as it is what should drive SOA services. They should be defined by their interfaces and encapsulate data and implementations. As Steve Yegge mentioned, Amazon is quite good at this, and Google not quite as good.

I believe SOA is the right level of re-usability of software, and we will have to get much better at it as more and more mash-ups are wanted. That means we have to accept interoperability at a different level, stay disciplined, and not query database tables which we happen to know of but which really belong to another service.

We also have to embrace the functional style – I’m talking REST for web developers – in which objects are immutable/stateless, and transformations can be run predictably, whether in parallel or in sequence, synchronously or asynchronously to the client.

Not even mediocre

may 7th, 2013

This is a follow-up to Trust has failed – time for control and assurance

I’ve been annoyed by the findings that we – as software developers – have at best an average 68% success rate, with approximately a third of those in need of costly repairs, which in my book constitutes a success rate of only 46%. We are not even mediocre at best, and we haven’t moved away from Frederick P. Brooks’ “Plan to throw one away” (Chapter 11 of The Mythical Man-Month, ISBN: 0201835959), even though he argues against it in the 20th anniversary edition from 1995 – almost 20 years ago. That is, it seems we haven’t really learned anything in the past 40 years of software development.

I know that there are other professions in which products are declared successes while they soon after display less than adequate abilities: the Vasa, the Titanic, Columbus reaching the western passage to India, Apollo 1 and Soyuz 1 & 11, etc.

Let us assume that the complexity of the requested software solutions is normally distributed. Then 68% is 2σ in Six Sigma terminology, or zero nines in the engineering “nines” terminology. The only natural division of things into 2/3 and 1/3 that comes to mind is the awake/asleep division of the day.
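One way to read the two-sigma claim – assuming the customary 1.5 sigma shift used in Six Sigma yield tables, which is my reading rather than anything stated here – is as the long-term yield at a 2 sigma level:

```python
from statistics import NormalDist  # Python 3.8+

def six_sigma_yield(sigma_level, shift=1.5):
    """Long-term yield at a given sigma level, applying the
    conventional 1.5 sigma shift from Six Sigma yield tables."""
    return NormalDist().cdf(sigma_level - shift)

# Yield at the 2 sigma level: about 69.1%, close to the 68% success rate.
print(round(six_sigma_yield(2) * 100, 1))
```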

The complexity of a software solution is quite hard to measure as it involves the actual solution as well as the work done in order to get to the final solution, and that in turn involves the people associated with the project over its entire scope: Developers, testers, architects, project managers, customers, etc. For a parallel to the world of mechanics we have the Invention of Heavier-than-air Aircraft, in which the Wright brothers won over the better funded and equipped team of Samuel Pierpont Langley.

I’m not sure whether Langley’s team was Punished by Rewards or they just over-engineered the task at hand. Fact is, they didn’t deliver before the Wright brothers.

I know – usually we don’t have a race to be first, we have a contract to fulfill instead. Perhaps that is why there are so many failures in the business. It is work and not (serious) play. Perhaps it is the ever increasing measures of confinement. Normally when I’m using someone’s services, we have a general acceptance of What we are discussing, I know Why I want or need whatever service it is, but I’m not telling the service provider How they must go about their business.

Take travelling, for instance – I pay someone to transport me and my luggage to some specified destination, but I don’t tell them How they should do it, which route to take, or in any other way meddle with the service they are providing. A cab driver might ask which way to take, e.g. choosing between fast and cheap.

Conversely, for software projects there are notions of “must be the same as the previous”, “must be XML”, and “must have a response time below 100 ms” – while these are demands on How things must be done, they could very well – and in some cases have – impaired the end product. You end up with something as ugly as trying to use Java with twitter4j to store Tweets in MongoDB. Tweets come in as JSON objects, and MongoDB stores “rows” of JSON. On paper, JSON objects come in and should be filtered and piped to the database. But twitter4j reads the JSON and turns it into standard Java objects, which then have to be reconstructed into JSON using a convoluted builder. Adding Java makes the simple solution a lot more complicated.

But I digress.

If we on average are 68% successful, then the software around us is the result of those successes. I’m sorry, but I’m pretty sure that my definition of a success is not quite compatible with the apparent standard. Clunky interfaces and useless messages are one thing, but then there is not being able to see your online bank account around payday due to too much stress on the machines. Not being able to log in to a “secure” facility because Java is not the right version – Java has had so many security breaches that I’m puzzled why it is in use for “security”. For extra fun, try doing this using Firefox on a 64-bit Windows box – there is seemingly no end to the hiccups Firefox will have, and it will have to be killed by the process manager. Having to click on a message sent to you – notified by an e-mail telling you that you have a new message – redirecting to downloading the message as a PDF, then having to open the PDF in another viewer, while the interface maintains the notion that you have unread messages. Windows not being able to shut down because it cannot play the logoff sound – it’s apparently extremely important that this sound is played.

I am pretty sure we could do better than this – and as these bugs/annoyances have existed for a long time, I have to believe that they are in the 46% of successful software which doesn’t have to be remedied.

On the graph we have the normal distribution, the red shaded area is the 46%, the pink shaded area is the next 22% – the successes which have to be mended. In total, the entire shaded area is 68% – our success rate.

If the x-axis reflects the complexity of projects, then we’re struggling with the mediocre, which is quite puzzling as that would probably be the projects we get most of – and then we should have learned from the previous attempts, and thus be better at handling – but then developers aren’t the only part of the solution.

I am certain that with higher discipline on both sides of the table during project development we can consolidate 68% and even jump the next 22% to the first nine: 90% – it could be that 93% is a possibility within a few years, but let’s settle for the first milestone.

The next graph shows shaded areas for 68% (red), 90% (blue), and 93% (green).

Going to 90% means +1.3σ and 93% means +1.5σ, which is approximately one standard deviation above the 68% mark.
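These sigma values can be recomputed from the inverse CDF of the standard normal distribution; a quick check in Python:

```python
from statistics import NormalDist  # Python 3.8+

z = NormalDist().inv_cdf  # inverse CDF of the standard normal

z68 = z(0.68)  # ~0.47
z90 = z(0.90)  # ~1.28, rounding to the +1.3 quoted above
z93 = z(0.93)  # ~1.48, rounding to the +1.5 quoted above

# The 68% -> 90% step is about 0.8 standard deviations.
print(round(z68, 2), round(z90, 2), round(z93, 2), round(z90 - z68, 2))
```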

I have assembled these points on the accumulated frequency graph. 46% (purple), 68% (red), 90% (blue), 93% (green)

As can be seen, then the step from 46% to 68% is approximately the same as the 68% to 90% – these should be low hanging fruits, ripe for the picking.

If we can pull this off – and quite frankly we simply have to – it would mean a lot more successes, which should bring greater happiness within the working environment. It could also bring more work: as the investment becomes more secure, more companies might attempt new projects which at the current rates would be deemed too risky.

The graph shows the number of tries you have to make to ensure a probability of success above 90% for the three success rates: 1/2, 2/3, and 9/10 – currently we are at about the 1/2, if we count the successes in need of mending as not quite successes; 2/3 is approximately 68%. Thus, to probabilistically ensure at least a 98% success rate, our customers are expected to be willing to invest 2x, 4x, and 6x the estimated cost of a project, depending upon the quality we – as a business – on average can provide.
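The multipliers follow from the probability of at least one success in n independent tries, 1 − (1 − p)^n; a small sketch (the 98% threshold is the one quoted above):

```python
from math import ceil, log

def tries_needed(p, target=0.98):
    """Smallest n with 1 - (1 - p)**n >= target, i.e. enough
    independent attempts to make at least one success likely enough."""
    return ceil(log(1 - target) / log(1 - p))

# Success rates 9/10, 2/3, and 1/2 need 2, 4, and 6 tries respectively.
for p in (9/10, 2/3, 1/2):
    print(p, tries_needed(p))
```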

More positive work – it shouldn’t mean more death marches, nor longer hours – quite the opposite. More projects delivered to the satisfaction of the customers, on time, on budget, hopefully working better than hoped for.

If, on the other hand, we can’t pull this off by discipline, better contracts, and better cooperation for all involved, then we simply have to cut down on the complexity of the projects. I know that whether you fail or succeed, you still earn the money along the way – but that is not the way I want to live and work. Return customers – the happy ones – are amazingly better customers, and they are replenishable resources, simply by the fact that they return and thus are not depleted.

If we build services that our customers will benefit from, then there should be no other hindrances to mutual benefit.

While this seems like a no-brainer, a win-win situation, why aren’t we already there? What can we do?

Unfortunately I’m not sure, but to start somewhere, I think we can start with the software development process. I’ve been reading the Danish Quality Model (DDKM in Danish) for healthcare. They have done a splendid job in providing reasons Why things must be done, What the things should encompass, and Who is responsible – but not How; this is left for each hospital to decide. I really like that approach. The hospitals will require renewal of their certificates at least every 3 years. Furthermore, I read the Bonnerup report (in Danish), “Experiences from governmental IT projects – how to do it better?” ISBN:8790221567 from March 2001 – and the article (still in Danish) “Authors of the Bonnerup report 10 years later: None the wiser.”

Usually customers don’t quite know what they want until they see it – sometimes, though, they know exactly what they want but have a hard time telling developers what it is. Communication is essential, and short iterations of continuous improvement, i.e. Plan-Do-Check-Act with the customer at hand, seem essential. We know that getting everything right on the first try is almost impossible. I don’t think any golf player expects to play a perfect round.

The customer representative must be available, knowledgeable, and able to make decisions in all aspects. This will make the communication fluent, as opposed to a lot of back and forth, who said what, and similar issues. If there is a reason to change a color, the customer representative should be able to make the call, as opposed to having 5 meetings and a committee approve it.

Decision makers must be invested in the project. Outside consultants who will earn money regardless of the progress of the project should be discouraged as decision makers.

Teams – on both sides – should be as small as possible to improve collaboration, communication, and understanding of the aspects in the project.

Project management must improve – staffing, estimates, communication, user participation, and post-delivery follow-up. It is important to learn from previous and others’ mistakes. At times you feel part of a Monty Python sketch – sometimes the Architect Sketch, sometimes the “Meeting to take action” part of Life of Brian. Boehm and Turner discuss Alistair Cockburn’s notion of competence levels in the book “Balancing Agility and Discipline” ISBN:0321186125. A lawyer specializing in IT projects mentioned reading meeting minutes from failed projects stating “The project is delayed” as the sole content. Not why, not how long, not which actions are being taken to counter it. Later on the minutes would read “The project is greatly delayed.”

Developers must work on a single specifically defined atomic task. This makes it easier to describe why something is added to the solution. A task can touch upon several files, but doesn’t have to.

Developers must use version control. Each atomic task is committed to version control – preferably with the task description and Why it was added. This allows a nice readable history of the project, and an audit of the project at any point in time.

Developers must test their code. Not only to ensure correct behavior of the expected flow and of the negated behavior, but – and this is more important – to guard against future changes. Tests make it possible to freely try alternatives, and if you are stress testing, they make it possible to evaluate different configurations.

Have a vision, set goals. It is important for everyone to know which direction the project is heading in, and to know some of the milestones along the way. Accept that even the best laid out plans will fail – they do in sports, so why should we expect anything else from corporate business?

Third-party competent people should audit the source code according to these rules and the project description, making it possible to give an adequate description of the project’s health – much like an accountant should be able to read a company’s ledger and estimate the financial soundness of the company. Third party because we need independent observers to be objective about the project.

If we cannot improve, then it must be imposed that the Minimum Viable Product becomes the Maximum accepted proposal for future projects. 

Should we strive for an accrediting institution certifying software companies on a yearly basis? I really hope not.



TechRepublic had a blog entry, IT projects: Why you need to fail more often, and perhaps this is actually one of the reasons why we don’t do any better: we keep on beating a dead horse as opposed to cutting the losses early and learning from mistakes made, both our own and those of our colleagues in the business.

I know, it is hard to keep a business running if you terminate projects early due to infeasibility. But no matter how far you go down a wrong road, going further or faster will not get you back on track.

Why Why is more important than What

april 25th, 2013

When trying to understand a new concept the important thing to understand is not what the concept is, but why it exists. Thereby getting to the essence of the thing in itself.

This is probably why the 5 Whys is an important tool for root cause analysis and incident investigation, albeit it doesn’t fit all purposes. But as a sequence of drilling down to the core of an issue, it is probably one of the better methods of examining unknown processes.

As in the story about the newlywed couple. One evening, the husband noticed that when his wife began to prepare a roast beef for dinner, she cut off both ends of the meat before placing it in the roasting pan. He asked her why she did that. “I don’t know,” she said. “That’s the way my mother always did it.” The next time they went to the home of the wife’s parents, he told his mother-in-law about the roast beef and asked her why she cut off the ends of the meat. “Well, that’s the way my mother always did it” was her reply.

He decided that he had to get to the bottom of this mystery. So when he went with his wife to visit her grandparents, he talked to his grandmother-in-law. He said, “Your daughter and granddaughter both cut off the ends of the meat when they fix roast beef and they say, ‘That’s the way my mother always did it.’ How about you? Why do you cut the meat in this way?” Without hesitation the grandmother replied, “Oh, that’s because my roaster was too small and the only way I could get the meat to fit in it was to cut off the ends.” (I’ve heard it before, but the only text I could find was from The Everlasting Tradition on Google Books)

If you don’t know the root cause you may end up doing unnecessary work at best, but most likely limiting, and in worst case counterproductive and wasteful work.

Don’t ask people what they want or do, but why they want or do it. It’s just as Henry Ford said: “If I had asked people what they wanted, they would have said faster horses.” They would have asked for faster horses because horses were something they knew about, and faster or stronger would make transportation better.

In the same vein, it is just as important to learn the reason behind things when embarking on a new project with unknown entities – in particular when starting a new software project, and especially for project managers on both sides of the table. You need to know what to deliver to be able to deliver it in the first place; you can’t tell a developer what you need if you don’t know what it is, and you cannot accept or test the thing if you don’t know how it should behave.

If a feature has to be cut it is paramount that you can argue why that doesn’t impair the end product too much.

If a feature can be implemented in multiple ways, then the simpler one should be opted for. If you don’t know the essence of the feature, you don’t know the feasible ways, and you may choose a too simple solution – these are the solutions which seem to almost work.

Going back to Ford’s quote, it is important that you know what to abstract and how to abstract it, e.g. “faster horses” to “faster means of transportation” and not “faster animals” – that would lead to trying to hitch a cheetah or a bear to a buggy.

As the character Forrest Gump is accustomed to say: “Stupid is as stupid does.” – if we don’t know better, then we do stupid things. If you know why you do things, you may have a chance not to act stupid.

When knowing why as opposed to just what, then you are closer to the Ha step of Shu Ha Ri, because you already know the mechanics, and you are armed with the path. You may not know which quantum leaps you have to make to diverge to another stable level, but at least you know whether a path is perpendicular to the current flow or perhaps an ever so slightly diverging path.

On a much more pragmatic level, it is better to know why a certain color or method was chosen, especially when the time to change it comes around. Which is why the “why” is a much better comment for source code than the “what” – the latter should be evident from the code itself. And if you have a complete memory of the history of changes, you can check whether you are going in circles.