Wednesday, July 15, 2015

Why testing is good and TDD evil

In our lives nowadays we have safety nets everywhere. We are scared, and we live in a world of scaremongering. Many people are even scared to go outside. They just sit in front of their computers, afraid of failure. What if I'm not seen in the best light? What if my application fails? What if someone sees my code's not perfect?

But the problem is not failures, errors and epic fails - the problem is our inability or unwillingness to accept that they're here to stay. It's about how we respond to failures, not about how we make sure there are none. Instead of accepting that failures are inevitable, we spend our time writing tests for every possible scenario so we have excuses.

Let's be brave - hope for the best and be prepared for the worst. Roll up our sleeves, take a leap of faith and change that bit of "untested" code. If there are problems, let's fix them. Have a rollback solution at hand so you can revert if your code change breaks things.

TDD is an epic failure. And rightly so. It instils a false sense of security through passing tests and creates a culture of excuses through so-called "test coverage" and "code metrics" - and we all know how bosses like metrics, don't they?

Let me repeat: TDD is not a cure. You can find bugs in every piece of software. TDD was meant to eradicate bugs, but the bugs are still around. TDD actually makes it harder to tackle bugs, because all your passing tests lead developers to believe that the bugs are caused by infrastructure, 3rd party code and so on. You can always come around and say "but all my tests are passing, so it's not my problem".

And let's be clear here. Tests are code. The same kind of code we're trying to test. So why would your code be buggy while your tests are always right? And what does 100% test coverage even mean? Is the code really tested to work correctly (whatever that means) with all possible input parameters and changing conditions? To give you an example, have a look at this Java code:

public class Adder {
    static int addOne(int input) {
        return input + 1;
    }
}

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AdderTest {
    @Test
    public void addOneWorksWithNumber1() {
        int result = Adder.addOne(1);
        assertEquals(2, result);
    }

    @Test
    public void addOneWorksWithNumber0() {
        int result = Adder.addOne(0);
        assertEquals(1, result);
    }
}

Is this code thoroughly tested? It appears so. Test coverage is 100% and the tests are passing - but what value do they give you? To answer that, let's add this test:

@Test
public void addOneWorksWithVeryLargeNumber() {
    int result = Adder.addOne(Integer.MAX_VALUE);
    assertEquals(2147483648L, result);
}

It's not passing! And you could probably test other edge cases too. In short: we test ONE line of code, we've already written around 30 lines of code to test it, and we're still not 100% sure the code does what we expect it to do. So the tests are just a smoke screen. The code can still fail - it's not tested for null values, for multithreading, for runtime exceptions and so on. You could spend years writing tests and the code could still be buggy. Remember one thing - only the code you don't write can't fail. The less code you write, the better.
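If silent wrap-around is the worry, you can make the overflow explicit instead of trying to test around it. A minimal sketch (SafeAdder is an invented name, not part of the example above) using Math.addExact, which throws instead of wrapping:

```java
// Sketch only: an addOne variant that refuses to overflow silently.
public class SafeAdder {
    static int addOne(int input) {
        // Math.addExact throws ArithmeticException on int overflow
        // instead of wrapping around to Integer.MIN_VALUE.
        return Math.addExact(input, 1);
    }
}
```

With this variant an overflowing input fails loudly at runtime rather than quietly returning a wrong result - one less edge case a test has to hunt for.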

Another aspect of TDD is the rigidness of the solution. Any code change - and changes are the only thing that doesn't change - usually results in either retiring some tests or, worse, amending them. So developers resist changes more. They know they will have to change not only the code, but also God knows how many tests directly or indirectly related to the code they're about to change.

TDD also leads to a new blame culture. You commit and now the tests are broken. And your name is dragged around as if you were worse than Usama Bin Laden. Some organizations go to such lengths as to punish (or promote) developers according to build failures. So it pays off to sit in the corner and touch nothing.

So instead of fighting this futile battle, let's just accept we can't win here by simply planning our defences. This is akin to playing a game of chess by only laying out the best possible defensive scenario. No matter what, you can't win this way. You have to go out to battle and prove you can win.

I mean, don't try to avoid the battle. Do not try to defer it by writing super defensive code. It only slows you down and distracts you from the real target. Focus on writing maintainable code, focus on quick release cycles (ideally only minutes between committing your bug fix and that fix being released to the test and prod environments), focus on clean code, use defensive programming for mission critical modules and so on. Because what really matters at the end of the day isn't bug-free code but your ability to quickly fix bugs, for only one thing is sure in this world - your code is buggy!
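To be concrete about the kind of defensive code worth reserving for mission critical modules, here's a sketch - PaymentProcessor, its parameters and its rules are invented for illustration, not taken from any real system:

```java
// Illustrative only: fail fast with clear errors instead of letting bad
// state propagate. Save this level of checking for the critical modules.
public class PaymentProcessor {
    static long debit(long balanceCents, long amountCents) {
        if (amountCents <= 0) {
            throw new IllegalArgumentException("amount must be positive: " + amountCents);
        }
        if (amountCents > balanceCents) {
            throw new IllegalStateException("insufficient funds");
        }
        return balanceCents - amountCents;
    }
}
```

The point is a few cheap guard clauses where the stakes are high, not wrapping every getter in checks.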

To avoid any confusion - I'm not against TDD per se (and definitely not against testing in general). It makes sense to have tests making sure the core parts of your functionality (usually something related to financial calculations) are covered. It also makes sense to run automated sanity testing after your CI build.

Addendum: This has nothing to do with test frameworks. They are actually quite good when it comes to ad hoc testing and verifying some functionality both in isolation (unit) and as part of a bigger system (integration).

It also makes sense to have plenty of tests if you're lucky enough to work on a framework, library or something else intended to be highly reusable.

Friday, May 29, 2015

Of corporate software development cycle

This piece is based on my experience working for big corporations but, somewhat surprisingly, it applies to smaller establishments too.

Now, for a while, imagine a company. This reputable company with an established business model has a portfolio of applications to support that business: from off-the-shelf Microsoft Exchange servers and file sharing, to customized boxed products, to bespoke applications. Now this company has a system, let's call it system A, that supports the core of the company's business. Be it selling stuff, taking orders or managing customers. System A is a pretty hairy one - it has been around for a while. As the company's employees come and go, the company finds itself in a situation where the core developers (and subject matter experts) have left, but the company still needs to maintain the system.

First, they try to understand the system. But it's hard. Many of these systems are a) quite complex, b) poorly (if at all) documented, and c) written a long time ago, hence using languages and frameworks no longer in use. All in all, the most sensible approach seems to be to rebuild the system from the ground up.

To do so, the company hires managers. First, the managers come up with a business case. Sometimes they skip this step, but that's unlikely. Once the business case is approved, they come up with things like governance, procurement et cetera. Obviously they also introduce planning and estimation. In most cases, no developer is involved. And all the reported indicators show green.

One of the first steps in the process of superseding system A with system B is to call in a consultancy. The consultancy quickly rubber-stamps the need to replace system A, highlighting its many shortcomings. Based on their findings, a PowerPoint presentation is made and shown to the whole company (department) that clearly demonstrates the incompetence and ill-thinking of the authors of system A. Many times the presentation contains analogies to old cars, houses, broken tools - you name it - so it's absolutely clear that system A is beyond repair and the authors should be punished. After the system has been successfully humiliated, it's time to build the new system B.

It all starts with requirement gathering. It's complicated. SMEs (subject matter experts) are hard to get hold of, and if you can get hold of them they sometimes don't remember everything, sometimes even contradicting each other. But you do your best to figure out what needs to be done. After this painful exercise you move on to the development phase. All indicators are still green and everything goes according to plan.

You usually pick the best developers with some experience with system A. Any work on system A is put on hold in anticipation of system B. System B will be coming soon, so it makes no sense to improve system A or dilute the effort. Three months down the line the indicators go to amber. No matter the technology or the experience of your developers, there are always challenges, changes, unexpected complications. Your system B is much more complicated, requiring new or barely tested technologies. To add insult to injury, users of system A are really nervous and want their new ideas implemented into it. It's getting more and more difficult to explain the benefits of system B and why it's worthwhile to wait for it. So finally some developers are hired or diverted to support system A. That further slows down the development of system B. First, you've lost some hands; second, you've made the development of system B a moving target. As new features are added to system A, you need to add them to system B as well.

At this stage many managers get nervous and start to make compromises. No more TDD, no more free weekends, no more this, no more that. That's the disillusionment stage. The hardest one. Also the breaking point. Decisions made during this stage shape the delivery or non-delivery of system B. It's up to the product managers, the organization, the processes and other factors whether system B is going to be delivered, or shelved and everyone who participated blamed.

There are a few things you should keep in mind when replacing an older core company system.

1) Even during the reimplementation phase it's the system you're replacing that pays the bills - so be considerate.
2) Never stop working on the system you're replacing. The replacement system may never actually replace the original one, and you don't want to lose out.
3) Appreciate that the replacement system will lack all the features and complexities the original system has.
4) Avoid any big bang releases. Shout when you hear "System B goes live on Saturday and we decommission system A the day after". That smells.
5) It usually helps if you replace one aspect of the legacy system at a time. Just to prove the system can be replaced in its entirety.
6) Give users a quick demo every now and then as you're building the replacement system. It keeps the users in the loop and engaged. You're going to need the users' support to get you through the teething-problems phase.
7) Make sure there are sound reasons for replacing the legacy system and all other options have been exhausted. "Java is too verbose", "it doesn't support HTML5", "it's not an SPA" hardly count as sound reasons.
8) Before starting to implement the new system, make sure it's the users who actually want it. Make sure it's them who push the IT team to write it.
9) Virtually all medium to large corporations have their own bespoke or heavily customized systems. Be very suspicious when the rain sellers start to appear, selling you the off-the-shelf solution that "needs to be tweaked a bit" and will do exactly what you need.
10) I'm sure there are thousands and thousands more things you should be aware of.

Wednesday, April 29, 2015

My problem with conferences

We all know the numbers - 10% of developers are active in the community whilst the other 90% are the programming dark matter. Having two sides of the barricade is quite all right - actually we all need each other as much as we need the audience - the users of our products.

Now, of the 10% of developers, you have the rock stars - the talk givers. It is perceived as the highest honour to go to a conference and give a talk. As a talk giver you waltz into a room full of the usual business developers (meaning configurators or integrators) and start talking about stuff they've never worked with. Say the talk giver starts explaining, in great detail, the technicalities of tail recursion. Or the goodness of the new shiny feature that allows you to curry functions from the right (i.e. right currying). And the audience is in awe. They can't believe their eyes - all these features. We have to use them. NOW!

Then they return to their cubicles. They launch their IDEs (see, they use IDEs, unlike the talk givers, who are happy to use 6 ... VI) and try to use tail recursion. But they can't. Why? Because the problem domain they're in has no use for tail recursion. They rarely use any recursion, and if they do, the recursion is so complicated that you simply can't make it tail recursive.
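For the curious, here's roughly what tail recursion looks like in Java - and why it buys you little there, since the JVM performs no tail-call elimination anyway (the names below are illustrative):

```java
// A tail-recursive sum: the recursive call is the last action, carrying an
// accumulator. On the JVM this still burns a stack frame per call, so the
// business developer ends up rewriting it as the plain loop below anyway.
public class TailDemo {
    static long sumTo(long n, long acc) {
        if (n == 0) return acc;
        return sumTo(n - 1, acc + n);
    }

    static long sumToLoop(long n) {
        long acc = 0;
        while (n > 0) {
            acc += n;
            n--;
        }
        return acc;
    }
}
```

Both compute the same sum; the conference version just overflows the stack sooner.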

And that's the problem with the rock star talk givers. They want to impress. They don't care about your daily chores - they're rock stars. They are rock stars even when their presentation is a shitty PowerPoint with a stolen background, they're rock stars when they can't really explain what the brilliant feature is good for, they're rock stars even if their personal web page (if they have one) is a poorly knitted HTML4 full of marquee tags, and they're rock stars even if they have only 5 years of experience under their belts and their GitHub is full of presentations but no code at all.

So when you go to a conference, choose wisely - unlike the talk givers, you don't get paid for attending; you actually have to pay the admission and find some spare time. Pick the talks you want to go to based on what you can really use in your daily life, not the buzzwords. Don't be afraid to leave if a talk is of no use to you. Give the organizers feedback and vote with your feet.

Or, if you've been to many conferences, try unconferences. Go to meetup groups. Smaller venues, direct contact with other developers.

Disclaimer: not all conferences feature rock star talk givers. And some people are happy to listen to this sweet talk just to have something to dream about.