Nowadays we have safety nets everywhere. We are scared; we live in a world of scaremongering. Many people are even afraid to go outside. They just sit in front of their computers, afraid of failure. What if I'm not seen in the best light? What if my application fails? What if someone sees my code isn't perfect?
But the problem is not failures, errors and epic fails - the problem is our inability, or unwillingness, to accept that they are here to stay. It's about how we respond to failures, not about how we make sure there are none. It's about us accepting that failures are inevitable, then spending time writing tests for every possible scenario so we have excuses.
Let's be brave - hope for the best and be prepared for the worst. Roll up our sleeves, take a leap of faith and change that bit of "untested" code. If there are problems, let's fix them. Have a rollback solution at hand so you can revert if your code change breaks things.
TDD is an epic failure. And rightly so. It instils a false sense of security through passing tests and creates a culture of excuses through so-called "test coverage" and "code metrics" - and we all know how bosses love metrics, don't they?
Let me repeat - TDD is not a cure. You can find bugs in every piece of software. TDD was meant to eradicate bugs, but the bugs are still around. TDD actually makes it harder to tackle bugs, because all those passing tests lead developers to believe that the bugs are caused by infrastructure, 3rd party code and so on. You can always come back and say "but all my tests are passing, so it's not my problem".
And let's be clear here. Tests are code. The same kind of code we are trying to test. So why would your code be buggy while your tests are always right? And what does 100% test coverage even mean? Is the code really tested to work correctly (whatever that means) with all possible input parameters and changing conditions? To give you an example, have a look at this Java code:
public class Adder {
    static int addOne(int input) {
        return input + 1;
    }
}
public class AdderTest {
    @Test
    public void addOneWorksWithNumber1() {
        int result = Adder.addOne(1);
        assertEquals(2, result);
    }

    @Test
    public void addOneWorksWithNumber0() {
        int result = Adder.addOne(0);
        assertEquals(1, result);
    }
}
Is this code thoroughly tested? It appears so. Test coverage is 100% and the tests are passing, but what value do the tests give you? To answer that, let's add this test:
@Test
public void addOneWorksWithVeryLargeNumber() {
    int result = Adder.addOne(Integer.MAX_VALUE);
    assertEquals(2147483648L, result);
}
It's not passing! Integer.MAX_VALUE + 1 silently wraps around to Integer.MIN_VALUE, because Java's int arithmetic overflows without warning. And you could probably test other edge cases. In short, we test ONE line of code, we've already written 30 lines of code to test it, and we're still not 100% sure the code does what we expect it to do. So the tests are just a smoke screen. The code can still fail: it's not tested for null values, for multithreading, for runtime exceptions and so on. You could spend years writing tests and the code could still be buggy. Remember one thing - only the code you don't write can't fail. The less code you write, the better.
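As an aside: if overflow is a genuine concern, it can at least be made to fail loudly instead of wrapping silently. A minimal sketch, assuming Java 8+ and using Math.addExact (the class name SafeAdder is made up for illustration):

```java
public class SafeAdder {
    // Math.addExact throws ArithmeticException on int overflow
    // instead of silently wrapping around to Integer.MIN_VALUE.
    static int addOne(int input) {
        return Math.addExact(input, 1);
    }

    public static void main(String[] args) {
        System.out.println(addOne(1)); // prints 2
        try {
            addOne(Integer.MAX_VALUE);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

Of course, this just trades a wrong answer for a runtime exception - it doesn't make the code "correct", it only makes the failure visible.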
Another aspect of TDD is the rigidness of the solution. Any code change - and change is the only thing that doesn't change - usually results in either retiring some tests or, worse, amending them. So developers resist changes all the more. They know they will have to change not only the code, but also God knows how many tests directly or indirectly related to the code they're about to change.
TDD also breeds a new blame culture. You commit and now the tests are broken. And your name is dragged through the mud as if you were worse than Usama Bin Laden. Some organizations go to such lengths as to punish (or promote) developers according to build failures. So it pays off to sit in the corner and touch nothing.
So instead of fighting this futile battle, let's just accept that we can't win by simply planning our defences. This is akin to playing a game of chess by laying out the best possible defensive position. No matter what, you can't win that way. You have to go out to the battle and prove you can win.
I mean, don't try to avoid the battle. Do not try to defer the battle by writing super-defensive code. It only slows you down and distracts you from the real target. Focus on writing maintainable code, focus on quick release cycles (ideally only minutes should pass between committing your bug fix and that fix being released to the test and prod environments), focus on clean code, use defensive programming for mission-critical modules and so on. Because what really matters at the end of the day isn't bug-free code but your ability to fix bugs quickly, for only one thing is sure in this world - your code is buggy!
To avoid any confusion - I'm not against TDD per se (and definitely not against testing in general). It makes sense to have tests making sure the core parts of your functionality (usually anything related to financial calculations) are covered. It also makes sense to run automated sanity tests after your CI build.
Addendum: This has nothing to do with test frameworks. They are actually quite good when it comes to ad hoc testing and verifying some functionality both in isolation (unit) and as part of a bigger system (integration).
It also makes sense to have plenty of tests if you're lucky enough to work on a framework, library or something else intended to be highly reusable.