Tuesday, March 08, 2016

What?! Pre-interview code challenge?!

It doesn't matter how many years of experience you have, how many coding challenges you've solved in your spare time or how many open source contributions you've made - companies still want you to complete their tests or coding challenges. We can debate it, but there's very little we can do about it.

But then I bumped into a new beast - pre-interview tests. You write a coding challenge in order to secure an interview. In other words, don't bother sending in your resume unless you're willing to spend several hours writing a program. And in this instance (I'm not going to disclose the company's name) they ask you to make it public on GitHub. Great idea ... if you want to analyze other people's approaches :-) And that's what I did.

After analyzing several repositories I found a few patterns. Their common denominator is:

People don't really read requirements.

And here's why.

  • The code should be written in Java, but you can see people trying to outsmart the process by providing a Python, Go or Haskell version. It may be fine as a supplemental implementation to impress the recruiter, but not on its own.
  • None of the implementations I checked came with any tests at all! Unit, integration, anything. That's a very bad sign. Although I personally don't advocate TDD, I still think that tests are important, especially when it comes to coding some system logic. They also speed up the development cycle, as you don't need to build and run your application just to check that the code works.
  • Although people were asked to structure their application as if it were a production-grade application, some still put everything in one class.
  • Missing documentation. At least javadoc; better yet, a standalone handcrafted document outlining how the application works.
  • Build instructions. People rarely bother to write quick instructions on how to build the application. Although it may be clear that gradle/maven is being used, it's still nice to provide this in writing.
  • Some assume that a CSV document can be created with a simple List.join(",").
  • Some even assume that parsing JSON means splitting a string by ,.
  • People ignore application return values. They just spit out an error message and bail out (so the exit code is 0, indicating successful completion).
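
To illustrate the CSV and exit-code points above, here's a minimal Java sketch (class and method names are mine, not from any candidate's repository). Naively joining fields breaks as soon as a field contains a comma or a quote; RFC 4180-style quoting handles it. And when something goes wrong, exit with a nonzero status instead of printing an error and returning 0:

```java
import java.util.List;

public class CsvWriter {

    // Quote a field when it contains a comma, quote or newline (RFC 4180 style).
    static String escape(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    static String toCsvRow(List<String> fields) {
        StringBuilder row = new StringBuilder();
        for (int i = 0; i < fields.size(); i++) {
            if (i > 0) row.append(',');
            row.append(escape(fields.get(i)));
        }
        return row.toString();
    }

    public static void main(String[] args) {
        List<String> row = List.of("Smith, John", "said \"hi\"", "42");
        System.out.println(String.join(",", row)); // naive join: "Smith, John" splits into two columns
        System.out.println(toCsvRow(row));         // quoted: parses back as three columns
        // And on bad input, a real tool should exit nonzero instead of
        // printing an error and still returning 0:
        //   System.err.println("could not read input");
        //   System.exit(1);
    }
}
```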

Advice to people applying to those companies: it's unlikely you're the first one applying. Search GitHub, Bitbucket, GitLab and the like. That can give you an idea of your competition. It could also help you out with your code should you get stuck.

And a closing message to companies trying to make things easier for themselves by asking people to write code before applying. Asking people to code first massively limits the pool of applicants. There are thousands of great developers who don't want to spend hours coding just to wait and see if you come back to them. I'd also say that it attracts the wrong kind of people:

  1. people who like to solve challenges and have the time to do so but are not really into a long-term relationship
  2. people with little experience hoping to skip the line
  3. cheaters who smell a chance of landing a job

I still think it is much better and more efficient to do a quick screening (via Skype or the like) and ask a candidate to join you for a day (preferably paying the person for working with you that day).

Friday, February 12, 2016

A farewell to Node.js ... for now

It's tough. Choices have to be made and decisions taken.

After spending years experimenting (and even professionally working) with Node.js I think it's time to say farewell and thanks for all the fish.

Why?

There are several reasons, but the main ones for me are:

Asynchronous IO is tough. Really tough. Callbacks, promises, async/await - all that stuff. Sure, async/await will make this easier, but under the bonnet it will still be async. I think the cause of this is that we think sequentially - do this, do that, do something else. Not do this for user1, do this for user2, do this for user3, do that for user1, do that for user2 ... The fact that we serve several users at a time shouldn't have to be spelled out to us all the time.
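
The two mental models can be contrasted even outside JavaScript. A trivial Java sketch (the step names are made up for illustration): the blocking version reads top to bottom the way we think, while the async version expresses the same two steps as a callback chain the runtime can interleave across users.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {

    // The sequential version we naturally think in: do this, then that.
    static String greetBlocking(String user) {
        String loaded = load(user);   // step 1
        return decorate(loaded);      // step 2
    }

    // The async version: the same two steps, but written as a chain of
    // callbacks so the runtime can interleave work for many users.
    static CompletableFuture<String> greetAsync(String user) {
        return CompletableFuture.supplyAsync(() -> load(user))
                .thenApply(AsyncSketch::decorate);
    }

    static String load(String user) { return "hello " + user; }
    static String decorate(String s) { return s + "!"; }

    public static void main(String[] args) {
        System.out.println(greetBlocking("user1"));
        System.out.println(greetAsync("user2").join());
    }
}
```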

Debugging and testing are a pain with callbacks. A callback is invoked later, on another tick of the event loop, so you lose continuity when your function resumes - it's hard to track down where you'd been before the callback got called. And simulating callbacks in unit tests doesn't help either.

Dependency management. NPM everywhere. Node.js doesn't have its own packaging or module system and relies on NPM to do the hard work. The problem with NPM is that if package A depends on package B, and B depends on package C, it's all nested and not reused. That means you can end up with hundreds of copies of package C in your project. I still remember my Ghost blog project. It downloaded 500MB of NPM modules just to run!

Spartan core libraries. Node has a very limited number of standard functions. It means that even for simple tasks like working with dates you have to turn to libraries like moment.js. Most of the stuff you can find in the standard libraries of other languages needs to be added as a dependency in Node. So you're always facing decisions like `lib1 has these features but lacks some others`, and you mix and match all the time to get basic stuff done.

No opinion. We all know that today's patterns and best practices are tomorrow's anti-patterns. But this cycle is massively accelerated in the Node.js world. Grunt, Gulp, NPM. This is rapidly changing and there's no sign of consolidation. It's hard to operate in a world where tools are discontinued on a daily basis (with no replacement). Node needs something that would define and guarantee the basics, so you can focus on programming and not worry whether your code will be buildable in a year's time.

No explicit types. We can probably debate this for ages, but as there's no way to enforce types (at least optionally) it's very hard to define contracts between libraries and their users. There's nothing that would warn you that a return value can't be assigned to your variable, or that calling a function isn't possible because of incompatible types. That puts extra pressure on library authors to document their code well so the types are described, somehow.
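
For contrast, here's what an explicit contract looks like in a typed language - a trivial Java sketch (the names are made up): the compiler, not the documentation, guarantees what the function returns.

```java
import java.util.List;

public class TypedContract {

    // The return type is part of the contract: callers know they get a
    // List<String>, not "whatever the docs happen to say".
    static List<String> usernames() {
        return List.of("alice", "bob");
    }

    public static void main(String[] args) {
        List<String> names = usernames(); // checked at compile time
        // int n = usernames();           // would not compile: incompatible types
        System.out.println(names);
    }
}
```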

To sum it up: on the front end I'm left with no choice - it has to be JavaScript or transpiled JavaScript. For the backend, though, I'll stick to something else.

Wednesday, February 10, 2016

Striking the right balance

We all know that software can be really complex. Software solves complex problems and needs to be complex. Period. But when it comes to our toolkit we should always prefer simple, streamlined solutions.

Software development can sometimes morph into some kind of competition. Some developers, and sometimes whole development teams, can turn into macho-style superheroes producing pretty complex code just to show the world how good they are. You want an example? Look no further than AngularJS. That's what happens when smart guys over-engineer their solution.

Dependency injection for a functional language? Services, providers, factories? Parsing function bodies to figure out parameters? You get all this and more. The question is why? Functional languages always prefer explicitness over implicitness. The functional approach tries to eliminate auto-magicalness. And why have so many service types? Who needs magic string constants to describe what a directive can be applied to? Yes, we know the guys behind Angular are smart, but do we need to worship them every day?

Again the same story. All you get is something to learn. And once you've learnt it, the guys release version two with zero backward compatibility. And you start over again.

On the other end you have things like Hazelcast. You don't need a diploma to use it. You plug it in as a library or use it as a stand-alone application and it just works. Don't be fooled - Hazelcast is far more complex than Angular, both in terms of code and the problem area it covers. But the guys there don't try to awe you. And that's the trick. That's also why React is so popular.

So the rule is - don't try to convince your users that you're a smart guy by writing complex, difficult libraries. Convince them (if you really need to convince anyone) with simplicity.

One rule of simplicity is to reuse whatever is out there, or at least allow those existing tools/libraries/utilities to be used with whatever you're coding.

Don't be revolutionary - be evolutionary.

Wednesday, July 15, 2015

Why testing is good and TDD evil

In our lives nowadays we have safety nets everywhere. We are scared and live in a world of scaremongering. Many people are even scared to go outside. They just sit in front of their computers afraid of failure. What if I'm not seen in the best light? What if my application fails? What if someone sees my code's not perfect?

But the problem is not failures, errors and epic fails - the problem is our inability or unwillingness to accept that they're here to stay. It's about how we respond to failures, not about how we make sure there are none. Instead of accepting that failures are inevitable, we spend our time writing tests for every possible scenario so we have excuses.

Let's be brave - hope for the best and be prepared for the worst. Roll up our sleeves, take a leap of faith and change that bit of "untested" code. If there are problems, let's fix them. Have a rollback solution at hand so you can revert if your code change breaks things.

TDD is an epic failure. And rightly so. It instils a false sense of security through passing tests and creates a culture of excuses through so-called "test coverage" and "code metrics" - and we all know how bosses like metrics, don't they?

Let me repeat: TDD is not a cure. You can find bugs in every piece of software. TDD was meant to eradicate bugs, but the bugs are still around. TDD actually makes it harder to tackle bugs, because all your passing tests lead developers to believe that the bugs are actually caused by infrastructure, 3rd party code and so on. You can always come around and say "but all my tests are passing so it's not my problem".

And let's be clear here. Tests are code. The same kind of code we're trying to test. So why would your code be buggy while your tests are always right? And what does 100% test coverage mean? Is the code really tested to work correctly (whatever that means) with all possible input parameters and changing conditions? To give you an example, have a look at this Java code:

public class Adder {
    static int addOne(int input) {
        return input + 1;
    }
}

public class AdderTest {
    @Test
    public void addOneWorksWithNumber1() {
        int result = Adder.addOne(1);
        assertEquals(2, result);
    }

    @Test
    public void addOneWorksWithNumber0() {
        int result = Adder.addOne(0);
        assertEquals(1, result);
    }
}

Is this code thoroughly tested? It appears so. Test coverage is 100%, tests are passing - but what value do the tests give you? To answer that, let's add this test:

@Test
public void addOneWorksWithVeryLargeNumber() {
    int result = Adder.addOne(Integer.MAX_VALUE);
    assertEquals(2147483648L, result);
}

It's not passing - the int overflows! And you could probably test other edge cases. In short, we tested ONE line of code, we've already written 30 lines of code to test it, and we're still not 100% sure the code does what we expect it to do. So the tests are just a smoke screen. The code can still fail - it's not tested for null values, for multithreading, for runtime exceptions and so on. You could spend years writing tests and the code could still be buggy. Remember one thing - only the code you don't write can't fail. The less code you write, the better.

Another aspect of TDD is the rigidness of the solution. Any code change - and changes are the only thing that doesn't change - usually results in either retiring some tests or, worse, amending them. So developers resist changes more. They know they will have to change not only the code, but also God knows how many tests directly or indirectly related to the code they're about to change.

TDD also leads to a new blame culture. You commit and now the tests are broken. And your name is dragged around as if you were worse than Osama bin Laden. Some organizations go to such lengths as to punish (or promote) developers according to build failures. So it pays off to sit in the corner and touch nothing.

So instead of fighting this futile battle, let's just accept we can't win here by simply planning our defenses. This is akin to playing a game of chess by simply laying out the best possible defensive scenario. No matter what, you can't win this way. You have to go out to battle and prove you can win.

I mean, don't try to avoid the battle. Do not try to defer the battle by writing super-defensive code. It only slows you down and distracts you from the real target. Focus on writing maintainable code, focus on quick release cycles (ideally only minutes between committing your bug fix and that fix being released to the test and prod environments), focus on clean code, use defensive programming for mission-critical modules and so on. Because what really matters at the end of the day isn't bug-free code but your ability to quickly fix bugs, for only one thing is sure in this world - your code is buggy!

To avoid any confusion - I'm not against TDD per se (and definitely not against testing in general). It makes sense to have tests making sure the core parts of your functionality (usually something related to financial calculations) are covered. It also makes sense to run automated sanity testing after your CI build.

Addendum: This has nothing to do with test frameworks. They are actually quite good when it comes to ad hoc testing and verifying some functionality both in isolation (unit) and as part of a bigger system (integration).

It also makes sense to have plenty of tests if you're lucky enough to work on a framework, library or something else which is intended to be highly reusable.

Friday, May 29, 2015

Of corporate software development cycle

This piece is based on my experience working for big corporations, but somehow, surprisingly, it applies to smaller establishments too.

Now for a while imagine a company. This reputable company with an established business model has a portfolio of applications to support that business: from off-the-shelf Microsoft Exchange servers and file sharing, to customized boxed products, to bespoke applications. Now this company has a system, let's call it system A, that supports the core of the company's business - be it selling stuff, taking orders or managing customers. System A is a pretty hairy one - it has been around for a while. As the company's employees come and go, the company finds itself in a situation where the core developers (and subject matter experts) have left, but the company still needs to maintain the system.

First, they try to understand the system. But it's hard. Lots of these systems are a) quite complex, b) poorly (if at all) documented, c) written a long time ago, hence using languages and frameworks no longer in use. All in all, the most sensible approach seems to be to rebuild the system from the ground up.

To do so, the company hires managers. First, the managers come up with a business case. Sometimes they skip this step, but that's unlikely. Once the business case is approved, they come up with things like governance, procurement et cetera. Obviously they also introduce planning and estimation. In most cases, no developer is involved. And all the reported indicators show green. One of the first steps in the process of superseding system A with system B is to call in a consultancy company. This company quickly rubber-stamps the need for replacing system A, highlighting its many shortcomings. Based on their findings a PowerPoint presentation is made and shown to the whole company (department), clearly demonstrating the incompetence and ill thinking of the authors of system A. Many times the presentation contains analogies to old cars, houses, broken tools - you name it - so it's absolutely clear that system A is beyond repair and the authors should be punished. After the system has been successfully humiliated, it's time to build the new system B.

It all starts with requirements gathering. It's complicated. SMEs (subject matter experts) are hard to get hold of, and if you can get hold of them they sometimes don't remember everything, sometimes even contradicting each other. But you do your best to figure out what needs to be done. After this painful exercise you move on to the development phase. All indicators are still green and everything goes according to plan.

You usually pick the best developers, with some experience with system A. Any work on system A is put on hold in anticipation of system B. System B will be coming soon, so it makes no sense to improve system A or dilute the effort. Three months down the line the indicators go to amber. No matter the technology or the experience of your developers, there are always challenges, changes, unexpected complications. Your system B is much more complicated, requiring new or barely tested technologies. To add insult to injury, users of system A are getting really nervous and want their new ideas implemented into it. It's getting more and more difficult to explain the benefits of system B and why it is worthwhile to wait for it. So finally some developers are hired/diverted to support system A. That further slows down progress on system B. First, you lost some hands; second, you made the development of system B a moving target. As new features are added to system A, you need to add them to system B as well.

At this stage many managers get nervous and start to go for compromises. No more TDD, no more free weekends, no more this, no more that. That's the disillusion stage. The hardest one. Also the breaking point. Decisions made during this stage shape the delivery or non-delivery of system B. It's up to the product managers, the organization, its processes and other factors whether system B is going to be delivered, or shelved and the people participating blamed.

There are a few things you should keep in mind when replacing an older core company system.

1) Even during the reimplementation phase it's the system you're replacing that pays the bills - so be considerate.
2) Never stop working on the system you're replacing. The replacement system may never actually replace the original one, and you don't want to lose out.
3) Appreciate that the replacement system will lack all the features and complexities the original system has.
4) Avoid any big-bang releases. Shout when you hear "System B goes live on Saturday and we decommission system A the day after". That smells.
5) It usually helps to replace one aspect of the legacy system at a time, if only to prove the system can be replaced in its entirety.
6) Give users a quick demo every now and then as you're building the replacement system, to keep them in the loop and engaged. You're going to need the users' support to get you through the teething-problems phase.
7) Make sure there are sound reasons for replacing the legacy system and all other options have been exhausted. "Java is too verbose", "it doesn't support HTML5", "it's not an SPA" are hardly sound reasons.
8) Before starting to implement the new system, make sure it's the users who actually want it. Make sure it's them who push the IT team to write it.
9) Virtually all medium to large corporations have their own bespoke or heavily customized systems. Be very suspicious when the snake-oil sellers start to appear, offering an off-the-shelf solution that "needs to be tweaked a bit" and will do exactly what you need.
10) I'm sure there are thousands and thousands more things you should be aware of.