Tuesday, September 10, 2019

All tests should fail

So I've been staring at the screen trying to write something about SQL compatibility in migrations for a while, and it's just not flowing today. I had this topic in my backlog though, and it's something I feel strongly about...so here's a short and sweet one!

A lot of people are skeptical of TDD. I'm not one, but I get it. Writing tests first is hard and takes practice, and honestly we can't really prove yet that TDD as a process gets you better code. However, there is broad consensus that having automated tests is a good thing: it saves time over manual testing, and it documents features that might otherwise be forgotten. It seems to me, even, that it's become a bit embarrassing (at least on the open 'net) to not have some kind of test suite.

So let's throw out TDD. How do we write valuable automated tests?

Tests should fail

If you write a test and it doesn't have an assertion, or perhaps a valid end state for a UI test, it's not a valuable test. If it asserts something that will always be true, it's not going to help you. What I'm getting at is that you need to see the test fail.
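For instance, here's a toy sketch in Python (`apply_discount` is a made-up function, just for illustration) showing a test that can never fail next to one that can:

```python
def apply_discount(price, is_member):
    """Toy function under test (hypothetical, for illustration)."""
    return price * 0.9 if is_member else price

def test_vacuous():
    apply_discount(100, True)  # no assertion: this test cannot fail
    assert True                # a tautology: this cannot fail either

def test_meaningful():
    # This one can actually fail if the discount logic breaks.
    assert apply_discount(100, True) == 90.0
```

Both tests pass today, but only the second one is telling you anything.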

Write your test and then break your code. Seriously. If you're checking that a boolean flag changes the result of a calculation, go invert the boolean in code and see if it breaks. Change a div id in the UI and see what happens.
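Something like this (a hypothetical `total` function, sketched in Python):

```python
def total(price, tax_exempt):
    """Toy function under test: a boolean flag changes the result."""
    return price if tax_exempt else price * 1.25

def test_tax_exempt_flag():
    assert total(100, tax_exempt=True) == 100
    assert total(100, tax_exempt=False) == 125.0

# To prove this test can fail, temporarily invert the flag in the code:
#     return price if not tax_exempt else price * 1.25
# and re-run. If test_tax_exempt_flag still passes, it wasn't testing anything.
```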

Tests should fail for the right reason 

If your test fails, but it fails from a null reference when you were expecting a wrong answer, that's not a good test. Well...ok - it's better than not failing at all! But still, it's a broken test. Check that you're getting the error or incorrect result you expect - at the very least, it'll help you fix the error faster.
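One way to do that is to check the reason in the assertion itself. A sketch (again in Python, with a made-up `parse_age` function):

```python
def parse_age(value):
    """Hypothetical parser: rejects negative ages with ValueError."""
    age = int(value)  # bad input like None raises a *different* error here
    if age < 0:
        raise ValueError("age must be non-negative")
    return age

def test_negative_age_rejected():
    try:
        parse_age("-3")
    except ValueError as exc:
        # Check the *reason*, not just that something blew up:
        assert "non-negative" in str(exc)
    else:
        raise AssertionError("expected ValueError for negative age")
```

If `parse_age` were handed `None`, it would die with a `TypeError` inside `int()` - a failure, but for the wrong reason, and this test would surface that instead of silently counting it as "it raised something."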

Tests should fail, for the right reason, informatively

I never write a test without a failure message. I've also been to the moon.

Yep, we all neglect informative failure messages. Ye olde assertEquals(actual, expected, message); doesn't get a lot of use. Never fear - there are other less annoying ways to get informative tests! You can:

  • Name your tests after what they expect! This is especially easy in more BDD-style frameworks like Jest, where it('throws an error if no username is provided') says a lot.
  • Use fewer assertions in a given test. That makes it a bit easier to see what went wrong - not because the test won't tell you what broke, but because it's easier to comprehend what the test expects. It's a readability thing. Split up tests, if you have to.
  • Look for ways to get the code to tell you what's wrong without you typing messages in every single test. For example, in Java you might implement toString on an object you make lots of assertions over, so that you can look at the test output for more context. Or, you might pull in a list assertion helper that prints something more useful than "lists are not identical". Write your own assertion method that generates a message for you, and use that in multiple places.
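That last idea might look something like this (a Python sketch; the helper and the test data are invented for illustration):

```python
def assert_lists_equal(actual, expected):
    """Reusable assertion helper that generates an informative message."""
    if actual != expected:
        missing = [x for x in expected if x not in actual]
        extra = [x for x in actual if x not in expected]
        raise AssertionError(
            f"lists differ: missing={missing!r}, unexpected={extra!r}"
        )

def test_active_users():
    active = ["ada", "grace"]  # pretend this came from the code under test
    assert_lists_equal(active, ["ada", "grace"])
```

Write the message-building logic once, and every test that uses the helper gets a failure like `lists differ: missing=['linus'], unexpected=[]` for free, instead of a bare "lists are not identical."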

I'm not the first to write about this, and definitely not the best. Google around and you'll find many, many, many, many examples of how to write tests. But, I like talking about tests, and maybe this will help jumpstart a few ideas of your own for writing tests that make your software more valuable.
