Unit Testing And TDD: Too Different To Compare

Here’s another post inspired by James Grenning and his “TDD with C” tutorial. Going through an abbreviated version of that tutorial last year in Salt Lake City at Agile2011 was what got me started with TDD. It inspired TDD month on www.AgileSoC.com last November. It encouraged me to get knee deep into SVUnit and TDD of verification IP. It also, in part, motivated my Agile2012 submission: TDD and A New Paradigm For Hardware Verification.

My second time through it was at ESC in San Jose this past March. This time around, I found myself fascinated by some skepticism in the audience when a fellow piped up with the comment “I see the value of unit testing but it’s hard for me to see the extra value of TDD”.

It was easy to see it wasn’t the first time James had seen this kind of skepticism during one of his talks. His response was “My goal is to show the value of TDD… but if all I can do is convince you there’s value in unit testing and that’s as far as you get, that’s great, too. That’s a start.”

I’m guessing James gets grilled periodically about the differences between unit testing and TDD. Since ESC in March – it’s been about 2 months now – I’ve been thinking about how I’d answer the question that fellow posed to him: what’s the difference between unit testing and TDD?

I think I finally have a reasonable response.

Let’s pretend you’re starting with a team that has absolutely terrible test habits. Over time, however, you and your new team steadily transform yourselves into a human-powered machine that consistently cranks out high-quality code. During the transformation, we should see that the differences between unit testing and TDD are greater than the similarities, and that it’s more than unit testing or TDD alone that determines product quality.

It’s a roundabout way to get to the point but trust me, I think we do get there :).


Give your customer untested code: It’s not much of a start but it’s a start nonetheless. You don’t bother testing it – no TDD, no unit tests, not even integration tests. Just ship the code and let your customer find the bugs.

Do we need to go through the reasons why that won’t work? Probably not.

Let your QA dept do some integration testing: The good news is that your QA dept is going to do some integration testing this time before code goes to your customer. The bad news is that QA is bogged down with a ton of buggy code already and they’re not going to get to your buggy code for another 4 months. That means by the time they come to you with questions and bugs, you’ll be working on a new release and will have forgotten about most of the code you wrote.

Bug reports come in steadily as you’re reacquainting yourself with your own code. You’re also stuck multitasking between the code you’ve forgotten about and your current obligations. Time passes and you eventually work through all the issues found by QA and release the code several months later than planned (though your customer will soon point out that the quality is depressingly poor). Now you have an angry customer, you’re late and behind the 8 ball on the new release, and the time QA has spent with your code means they’ll be even later getting to the next batch of buggy code.

Add some best-effort ad-hoc testing: If the code you dumped on QA wasn’t so bad, maybe it wouldn’t take them so long to release it? Adding a few dedicated testers earlier to verify a few low-level details should fix that up.

It does… kind of… but not as much as you hoped because you’re partnered with people you’ve never worked with, they live in a different time zone and there are no real testing goals. On the upside, they’re seeing your code 3 months after you wrote it instead of 5 (you guessed it… they were also buried under a previous project) so at least the code is fresher in your head, though just barely. It’s still going to take you a while to reacquaint yourself and the bug reports will still be rolling in and disruptive to your current priorities. You end up marginally better on time and quality but still woefully inadequate.

Add unit testing: Your customers are angry because your product quality is just plain bad and you’re later than late. They’re giving you another chance though. It’s time to add some low-level acceptance testing and this time, it’s going to start much earlier, as the design is being coded up. Surely unit tests will help…

Not quite, though they might have, had you and your testers not gone off to your respective cubicles (and countries) for 5 months of coding in isolation. With code and tests “done”, you assumed smooth sailing as you put them together. Instead, reality struck and everything exploded. A long list of bad assumptions and months of isolation has left you with just as much buggy code as you’ve ever had previously. Months of time wasted on test debug and rewrites leave you hardly any further ahead.
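
A quick aside for anyone who hasn’t seen one: by “unit test” I mean a small, automated check of one low-level behavior that fails the moment that behavior breaks. Here’s a minimal sketch in C (the language of James’s tutorial); the clamp() function is made up purely for illustration.

    #include <assert.h>

    /* Hypothetical unit under test: clamp a value to the range [lo, hi]. */
    static int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    /* A unit test exercises one low-level behavior in isolation. */
    int main(void)
    {
        assert(clamp(5, 0, 10) == 5);   /* in range: unchanged  */
        assert(clamp(-3, 0, 10) == 0);  /* below range: clamped */
        assert(clamp(42, 0, 10) == 10); /* above range: clamped */
        return 0;
    }

Nothing fancy. And as this step shows, the trouble isn’t what the tests look like; it’s when they run.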

Add communication: Surely you’re on the right track with the unit tests and it’s just a matter of coordination and communication that’s bungling things up. More meetings to keep things on track, that’s what you need; and status reports, too, status reports will definitely help.

Or maybe not. You’ve added more meetings so people are talking at least. And the status reports make everything look better… right up until you put the code and tests together. You find similarly bad assumptions and everything bursts into flames, again, just like it did before. Nothing works. Little improves and the meetings and status reports make you feel like you’re wasting your time.

Add better communication: You were probably right to think communication was the problem, just wrong to think more meetings were the answer. Time to admit the faceless remote arrangement you had wasn’t working. Finding the right time to talk was always hard and seeing each other twice a year for a week at a time wasn’t enough.

The testers are in the office now. They may be on a different floor, but they’re in the same time zone, keep the same hours and eat in the same lunch room. You’re not talking strictly by email anymore, you’re not using webex for meetings and you’re not waiting overnight for answers; you’re having face-to-face conversations regularly, meetings are in the conference room and you’re illustrating conversations with help from a whiteboard. Quality is improving and the unit tests you’re writing seem to finally be helping the situation. You’re still late, but release frequency is improving and you’re starting to gain credibility with your customer.

Add goals: It’s good to be goal oriented, right? So you get serious and add a few goals. Your testers are going to make a list of all the unit tests they want to write. They’re going to write 5 tests/week for 5 months. When they get to the 5th month and they’ve got their 100 unit tests, they know they’ll be done. Similarly, you’ve set targets for the code and when you hit those milestones, you know you’ll be done. Right?

What the heck is going on? You’ve got all your unit tests and you finished them on time. You have the code finished (a week early even) but you’re still having no luck when you put code and tests together. You’re taking testing seriously, you meet regularly, you’re goal oriented and you’re still late.

Nothing is working. You put your heads together and collectively realize that having goals is not enough because the goals have nothing to do with each other. When you start running tests there’s just so much code; you’ve waited too long to test it and you’re finding out way too late that none of it is any good. It seems all this time the progress and improvements you thought you were making were an illusion. You’re frustrated.

Share goals: Just as the situation seems hopeless, the light goes on: share goals and run your tests sooner! That’s what you’ll try next! Instead of writing code for 5 months at a time, you’re going to write code for 1 month at a time. And the unit tests that are written during that month are going to apply to that month’s worth of code so you can actually test and fix it before you forget about it and move on.

Much less time wasted debugging and you’re almost delivering on time. Your customer is actually happy now.

Real progress. Finally.

Add collaboration: You’ve already figured out email isn’t the best way to talk and walking down the hall every time you have a question is getting annoying. Everyone knows each other and it’s starting to seem silly that you’re sitting in different areas of the building. You need to be closer, so your team goes to your boss and suggests a move is in order.

Bingo. Now when you have a failing unit test, testers swivel around, roll their chairs over and show you the problem. You’re running unit tests once a day and working through problems as they come up instead of saving them to the end of the month.

Periodically you think back to the days when it took you several months to run your first unit test and shake your head. It seemed like a logical way to do things but reality crushed you every time. You’re at the point now where you’re running your first unit test after only a few days and everyone is working together to make it happen.

Add responsibility: After coming all this way, the last key to the puzzle suddenly becomes obvious. Everyone is working together to diagnose and fix bugs much faster than you ever have before and the effect on productivity, morale and customer happiness has been immense. The improvements have been immeasurable and yet it’s taken until now to discover the final big piece of the puzzle.

Needing someone else to tell you your code is right or wrong is avoiding a responsibility that you should have embraced years ago. You’re tired of people feeling like it’s their responsibility to fix your code and you’re ready to cut out the middle man; you’re going to make sure your code is written right in the first place and you’re going to use TDD to do that.

Soon after starting with TDD, bugs are no longer an expectation because they’re not even allowed to exist. You’re regressing your own unit tests several times a day and running acceptance tests written by the testers every night. You’re testing small changes to the code immediately and all the wasted debug effort is effectively gone (in fact, you haven’t even logged into the bug tracking system for over a month).
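
For anyone who hasn’t seen the rhythm, here’s a minimal sketch of a single TDD cycle in C. I’m assuming the open-source Unity test framework here (a common harness for TDD in C; any unit test framework would do) and checked_divide() is hypothetical, invented just to show the test-first order.

    #include "unity.h" /* assumption: the open-source Unity test framework */

    /* Step 1 (red): the tests below were written first and failed,
     * because checked_divide() didn't exist yet.
     * Step 2 (green): write just enough code to make them pass.
     * Step 3 (refactor): clean up, with the tests as a safety net. */
    static int checked_divide(int num, int den, int *quotient)
    {
        if (den == 0)
            return -1;          /* error: division by zero */
        *quotient = num / den;
        return 0;               /* success */
    }

    void setUp(void) {}    /* Unity requires these hooks, */
    void tearDown(void) {} /* even when they're empty     */

    static void test_divide_returns_quotient(void)
    {
        int q = 0;
        TEST_ASSERT_EQUAL_INT(0, checked_divide(10, 2, &q));
        TEST_ASSERT_EQUAL_INT(5, q);
    }

    static void test_divide_by_zero_is_an_error(void)
    {
        int q = 0;
        TEST_ASSERT_EQUAL_INT(-1, checked_divide(1, 0, &q));
    }

    int main(void)
    {
        UNITY_BEGIN();
        RUN_TEST(test_divide_returns_quotient);
        RUN_TEST(test_divide_by_zero_is_an_error);
        return UNITY_END();
    }

The framework is beside the point; the order is the point. The failing tests exist before the code does, which is how days of debug turn into minutes.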

Now you think back to the time it took a few days to verify your code and shake your head. With TDD, it only takes a few minutes!


I did say “roundabout way to get to the point”, didn’t I?

A couple things to highlight that get back to my original question: what’s the difference between writing unit tests and doing TDD?

First, with respect to time, you’ll see that the time between when code is written and when it’s tested steadily decreases as our fictional team improves its habits. Waiting for your customer to find a bug starts the longest possible feedback loop. Adding goals for unit tests was an improvement but bugs were still allowed to live for several months. It’s not until a team shares goals and adds collaboration that those months turn into days. With the added sense of responsibility, TDD turns days into just minutes.

Less time between when a bug is added and when it’s fixed is also important because it means less time for your mind to wander on to other things. Waiting for your customer to find a bug gives you enough time to think about any number of different tasks. With TDD, though, there is no time to wander because everything is done in the moment. There’s no context switching; code is fresh in your head and there’s no time to forget what you’ve done.

Finally, there’s responsibility. It takes a while but the developers on our fictional team finally realize the bugs they inject into their code are a burden that they are imposing on their team and their customers. Unit testing doesn’t address that burden. Having somebody else write unit tests to validate your code might seem a decent response to quality problems but it’s also just passing the buck.

To wrap up, I hope the trials and tribulations of this fictional team help people understand that unit testing and TDD are not as comparable as they might seem. With TDD you see issues immediately, before they become someone else’s problem. With unit testing, that’s not the case. Depending on how well a team functions, it can take hours, days, weeks, even months to get feedback from a unit test.

That’s the kind of difference you can’t necessarily see if you’re just looking at the results (i.e. code and unit tests)… which is where, I’m assuming, the skeptics sitting through James’s TDD tutorial get a little stuck.

-neil

7 thoughts on “Unit Testing And TDD: Too Different To Compare”

  1. Excellent post.

    Over the years I’ve gone through this exact learning process myself on a personal basis. In practice I find most people are still somewhere at the 4th or 5th step. Often giving up in frustration and coming to the conclusion that TDD just isn’t worth the effort because the results aren’t there. That is if you consider unit tests to be the same as TDD…

    The trouble is that it did take effort to really try TDD. I had to accept things are difficult because I was refusing to believe the red-green-refactor steps for TDD are needed. Surely I’m a smart individual who can understand a little bit of code without having to document it in detail.

    I’ve been wanting to write a blog about my TDD learning process for a while as well. Coincidentally also inspired by Grenning’s TDD wisdom. Though it was hard to put into words.

    Thank you for an insightful explanation of just how hard it is to see all the baggage you take on by not practicing TDD.

    1. sebastian, thanks for the comment. I think baggage is the right word (TDD does seem to be a great way to cut a lot of baggage loose). And I think you’re probably right that steps 4 and 5 are pretty common. In hardware development, I’d say “add goals” at step 7 is about as far as most get. Too bad it’s so hard to see that they’re not usually the right goals :(!

      thanks again!

      -neil

  2. Nice post, Neil.

    I think it’s worth pointing out that ‘classic’ TDD doesn’t generally address integration issues. I may be playing the pedantic card here, but that terrain is more usually described as ATDD or automated integration testing.

    1. good point… though indirectly, higher quality code makes the integration effort run much more smoothly :)!

      -neil

  3. Excellent post. Unfortunately, many developers think they must do TDD and design for testability and if not they shouldn’t get near unit testing. However, unit testing, whether new or legacy code, is still an option even without test-first design, especially with unit testing frameworks like Typemock.

  4. Great post.

    As I’ve started practicing TDD I’ve found I have much more confidence in my code and fewer bugs. It also makes you think more about the API and design of your code.
    Concerning unit testing itself, I still do it for legacy code, whenever there is the smallest possibility to write the tests. Even if not written in a test-first manner, I think it still helps a lot by inspiring confidence, and it allows you to make changes knowing when you break some existing functionality.

    Cheers
