If you’re unfamiliar with TDD, the first and most obvious thing you’ll notice is that TDD breaks a boundary that many hardware teams – ASIC teams in particular – consider absolutely unbreakable.
With TDD, the person who writes the code tests the code.
Before you rattle off all the reasons why it doesn’t make sense for people to test their own code, let me give you some background and attempt to explain where we’re going, both in this post and with our wild scheme to introduce TDD to hardware engineers!
I’ve known about TDD for a few years now. My AgileSoC.com counterpart Bryan Morris presented a SNUG paper in 2009 on TDD and the SVUnit framework he and his co-author Rob Saxe put together. I’ve done an Eclipse/Java/TDD tutorial that I found useful. I’ve also read a little about it and I know it’s very widely used in the software development community. The signs were all there but none of the above were convincing enough to get me to use TDD.
In August, at the Agile2011 conference in Salt Lake City, that changed.
On day 2 of the conference I sat through a tutorial delivered by TDD expert James Grenning. Seeing an example of how he practices TDD and being able to ask questions jarred a few ideas loose and opened my mind. For the rest of the conference, I wandered around – aimlessly at times – sketching a mental picture of what hardware development could look like with TDD and scrounging for ideas from others (namely Elisabeth Hendrickson who got me thinking critically about the similarities between software and hardware testing).
It’s time to start putting that mental picture to paper.
I’ll start with the disclaimer that just because TDD works in software development doesn’t automatically mean it’ll work in hardware development. That said, it’s hard to ignore that the motivation behind using TDD would be very similar between software and front-end hardware development. So let’s look at why it works in software, and see if we’re reasonable in hoping for the same benefits when we apply it to hardware development.
The simplest benefit of TDD is that designers immediately validate correctness through very short test-and-code feedback loops. In the tutorial James put on, a feedback loop could be as short as a minute or less, and it goes a little like this…
- Write some test code, run it and watch it fail (because the code it’s testing doesn’t exist yet).
- Write just enough design code, run the test again and watch it pass.
- Repeat steps 1 and 2 until done.
Not quite believing test-and-code cycles could be as short as a few minutes – which turns out to be just a few lines of test and design code – I asked James if the exercise resembled what he’d actually do in practice. His answer was a pretty firm yes. It was close to what he does in practice and other TDD practitioners in the room agreed. From that, I gathered that one part of TDD is writing and validating code in baby steps… very small baby steps. Instead of writing code for a few days/weeks/months before testing it, you write only a few lines and test it right away. Obviously, the opportunities for bugs to creep into the code base decrease substantially since TDD effectively helps you find and fix them immediately. Bug prevention is a major benefit, but it’s also just the low hanging fruit.
Better than just validating correctness, TDD is a design technique that helps people think in terms of how a design will actually be used. As you’re thinking about how <feature A> is going to be used, you capture usage scenarios as tests first, then write just enough design code to pass the tests. The change in perspective should translate into more robust design code. It should also help people avoid over-engineering a design, especially with a specific focus on writing just enough design code. Just enough gives you what you need; no more than that. Less complexity. Less code to debug. Less wasted time and effort.
If TDD is completely new to you and you find yourself asking:
“Do you really write the tests first?” … “Does just enough really mean just enough and no more?”
…this comment snippet from James’s tutorial should answer that:
 * With the previous tests passing you should have at most a counter
 * that knows how many integers have been added to the buffer, with
 * hard coded return value for Get()
 *
 * If you have more, delete it now! It is not tested code, you
 * are supposed to be doing TDD!
That means you don’t write design code until there is a test ready to verify it. It doesn’t get clearer than that; TDD really does mean test-driven.
The tangible results of TDD, in case I haven’t made it obvious, are a complete design unit AND a suite of unit tests. The tests are automated and run regularly, just as most of us already run regular regressions. The granularity of these unit level tests is finer than what is normally done in hardware development (they would apply to the module/class level as opposed to the block/chip level most teams identify and test). The tests aren’t temporary; they live on with the design. If the design is ever changed, the tests are there to ensure nothing breaks.
As far as I’m concerned, the motivation behind using TDD and the reasons for its success in software development can be applied directly in hardware development. Admittedly, it’s taken me a while to come around, but I think I’m finally onboard. The challenge from here will be to mangle the design and test roles so we can find the right fit. That could be tough, but Bryan and I have some ideas that we’ll be posting through the rest of the month to get the discussion started.
To summarize, is it really accurate to say that with TDD, the person writing the code tests the code? If you view TDD strictly as more tests, then yes, that’s all it is (and yes, it does break the “designers shall not test their own code” commandment that most of us have lived by for a long time). But if you view TDD as we do, you see it bringing a new design technique to hardware development (which just so happens to capture design intent as a list of tests). There’s a big difference between those two views.
There is value in adding designer written tests… but there’s more value in an improved design process.
That’s all for this post. Stay with us through the rest of November for more TDD on AgileSoC.com!
Q. Are the benefits of TDD similar in software and hardware development? Or is TDD a design practice that doesn’t translate?