Welcome back for TDD month round 2! In our first post, Test Your Own Code!, we laid out some of the motivation for using TDD along with some general discussion of the mechanics and benefits. If you read Test Your Own Code!, hopefully you’ve put some thought into where it actually fits into the hardware development process, because that’s the topic we’re tackling today. I don’t think it’s trivial, but we can definitely squeeze TDD in. Here are my thoughts.
NOTE: I don’t mean to devalue or ignore the potential of TDD as a design technique in hardware development (for software teams, that’s where much of its value lies), but we’re not going to talk much about that role here. I’m going to focus on the resulting unit tests and the effect they have on the verification paradigm we have now.
The first mental hurdle I had to jump when it comes to fitting TDD into hardware development has to do with scope, because I’ve never been clear on how far software developers go with TDD. Does it apply to a single module/class and no further, or would it apply to a subsystem (i.e. block level)? How about at chip level?
I think I now have the issue of scope figured out. As obvious as it feels now, it wasn’t always obvious because I was stuck thinking in terms of the block and top level testing paradigm that most of us are used to. TDD is unit testing which applies to the module/class/entity level. The tests that fall out of TDD, therefore, would correspond to a finer design granularity and consequently would be an addition to what most hardware teams do currently.
Test suites we have now include block and top level tests. TDD means the addition of unit level tests with a granularity that is finer than block level tests.
The other questions I had apply to rigor. Is TDD used for exhaustive testing? Or does it take you <so far> – however far that is – after which a test expert carries on with the exhaustive testing? I think I have that figured out as well, though there seems to be a lot more grey area when it comes to rigor.
From side discussions I had at Agile2011, I got the impression that software testing 10-15 years ago looked a lot like what we do today; designers wrote the design and testers did the testing. Fast forward to present day… teams using TDD share responsibility for the entire test effort between design and test experts. But how is it split? To build my picture of how responsibility is split in a hardware world that incorporates TDD, it’s helped me to think about testing as an accumulation of 3 different types of tests.
- First Order Tests: first order tests aim at the lowest level of detail. To me, these tests are the bare minimum needed to properly claim you’ve done something useful. A first order test would clear the compile/simulate hurdle and then verify some very (very) basic design function.
- Example: put one element into a FIFO, get one element out of the FIFO.
- Second Order Tests: second order tests go beyond the basics but still focus on the known. They would verify all the features that the design is specifically intended to handle.
- Example: fill a FIFO and verify the full status is asserted; then empty the FIFO – checking the validity of each element as you go – and verify the empty status is asserted at the end.
- Third Order Tests: third order tests are designed to cover the unknown. Third order tests could come about by asking “we’ve done <this> but I wonder what happens if you do <that>?” Is there a lock-up or some sort of data loss? Does the design get confused at all, or does it handle the scenario without a hitch?
- Example: simulating the behavior of some peripheral, alternate between a random number of puts and gets for an extended period of time around the empty and full thresholds to verify there are no memory leaks.
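The three orders map naturally onto ordinary unit tests. Here’s a minimal sketch in Python’s unittest, using a toy software FIFO model as a stand-in for the RTL under test – the `Fifo` class, its `DEPTH`, and the specific checks are all illustrative assumptions, not a real verification framework:

```python
import random
import unittest


class Fifo:
    """Toy bounded FIFO standing in for the design under test."""

    def __init__(self, depth):
        self.depth = depth
        self._q = []

    def put(self, item):
        if self.full:
            raise OverflowError("put on full FIFO")
        self._q.append(item)

    def get(self):
        if self.empty:
            raise IndexError("get on empty FIFO")
        return self._q.pop(0)

    @property
    def empty(self):
        return len(self._q) == 0

    @property
    def full(self):
        return len(self._q) == self.depth


DEPTH = 8  # hypothetical design parameter


class FifoTests(unittest.TestCase):
    def setUp(self):
        self.fifo = Fifo(DEPTH)

    def test_first_order_put_get(self):
        # First order: put one element in, get one element out.
        self.fifo.put(0xA5)
        self.assertEqual(self.fifo.get(), 0xA5)

    def test_second_order_full_then_empty(self):
        # Second order: fill to full, then drain to empty,
        # checking each element and the status flags.
        for i in range(DEPTH):
            self.fifo.put(i)
        self.assertTrue(self.fifo.full)
        for i in range(DEPTH):
            self.assertEqual(self.fifo.get(), i)
        self.assertTrue(self.fifo.empty)

    def test_third_order_random_puts_and_gets(self):
        # Third order: random puts/gets around the empty/full
        # thresholds for an extended run, watching for lock-up
        # or data loss (ordering is the check here).
        random.seed(0)
        expected = []
        for _ in range(10_000):
            if not self.fifo.full and (self.fifo.empty or random.random() < 0.5):
                item = random.randrange(256)
                self.fifo.put(item)
                expected.append(item)
            else:
                self.assertEqual(self.fifo.get(), expected.pop(0))
```

Run with `python -m unittest`. In practice the first two tests are exactly what a designer would write while doing TDD, while the third starts to blur into what a verification engineer might own.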
The first order testing goes to designers. Not only will they use TDD to build code that is simpler and more robust – remember, TDD is a design technique first! – they’ll have tests that find and eliminate all the simple mistakes (i.e. the first order bugs).
The third order tests go to dedicated verification engineers. Because third order tests tend to model real-world interactions, they are better done in top level testing. If the efficiency of top level testing is an issue – due to long sim times or lack of control and/or visibility – targeted block level testing becomes the alternative.
Drawing the line through the second order testing is the tough part. I think designers take responsibility for most of these. When a designer is done writing a module with TDD, there should be high confidence that it’s robust in isolation. I think verification engineers could take some responsibility as well, though, by complementing the unit tests with block level tests as required. The block level tests would look similar to what we build now and would be written for design features complicated enough to require more effort and a second perspective.
Neil’s point: With the added emphasis on unit testing, my feeling is that block level test suites of the future will be just a subset of what we build now, since we’ll already have verified some of what we currently do at block level.
Bryan’s counter point: I’m not so sure… I think we’ll still need the more complicated, involved block level testing. It’s just that we won’t be spending so much time debugging the little 1st order stuff (on both the design and verification sides if both are doing TDD), so we can invest more time/energy in developing better 2nd and some 3rd order tests. I’m not convinced it will be a subset. In fact, it may be a superset of what we do now because we’ll have more time to develop more complicated tests that explore the dark corners of the design.
Regardless of where the line is drawn in terms of responsibility for second order testing, collaboration and cooperation between design and verification engineers will be key to ensuring testing needs are met.
To summarize our new responsibilities, let’s start with some general rules in terms of <who> is responsible for <what> when it comes to testing in a new verification paradigm that includes TDD. Here are three that I think could be helpful:
- Designers use TDD to build simpler, more robust code with tests that verify all 1st order design functionality;
- Verifiers create top level/integration tests – or block level tests where efficiency becomes an issue – to verify all third order design functionality; and
- Designers and verifiers work together to see that 2nd order design functionality is verified through a combination of TDD, block level and top level tests.
Let’s apply those rules, see where they lead and adjust them as we learn. We’ll look at one last general aspect of how everything fits together in the next post.
Q. How do you see the roles of design and verification change with the introduction of TDD?