TDD Applied To Testbench Development

When we were writing about TDD back in November 2011 during our TDD month, admittedly I had very little experience with it. The goal of TDD month was to spread the word and drum up a little interest in a technique the software folks have been using successfully for years. I’d used it on a small scale but lacked the experience to back up a lot of what I was writing.

Over the last few months though, that’s changed. I’ve spent a good amount of time collecting feedback from and supporting SVUnit early adopters. That’s been good for getting a feel for how others in hardware development see TDD. Just as important, I’ve been using TDD myself to build testbenches and it’s been going well… very, very well. I’m at the point where I’m going through a pretty repeatable cycle that I think could be useful for others contemplating TDD. I’ve talked about this cycle before, but it’s worth revisiting now that I’ve been through it.

There it is: TDD applied to testbench development. If you’re a verification engineer, this is the cycle you’ll find yourself repeating as you build a testbench and verify a design. It’s equally applicable to any testbench component, but I found it particularly useful in building a reference model.

First, the yellow box on the right is where the testbench is built and tested. On a given day, depending on what features I’m working on, I expect to go through that loop several times. That means alternating between adding a new unit test and writing the corresponding reference model/testbench code several times a day. It took a little getting used to, but it was pretty easy to find a groove on the testbench side: add a test, watch it fail, add the feature, watch it pass, repeat.
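To make that loop concrete, here’s a minimal sketch of what one turn through it might look like with SVUnit. The reference model class `alu_ref_model` and its `add()` function are hypothetical names I’m using for illustration; the testcase boilerplate and the `SVTEST`/`FAIL_UNLESS` macros follow the SVUnit unit test template. Written first, the test fails until `add()` exists in the reference model; once it passes, I move on to the next feature.

```systemverilog
`include "svunit_defines.svh"

module alu_ref_model_unit_test;
  import svunit_pkg::svunit_testcase;

  string name = "alu_ref_model_ut";
  svunit_testcase svunit_ut;

  // unit under test: a hypothetical class-based reference model
  alu_ref_model model;

  function void build();
    svunit_ut = new(name);
  endfunction

  task setup();
    svunit_ut.setup();
    model = new();      // fresh model for every test
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN

    // Step 1: write the test first. It fails until the
    // reference model actually implements add().
    `SVTEST(add_two_operands)
      `FAIL_UNLESS(model.add(2, 3) == 5)
    `SVTEST_END

  `SVUNIT_TESTS_END

endmodule
```

Each new feature gets a new `SVTEST` block like the one above, which is part of why the per-test line counts stay small.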

When a new testbench feature is ready to go, I move over to the green box, write a test against the design and see what happens. On the design side, regardless of the result, I do a quick eyeball/logfile/waveform confirmation that the results are being reported correctly. From there, if the result is a fail, I do a little digging to see if I can figure out what’s gone wrong, then file a bug against the design. If the result is a pass, I go back to the yellow box and start the whole cycle again.

In terms of the amount of code I write, I find tests are usually between 10 and 50 lines, typically weighted toward the high end of that range. Testbench code falls in the same range, though typically toward the low end. Fifty lines of test may seem like a lot, but I found myself using a few different templates that helped me write the test code pretty efficiently. My ratio of test code to testbench code is about 2:1. I don’t know if that’s typical of people using TDD, but it feels reasonable to me.

Additional things to point out…

  • Verifying testbench components doesn’t take as long as I thought it would. I was expecting it to be more time intensive than it is.
  • Now that I’m using TDD to build testbenches, it seems completely ridiculous that I used to avoid verifying testbench code. (Repeat: completely ridiculous.)
  • By fixing bugs as I go along, it’s very easy to see myself eliminating days and weeks of debug effort down the road. The silly bugs that used to take me a few hours to isolate and fix now take me a few minutes.
  • I go through the loop in the yellow box several times a day which means I’m regressing my entire testbench several times a day. My code rarely breaks. If it does break, it’s not for more than a few minutes.
  • Regressing a testbench takes about as long as running a single RTL sim. Because SVUnit runs all the tests in the same executable, it can take more time to compile and load the tests than to run them. The turnaround time is very fast.
  • I do find myself refactoring code as I go along, as expected. It doesn’t take a lot of time, but it is something I need to expect every few days, as I come to functional milestones or as I see better ways to structure my code.
  • Admittedly, I should be showing test-first on the design side as well. I do that periodically, but not consistently enough to show it in the graphic. That’s something I can work on in the future.

Now that I’m into TDD, it’s a technique I can’t imagine not using. I’d expect others to feel the same once they get into it.

Bottom line now that I’m gaining experience with it: TDD of testbenches has helped me improve the quality of code I write. Significantly.

No question.

-neil

PS: The graphic I use here is part of a lunch-n-learn talk I have on using TDD and incremental development in functional verification. If you’re interested in seeing it, let me know at neil.johnson@agilesoc.com.
