SNUG Unit Testing Finale

SNUG Silicon Valley is all wrapped up for another year. I think my talk on Tuesday morning went pretty well. Finding the right angle for introducing agile hardware practices has been a real trick for me, and this week I felt I took a step forward. For technical practices, TDD is still my goal. For the hardware crowd, it seems my UVM-UTest talk focusing on unit testing could be the right path for getting there.

(If you weren’t at SNUG and you have a group that’d be interested in How UVM Makes The Case For Unit Testing, let me know at neil.johnson@agilesoc.com. I’m always happy to repeat past presentations in person or via webex!)

We had some good follow-up questions. From Cliff Cummings (who was nice enough to give me the heads-up that he'd ask some tough questions) there was the observation that unit testing goes against the grain; in an industry moving toward techniques like constrained random, it seems like a step backward. I've had others make similar observations in the past – asserted far more aggressively, btw, so thank-you Cliff for taking it easy on me – and my response has been that constrained random is a good technique that's being misapplied. Constrained random is great for finding the things we can't expect but a terribly ineffective technique for finding the issues we do expect. I see unit testing and constrained random as complementary: unit test first to clear out the issues you do expect, then transition to constrained random to find the stuff you can't.
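To make that complement concrete, here's a minimal SVUnit-style sketch. The `SVTEST`/`FAIL_UNLESS` macros and the build/setup/teardown structure come from SVUnit; the `alu` class, its `add` method and the corner cases are hypothetical, just stand-ins for the issues you *do* expect:

```systemverilog
`include "svunit_defines.svh"

module alu_unit_test;
  import svunit_pkg::svunit_testcase;

  string name = "alu_ut";
  svunit_testcase svunit_ut;

  alu uut;  // hypothetical unit under test

  function void build();
    svunit_ut = new(name);
    uut = new();
  endfunction

  task setup();
    svunit_ut.setup();
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN
    // directed tests for behaviour we already expect to be risky
    `SVTEST(add_zero_operands)
      `FAIL_UNLESS(uut.add(0, 0) == 0)
    `SVTEST_END

    `SVTEST(add_wraps_at_max)
      `FAIL_UNLESS(uut.add(8'hff, 1) == 8'h00)
    `SVTEST_END
  `SVUNIT_TESTS_END
endmodule
```

Once directed tests like these pass, the constrained random testbench is free to spend its cycles hunting for the interactions nobody predicted.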

There was also a question from Scott Evans related to the maintenance side of unit testing. How do unit tests for some parent class – say uvm_object – affect the development of derivatives? My response was that the unit tests for the parent are forward compatible with any derivative, provided of course a behaviour isn't changed or overridden. Put another way, you can rely on the existing unit tests to ensure you haven't inadvertently broken any inherited functionality in the derivative, but you'll need to write new unit tests for new features and/or any features that have changed.
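One way to picture that: the parent's unit tests keep pinning down inherited behaviour, so the derivative's test file only covers what's new or overridden. This sketch uses SVUnit's real macros, but the `packet`/`eth_packet` classes and their methods are hypothetical:

```systemverilog
`include "svunit_defines.svh"

// parent_packet_unit_test.sv (not shown) already locks down
// packet::pack() and packet::unpack(); those tests keep passing
// for any derivative that doesn't override them.

module eth_packet_unit_test;
  import svunit_pkg::svunit_testcase;

  string name = "eth_packet_ut";
  svunit_testcase svunit_ut;

  eth_packet uut;  // hypothetical derivative of packet

  function void build();
    svunit_ut = new(name);
    uut = new();
  endfunction

  task setup();
    svunit_ut.setup();
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN
    // new tests ONLY for behaviour added or changed in the derivative
    `SVTEST(crc_appended_on_pack)
      `FAIL_IF(uut.pack().size() < 4)  // hypothetical: derivative appends a CRC
    `SVTEST_END
  `SVUNIT_TESTS_END
endmodule
```

If the derivative later overrides `pack()`, the parent's `pack()` tests no longer speak for it; that's the point at which you write replacements alongside the tests for the new features.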

I had a couple of questions related to the cost and level of detail. It seems the low-level testing gives the impression of higher cost (i.e. a lot more effort required) relative to current practices (it was a long question which I interrupted with an "I object", then let the fellow carry on 🙂 ). This is another common observation, and I usually suggest people take a larger view of what's happening. On the surface, unit testing does look like more work, but it's only fair to measure the true impact over the long term. For me, the up-front investment in quality has more than offset the cost of debugging the poor quality code I'm used to seeing. Comparing the visible cost of writing unit tests against current practice, without counting the debug time current practice throws away, is an apples-to-oranges comparison you can't take at face value. Instead, think about how much time you lose to debug, then make the investment in unit testing and see for yourself.

We also had questions related to where unit testing is applicable. My fault here for not making it clear that it's a technique that can be applied in both design and verification. (I think the fact that I'm a verification engineer with examples of how I unit test testbench components ends up being a bit misleading. I should probably shake things up with a few RTL unit test examples.)
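For the record, an RTL unit test looks much the same as a testbench-class unit test; the test module just instantiates the design directly and drives its pins. The SVUnit macros are real; the 8-bit `counter` DUT, with an enable and an active-low reset, is a hypothetical example:

```systemverilog
`include "svunit_defines.svh"

module counter_unit_test;
  import svunit_pkg::svunit_testcase;

  string name = "counter_ut";
  svunit_testcase svunit_ut;

  logic       clk = 0;
  logic       rst_n, en;
  logic [7:0] count;

  // hypothetical synthesizable DUT instantiated directly
  counter dut (.clk(clk), .rst_n(rst_n), .en(en), .count(count));

  always #5 clk = ~clk;

  function void build();
    svunit_ut = new(name);
  endfunction

  task setup();
    svunit_ut.setup();
    en = 0;
    rst_n = 0;        // reset the DUT before every test
    @(negedge clk);
    rst_n = 1;
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN
    `SVTEST(count_holds_when_disabled)
      en = 0;
      repeat (3) @(negedge clk);
      `FAIL_UNLESS(count == 0)
    `SVTEST_END

    `SVTEST(count_increments_when_enabled)
      en = 1;
      repeat (4) @(negedge clk);
      `FAIL_UNLESS(count == 4)
    `SVTEST_END
  `SVUNIT_TESTS_END
endmodule
```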

Overall, I did sense some skepticism… as I always do… but I also sensed some deep thought, and that's really all I hope to inspire. The rest involves people deciding for themselves whether or not unit testing is a technique that helps them write high quality code in the same way it's helped me. Always encouraging is that a couple of people said they were up for downloading SVUnit and giving it a shot. I told people several times that if it takes more than a couple hours to get the hang of it, it's my failure and not theirs. I stand by that. If you have any problems getting started with SVUnit, I hope you'll take the opportunity to email me a stern beatdown.

To finish up, here’s a snapshot of my summary slide…

[summary slide image]

I posted a version of that on twitter a couple weeks ago and the response was good, so I went with it. I used it to suggest that all the exciting innovation that's happened in the last 10-15 years in functional verification can sometimes be taken too far. The techniques we use now are sophisticated enough that teams are sometimes tempted into turning a corner when it comes to complexity and productivity. As an example, a UVM-based constrained random testbench can become so feature rich (i.e. complex) that it actually makes a team less productive. This is why a technique like unit testing brings exciting new possibilities: it's straightforward, accessible to all, it'll always be productive and, most importantly, it addresses a class of defect (i.e. the simple stuff) not well served by constrained random.

That’s it… another great SNUG Silicon Valley.

-neil
