Expect New Tests To Pass

I generally consider myself too practical (read: skeptical) to think that one small change in mindset is enough to motivate a major change in performance. But this is one that I think deserves attention because the potential payoff could be significant.

I think we should expect new tests to pass.

First off, I’ll clarify that a ‘test’ is an acceptance test from the traditional design/verification split: a design engineer writes some RTL, a verification engineer writes the testbench and test, then somebody runs that test for a result. A ‘new test’ is one that has never been run before. To ‘pass’ means the feature that the new test is targeting works as intended.

With that, I’ll reiterate: I think we should expect new tests to pass.

Obviously, the first criterion for producing a passing test is that the feature being targeted is actually implemented in the design. Likewise, the testbench infrastructure required to target that feature has to be implemented. I say ‘obviously’ here because you ‘obviously’ can’t have a passing test without both of these things. But in situations where design and verification work in a bubble relative to each other – a disconnect that is all too common in hardware development – it may not be so obvious. Knowing what the guy/gal in the cube across the hall has done can be a grey area at times, even dark grey… bordering on charcoal.

My test failed. I wonder if <feature> is even implemented yet?

The second criterion pertains to quality. How often do you write code expecting it to work on the first shot? If we averaged the response to that question over the entire hardware industry, I’m guessing we’d get an answer somewhere south of never! None of us expects to integrate our code and have it work first shot. Not surprising since, overwhelmingly, hardware developers write code open-loop with few or no quality checkpoints. Best case is design engineers running new RTL through a basic smoke test. It’s worse on the verification side, where engineers rarely do more than confirm testbench code compiles. Our coding practices (or lack thereof) all but guarantee new design and testbench features are integrated with major functional defects. The only credible expectation is that new tests will fail, so we shrug it off. And they can fail because the design is flawed, the testbench is flawed, or, more than likely, both are flawed. It’s not until code matures that we start hoping for anything more.

My test failed. I wonder if the design is broken? Or maybe it’s the testbench?

In short, the way we work means new tests are little more than an opportunity to exercise broken code. I find that less than inspiring, hence…

I think we should expect new tests to pass.

Expecting new tests to pass is a complete 180 relative to where we are now and the basis for a more productive approach to design and verification. New tests should be treated as an opportunity to confirm a feature works as intended, not simply as a way to kickstart the debug cycle.

To get there, communication is key. A passing test starts in an environment where regular interaction between design and verification is the norm. We burst the bubbles and bring design and verification engineers together (I mean physically bring them together, like sitting together, or at least within earshot). They talk frequently, with complete transparency into what’s done and matching priorities for what isn’t done.

Next, expecting tests to pass creates a strong sense of responsibility when it comes to code quality. That responsibility will lead to more productive coding practices. Some, like TDD and pair programming, we can borrow from software; others I’m sure we’ll create on our own. Whatever we use must be directly linked to producing defect-free design and testbench code, because we’ll all want our code to work, first shot.
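
To make the TDD borrowing concrete, here’s a minimal sketch in SystemVerilog. The idea: write a failing unit test for a piece of testbench code first, then implement until it passes, so the testbench earns its own quality checkpoint before it ever meets the RTL. The package, module, and calc_parity() helper are hypothetical names invented for illustration, not anything from a real library.

```systemverilog
// Purely illustrative TDD sketch; tb_utils_pkg and calc_parity() are
// hypothetical names, not from any real codebase.

package tb_utils_pkg;
  // Written second ("green"): implemented only after the test below
  // existed and failed.
  function automatic bit calc_parity (logic [7:0] data);
    return ^data;  // even parity: XOR-reduce the byte
  endfunction
endpackage

module calc_parity_unit_test;
  import tb_utils_pkg::*;

  // Written first ("red"): these checks fail until calc_parity() works,
  // giving testbench code its own pass/fail gate before check-in.
  initial begin
    if (calc_parity(8'h00) !== 1'b0) $error("8'h00: expected parity 0");
    if (calc_parity(8'h01) !== 1'b1) $error("8'h01: expected parity 1");
    if (calc_parity(8'hFF) !== 1'b0) $error("8'hFF: expected parity 0");
    $display("calc_parity unit test done");
  end
endmodule
```

The specific function doesn’t matter; what matters is that testbench code gets the same red/green treatment we’d give any production software, which is exactly what lets us expect the product test to pass when we finally run it.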

For smart people, we’ve set the bar incredibly low when it comes to new design and testbench code. I think it’s worth our time to aim for something a little better. Maybe a simple change in mindset is all it’ll take…?

-neil

2 thoughts on “Expect New Tests To Pass”

  1. I’m a little confused by bringing up TDD as a way to make sure new tests pass, since, as I understand it, TDD wants you to write failing tests before you even begin any feature implementation.

    1. Ha. Now that you mention it, I can see how this would be a bit confusing. Here I’m talking about our normal RTL (product) tests. When we run them, we should be confident that we’ve got everything in place and everything correct, to the point that we expect the RTL and testbench to support what’s required by the test.
