All verification techniques can be effective given the right scope and level of abstraction. At least that's the argument I started in Portable Stimulus and Integrated Verification Flows with a graphic that plots the effectiveness of several techniques as a function of scope and abstraction. More people have agreed with the idea than disagreed, so I've carried on with it. I decided to dig a little deeper into exactly where and how well each technique applies.
For reference, I've pasted in the original map with effectiveness plotted for each of unit testing, directed testing, constrained random, portable stimulus and integrated HW/SW testing (a reminder that this is partly based on the forward-looking assumption that portable stimulus actually becomes a practical, mainstream verification technique).
I see scope as the determining factor when choosing the right technique. Given a particular scope, verification engineers can then choose the most effective technique and an appropriate level of abstraction. For example, verifying some low-level detail early in development is best done in a unit test or a directed test with interactions modeled at the wire or method level, while verifying entire feature sets closer to release could involve complex scenarios modeled with portable stimulus.
I like how this effectiveness map has evolved, but scope is still pretty theoretical the way I have it. A 'detail', for example, is an arbitrary name I chose for the lowest-level design scope; likewise for functions, features and the rest. Given all the room for interpretation, I figured breaking scope down into labels that correspond to specific design characteristics and intentions would add some clarity. For that I chose expressions, branches, transitions, interactions, communication, integration, interoperability and normal operation.
As you can see, I didn't stop there. I saw that more meaningful labels for scope would allow us to grade each technique at a finer granularity; I suggest a scale running from 'most effective' down to 'counterproductive'. With this, verification engineers can look specifically at what they're verifying (interactions between two design functions, for example) and choose the most effective technique (a constrained random test and/or a handful of directed tests).
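To make the idea concrete, here's a minimal Python sketch of how a graded scope-versus-technique map could be encoded and queried. The scope labels, techniques and grade names come straight from this post; the individual grades in the table are illustrative placeholders rather than my actual scores, and the recommend() helper is something I've invented for the example.

```python
# A sketch of the scope-vs-technique effectiveness map as a lookup table.
# Scope labels, techniques and grade names are from the post; the grades
# assigned below are illustrative placeholders, not the real scores.

GRADES = ["counterproductive", "ineffective", "usable",
          "effective", "most effective"]  # ordered worst to best

# EFFECTIVENESS[technique][scope] -> grade (placeholder values only)
EFFECTIVENESS = {
    "unit testing": {
        "expressions": "most effective", "branches": "most effective",
        "transitions": "effective", "interactions": "usable",
        "communication": "ineffective", "integration": "ineffective",
        "interoperability": "counterproductive",
        "normal operation": "counterproductive",
    },
    "constrained random": {
        "expressions": "ineffective", "branches": "usable",
        "transitions": "effective", "interactions": "most effective",
        "communication": "effective", "integration": "usable",
        "interoperability": "ineffective",
        "normal operation": "ineffective",
    },
    # ... directed testing, portable stimulus, integrated HW/SW testing ...
}

def recommend(scope: str) -> list[str]:
    """Return the technique(s) graded highest for the given scope."""
    best = max(GRADES.index(grades[scope]) for grades in EFFECTIVENESS.values())
    return [technique for technique, grades in EFFECTIVENESS.items()
            if GRADES.index(grades[scope]) == best]

print(recommend("interactions"))  # -> ['constrained random'] with these grades
print(recommend("expressions"))   # -> ['unit testing'] with these grades
```

One nice side effect of encoding the map this way: the transition points I mention below fall out mechanically, as any pair of adjacent scopes where recommend() changes its answer.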
This detailed scope breakdown should still be treated as an approximation; even though it's more descriptive, it's still subjective and context dependent. But I think linking scope to a set of concrete characteristics could be quite useful. It gives developers criteria for recognizing (a) 'sweet spots' where each technique is most effective; (b) transition points where teams should consider moving from one technique to the next; and (c) situations where a technique should be avoided.
To that last point, I think recognizing where techniques become ineffective is even more useful than knowing where they are effective. You can look at this map and easily deduce that only crazy people would count on unit tests for an entire system (even I wouldn't do that!) or on an integrated HW/SW platform for pin-level interaction.
It also makes it easier to assert that no one technique covers the entire verification spectrum. I'm thinking of constrained random here; it's been the default choice in our industry for a while now. Constrained random may have a sweet spot around transaction-level interactions, but its effectiveness tapers off quickly as scope widens to the system (inefficient) or narrows to the details (haphazard and unpredictable).
With that, I'll leave it to you to decide whether or not I've scored each technique appropriately. I'm off to think more about how techniques can be used to complement each other as part of a start-to-finish verification flow.
PS: I've deliberately left formal out because I personally don't know enough about it to place it. I suspect it sits near unit testing and directed testing, though I'll leave it to any formal experts out there who'd like to offer a more informed opinion.