Does Constrained Random Verification Really Work?

With TDD month on AgileSoC.com just a week away, I thought an appropriate way to get people thinking in a different direction – yes… test-driven development would take us in a different direction – would be to dig up a functional verification article I co-wrote a couple of years ago. A good part of the article focuses on legacy process, and it opens by taking a few shots at constrained random verification.

Constrained random verification is pretty mainstream in ASIC and FPGA verification these days, though it does mean different things to different teams. The argument for constrained random verification has always been that it’s a more productive way to cover the state space in increasingly large and complex designs. I used to believe that wholeheartedly. But after seeing and hearing about it fall short – from an efficiency point of view – many times, my current impression of constrained random verification is that it just doesn’t work as well as we all want it to.

In Agile Transformation In Functional Verification – Part I, as you’ll read below, I claim that the way we approach constrained random verification results in a process that is fundamentally flawed. While strictly in terms of mechanics it may get us around the state space more efficiently, we’re still pushed to understand the entire state space up front, and we end up building test benches in giant steps (as opposed to baby steps, which we’ll talk more about in November). Impossible – I say – and getting impossible’er with every new generation of product that teams develop. Too big; too complicated.

For more, here’s an excerpt from the aforementioned article that takes aim at big up front design and constrained random verification:

Big up front design (BUFD) is common in IC development. In BUFD, a team attempts to build detailed plans, processes and documentation before starting development. Features and functional requirements are documented in detail; architectural decisions are made; functional partitioning and implementation details are analyzed; test plans are written; functional verification environments and coverage models are designed.

While BUFD represents a dominant legacy process in IC development, the wide adoption of constrained random verification with functional coverage represents a relatively recent and significant shift. As has been documented countless times, constrained random verification better addresses the exploding state space in current designs. Relative to directed testing, teams are able to verify a larger state space with comparable team sizes and resources.

But is constrained random verification with functional coverage living up to its potential? Constrained random verification employed with BUFD contains one significant flaw. From a product point of view, a design of even moderate complexity is nearly incomprehensible to a single person or even a team of people; the combination of architecture and implementation details is just too overwhelming. While the technique and associated tools may be adequate for addressing all these details, the human brain is not!

The flaws of constrained random verification and the limitations of the human brain are easy to spot during crunch time, near project end, when the verification team is fully engaged in its quest toward 100% coverage. Because of the random nature of the stimulus, it is very difficult for verification engineers to predict progress in the test writing phase of a project. Not all coverage points are created equal, so the path toward 100% coverage is highly non-linear in terms of time required per coverage item.

To account for unforeseen limitations in the environment, verification engineers commonly rework portions of the environment – or write unexpected and complicated tests – to remove those limitations. This is particularly relevant as the focus of testing moves beyond the low-hanging fruit and toward more remote areas of the state space.

Reworking the environment is rarely accounted for in the project schedule and can cause many small but significant schedule slips. Ever wonder why tasks sit at 90% complete for so long? It is because those tasks are absorbing work that was never planned for in the first place.

What is truly amazing is not that tasks sit at 90% for so long; it is that it always comes as a surprise when it happens! It should not be a surprise. With BUFD, it is impossible for people to understand and plan a coverage model that will yield meaningful 100% coverage. It is also impossible to comprehend the requirements of the environment and tests, especially considering the random nature and uncertainty of the stimulus. BUFD will not give a team all the answers; rework of the environment will happen regardless of whether or not the schedule says so!

I’m interested in whether or not others share my skepticism with respect to constrained random verification. Has it lived up to its potential? Is constrained random as effective as we think? Could we be doing better?

Feel free to offer opinions… especially off-the-wall opinions and/or those that contradict my own :).

Fire away!

-neil

5 thoughts on “Does Constrained Random Verification Really Work?”

  1. On one project, we built random controllers that were constrainable from the command line, then a script to permute attributes (like here), and built hundreds of well-named single-transaction and multiple-transaction tests with each transaction permutation.

    We constrained the permutations to the features we were supporting first, then relaxed those constraints as the project progressed, adding more tests.

    The predictability of what was covered in each test allowed everyone to prioritize development, and the tight constraints made sure the random engine could cover all of the attribute permutations long before functional coverage was online. In this way, we developed and tested one feature at a time in the design, testbench, and stimulus semi-simultaneously. It was inexpensive because we reused everything we built and learned for heavy, system-level, random testing later. Almost all of our basic and advanced tests were used across several environments, IPs and projects because the random engine handled portability concerns.

    It doesn’t have to be an either-or situation. If you build your random engine right, you can leverage it for detailed testing, improving it for later, less constrained testing.
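    As a minimal sketch of this kind of command-line constrainable stimulus (the transaction class, its fields and the +RELAX plusarg are illustrative assumptions, not details from the project described above): start with a tight constraint that makes early runs nearly directed, then switch it off as confidence grows.

    ```systemverilog
    // Hypothetical transaction with a "walk before you run" constraint.
    class bus_txn;
      rand bit [31:0]   addr;
      rand int unsigned len;
      rand bit          is_write;

      // Baseline legality constraints that always apply.
      constraint c_legal { len inside {[1:16]}; addr[1:0] == 2'b00; }

      // Tight early-project constraint; switched off from the test or a
      // plusarg once the basic feature is proven.
      constraint c_tight { len == 1; is_write == 1; }
    endclass

    module permute_demo;
      initial begin
        bus_txn t = new();
        int relax = 0;

        // e.g. +RELAX=1 on the simulator command line loosens the stimulus.
        void'($value$plusargs("RELAX=%d", relax));
        if (relax) t.c_tight.constraint_mode(0);

        repeat (5) begin
          if (!t.randomize()) $error("randomize failed");
          $display("addr=0x%08h len=%0d write=%0b", t.addr, t.len, t.is_write);
        end
      end
    endmodule
    ```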

    I don’t think random simulation implies BUFD, but I guess it depends a little on how we define “random simulation”.

    Thanks for the post!

  2. I too believe that constrained random does not necessarily imply BUFD. We are developing our FPGA product as a sequence of stories from a backlog using scrum. For each of those stories we also develop the test bench and use it to exercise the design, demonstrating that the story can meet its acceptance criteria. We use coverage points to measure whether or not we hit the assorted acceptance criteria for the story using constrained random stimulus.

    For the purpose of demonstrating the story, we massively constrain the stimulus so that it essentially becomes directed. When the story is declared to be “done”, we then loosen the constraints for broader stimulus that runs every night for ongoing regression. Note that the automated regressions also advance the randomization seed so that we get parametrically similar, but not identical, test stimulus every night (man that sounds pretentious).
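    As a rough sketch of the “constrain hard for the story, loosen for regression” idea (the packet fields, coverpoints and +STORY_MODE plusarg are hypothetical, not the actual environment described here): the same class serves both modes, and the nightly regression simply drops the tight constraint and supplies a fresh seed.

    ```systemverilog
    class pkt;
      rand bit [7:0]    kind;
      rand int unsigned size;

      constraint c_legal      { size inside {[1:1500]}; }
      // Essentially-directed stimulus while the story is in progress.
      constraint c_story_only { kind == 8'h01; size == 64; }

      // Coverage points standing in for the story's acceptance criteria.
      covergroup cg;
        coverpoint kind { bins story_kind = {8'h01}; bins others = default; }
        coverpoint size { bins small = {[1:64]}; bins big = {[65:1500]}; }
      endgroup

      function new();
        cg = new();
      endfunction
    endclass

    module story_demo;
      initial begin
        pkt p = new();

        // +STORY_MODE keeps the tight constraint on for the in-progress story;
        // the nightly regression omits it and runs with an advanced seed.
        if (!$test$plusargs("STORY_MODE")) p.c_story_only.constraint_mode(0);

        repeat (100) begin
          if (!p.randomize()) $error("randomize failed");
          p.cg.sample();
        end
        $display("coverage = %f%%", p.cg.get_coverage());
      end
    endmodule
    ```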

    So CR verification is not incompatible with agile, and if you are smart it can be very compatible.

    Also note that there are other emerging, so-called “intelligent” random stimulus methods that, on paper at least, achieve higher coverage in less time. Time will tell if these pan out.

    All that being said, constrained random and a large “black box” test bench do have some drawbacks. At times designers need a test that can execute very quickly in a predictable way. Even a simulation time of 15 minutes would be considered too long for their immediate purpose. This is where TDD can come into play, and we are experimenting with it, having recently taken some TDD training. TDD will allow designers to contribute to the overall verification objective in a way that is also compatible with their own needs. And if you do TDD correctly, you can end up with a fully automated test suite that complements the constrained random test bench quite nicely.

    We are in the early days of experimenting with TDD and we can see that it has promise. We need to do some further work to determine if we can integrate TDD into our practices and fully realize its potential. We have high hopes, but only time will tell. And if we are not successful, it does not mean that others could not be.

    As an industry we need to keep an open mind regarding verification technologies and approaches. Dogma will not serve us well. Empirical results should serve us better.

    Alan

  3. Tommy, Alan,

    Thanks for the comments. Very good points made by both of you. The one that sticks out to me is the “walk before you run” approach you both mention: starting out highly constrained, then relaxing constraints as you gain confidence in the design/test bench. A question back to both of you: do you build the entire test bench and then start with the highly constrained tests, or do the highly constrained tests rely on a partially implemented test bench?

    Alan, it’s also nice to hear you’re experimenting with TDD. I’d appreciate any commentary on the material we lay out in November. Similar to you, I feel like I’ve finally experienced the potential – albeit on a small scale – and have similarly high hopes.

    neil

  4. Hi Neil,

    To elaborate on our TDD work a bit … TDD does not preclude the use of constrained random. In our early experiments we have been using a mix of directed and constrained random stimulus in the unit micro tests that come out of the TDD method. Also, TDD does not apply only to the design code: the code for the test bench that will be used for more exhaustive integration testing can also be developed (and verified) using TDD.
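    A minimal sketch of what such a micro test might look like, assuming a small hypothetical DUT (a saturating 8-bit adder); none of the names come from the environment described here. A few directed corner cases are written alongside the feature, then a short burst of constrained-random checks runs against the same reference model.

    ```systemverilog
    // Hypothetical DUT: a saturating 8-bit adder, kept trivial on purpose.
    module sat_add8 (input logic [7:0] a, b, output logic [7:0] y);
      logic [8:0] sum;
      assign sum = a + b;
      assign y   = sum[8] ? 8'hFF : sum[7:0];
    endmodule

    // The micro test: directed checks first, then random checks.
    module sat_add8_micro_test;
      logic [7:0] a, b, y;
      sat_add8 dut (.a(a), .b(b), .y(y));

      // Simple reference model used by both kinds of stimulus.
      function automatic logic [7:0] model(input logic [7:0] a, b);
        int s = a + b;
        return (s > 255) ? 8'hFF : s[7:0];
      endfunction

      initial begin
        // Directed checks: the obvious corners.
        a = 8'h00; b = 8'h00; #1 assert (y == 8'h00);
        a = 8'hFF; b = 8'h01; #1 assert (y == 8'hFF);

        // Constrained-random checks against the same model.
        repeat (20) begin
          a = $urandom_range(255);
          b = $urandom_range(255);
          #1 assert (y == model(a, b))
            else $error("a=%0d b=%0d y=%0d exp=%0d", a, b, y, model(a, b));
        end
        $display("sat_add8_micro_test: done");
        $finish;
      end
    endmodule
    ```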

    It is a powerful concept …

    Cheers … Alan

  5. And finally, to answer Neil’s question back to me: no, we don’t build the whole test bench before we start executing tests. As described in my first comment, the design and test bench are built together, story by story. The chosen story is based on a business value ranking from the Product Backlog. Both the design and the test bench are built incrementally. We only really need to understand detailed requirements for the story currently in progress. In practice, however, we have a view of the requirements looking a little into the future (maybe enough for one or two sprints or iterations). There is no need for BUFD or BUFRM (Big Up Front Requirements Management).

    I am looking forward to the TDD material coming next month.
