I’m Not Anti-Constrained Random…

I’m anti-constrained random. I would never use constrained random. I think it’s an overhyped technique that doesn’t produce the results we think it does. People would be better off forgetting about it and going back to directed testing.

Ok… a little strong perhaps, but if you’ve read my posts (aka: rants) on constrained random verification, you may assume that’s how I really feel: that constrained random is something I’m opposed to and that I’ve turned back the clock to the stone age of directed testing.

Not true.

I’ve been using constrained random for about 11 years now and I continue to use it… just not the way I used to. For the people who see me as anti-constrained random, please believe me when I say I’m not anti-constrained random. It’s just that…

I’m Pro-using the right technique for the job: Constrained random used to be my go-to technique. It was automatic. Before I even knew much about a design, I’d be thinking about how to automate the checking in my environment and how to randomize transactions. Not so anymore. I don’t think using constrained random verification should be assumed, and I don’t think constrained random tests should be the first thing we run against a design (not even tightly constrained tests). Directed tests should come first because directed testing is the technique that best addresses the easy stuff. Only after you’ve gone as far as directed tests can take you, and you feel the need to go further, do you turn to constrained random; that’s when constrained random becomes the right technique for the job.
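
To make that ordering concrete, here’s a minimal sketch in SystemVerilog. Everything in it (bus_txn, the drive task, the address constraint) is hypothetical and stands in for whatever your real interface looks like; the point is only the order: a handful of directed operations first, then a tightly constrained random loop if and when you need to go further.

```systemverilog
// Hypothetical transaction with a deliberately tight constraint to start;
// loosen it only once the directed tests below are passing.
class bus_txn;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand bit        write;
  constraint small_addr_c { addr inside {[0:15]}; }
endclass

module directed_then_random;
  bus_txn t = new();

  // stand-in for real pin wiggling on your bus interface
  task automatic drive(bit [7:0] addr, bit [31:0] data, bit write);
    $display("drive: addr=%0h data=%0h write=%0b", addr, data, write);
  endtask

  initial begin
    // 1) directed: the easy, obvious cases, checked explicitly
    drive(8'h00, 32'hdead_beef, 1'b1);
    drive(8'h00, 32'h0,         1'b0);

    // 2) constrained random: only after the directed tests have taken
    //    you as far as they can
    repeat (10) begin
      if (!t.randomize()) $fatal(1, "randomize failed");
      drive(t.addr, t.data, t.write);
    end
  end
endmodule
```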

I’m Pro-simplicity and usability: heavyweight “methodologies” like UVM aren’t a requirement for constrained random verification, but it’s becoming increasingly rare to see constrained random without one. In practice, that means a more complex simulation technique tends to bring a lot of “methodology” complexity along with it. The good news is that with a little experience, you can choose the parts of a “methodology” that make you more productive while leaving the rest on the shelf. For example, you could take a pass on the uvm_sequencer for stimulus but use the uvm_in_order_comparator to automate your checking. Or you could forego the uvm_config_db for dispersing configuration information in favour of good old hierarchical procedural assignments (sketched below). In short, don’t let heavyweight “methodologies” dictate how you build a constrained random testbench. Instead, be practical and limit complexity wherever possible.
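
As a sketch of that last example, here’s roughly what “good old hierarchical procedural assignments” can look like in place of the uvm_config_db. The class names (my_config, my_env, my_test) and the num_txns knob are hypothetical; the pattern is simply the test assigning a config object directly through the hierarchy instead of publishing it with uvm_config_db::set/get.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_config extends uvm_object;
  `uvm_object_utils(my_config)
  int unsigned num_txns = 100;  // hypothetical configuration knob
  function new(string name = "my_config");
    super.new(name);
  endfunction
endclass

class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  my_config cfg;  // assigned directly by the test; no uvm_config_db::get needed
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

class my_test extends uvm_test;
  `uvm_component_utils(my_test)
  my_env    env;
  my_config cfg;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    env = my_env::type_id::create("env", this);
    cfg = my_config::type_id::create("cfg");
    cfg.num_txns = 25;
    // plain hierarchical assignment instead of uvm_config_db::set/get;
    // build_phase runs top-down, so cfg is in place before env builds
    env.cfg = cfg;
  endfunction
endclass
```

Whether that’s actually simpler than the uvm_config_db depends on how deep the hierarchy is and how often the configuration changes hands, which is exactly the kind of judgment call the “pro-simplicity” rule is about.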

(For the record… many people advocate for selective “methodology” usage, so what I’m suggesting is not new. The bad news, though, is that I don’t think many people are actually practicing it. Admittedly, I have a problem with it myself.)

A caveat to this “pro-simplicity/usability” rule: because you can’t standardize simplicity and usability, they’re different for everyone. What’s simple and usable for one person may be complicated and useless for somebody else, which makes things tough, to say the least :).

I’m Pro-early results: taking several weeks to put together a testbench while gaining little or no insight into design quality is unacceptable. Yet with the long development times required to construct a fully functioning constrained random testbench, that’s exactly what happens. Our job is to verify a design, not build a testbench, so that’s where the focus should be. Immediately. I like early directed testing as a way to do away with the results blackout that looms during weeks of testbench development. I recommend that to others as well. If/when you do use constrained random verification, use incremental development of the testbench to collect early results (and avoid leaving yourself and your team in the dark).

I’m Pro-common sense: In the past, I’ve likened constrained random verification to ignorance-based exploratory testing where a million monkeys (aka: massive server farms) run constraint sets continuously and repeatedly, without really understanding what they’re looking for, in hopes of squashing bugs in unknown corners of the design universe. It shouldn’t be the case, but constrained random tests and absurdly large coverage models are often used as a substitute for understanding what we’re doing and for prioritizing what’s important. That’s not right. We still need to know what we’re looking for. Constrained random testbenches aren’t seeing-eye dogs. If we use them to cross the street without looking both ways first, we’ll probably get flattened.

Put all that together and what do you get? I think you can actually call me pro-constrained random… for now anyway… but only when it’s applied appropriately, where complexity is managed in the interest of usability and the focus is on early results in an environment where common sense prevails. Those are my criteria for a successful constrained random verification strategy and I use them to guide how I use constrained random verification…

…or whether or not I even use it at all!

DISCLAIMER: Opinions subject to – or expected to – change :)!

-neil

2 thoughts on “I’m Not Anti-Constrained Random…”

  1. Good post. A set of directed tests will find many major bugs in the early stages of a project, while constrained random will find many of the remaining bugs. On follow-up projects, the constrained random test generator should need relatively minor fixes to generate tests, while the directed tests would be more of a sanity check.

  2. As I look at what I have seen happen with constrained-random, I agree with a lot of what you say. I also wonder how many of the bugs detected using constrained-random (much later in the design) would have been caught much earlier (and with dramatically less effort) by using unit testing.

    Do you have any data on the root causes of constrained-random bugs: system-level requirements, gaps between the specifications of two connected blocks, inadequate (or false-pass) lower-level testing?
