Functional Verification Doesn’t Have To Be A Sideshow

This is another one of those challenge-the-way-we-think-about-functional-verification posts. The motivation behind it comes from a few different places.

First is a blog posted on eetimes designline by Brian Bailey a couple weeks back called Enough of the sideshows – it’s time for some real advancement in functional verification! In that post, Brian calls out a few techniques – constrained random verification in particular – that have failed to live up to the hype and praise that’s been heaped upon them over the last several years. That’s an industry expert suggesting we re-think the direction we’re going with functional verification tools, so if you haven’t read it, I’d suggest doing so. In case you missed it, Brian also inspired the snappy title of this post :).

Second is the work I’ve been putting into SVUnit and test-driven development. I’ve been having some decent success doing TDD w/SVUnit, enough that it’s quickly becoming my favorite design technique when it comes to verification IP and testbench development. Others are using it successfully as well which gives me hope that someday soon TDD will go mainstream in hardware development.

Also motivating this was the recently announced UVM-Express from Mentor. UVM-Express is a step-by-step approach that I really like and have already written about here.

The final bit of motivation, I have to say, is my slight distaste for UVM and our collective struggle to cage the mythical beast called reusable verification IP. Realizing it is becoming the industry standard and the library of choice for those on the cutting edge, I will use UVM… but only under protest.

<protest>UVM is HUGE and it’s complicated. It’s extensible to the Nth degree yet at the same time arbitrarily confining (sure you can add as many wacky runtime phases as you want, in any order that you want, but don’t even think of creating a component after the build phase or connecting a driver to anything but a sequencer. Just don’t). Its biggest problem is that it sure doesn’t appear to have been developed with the majority of its users in mind. Bleeding edge users… perhaps. The majority… not a chance.</protest>

Sorry for pointing that out. A little harsh perhaps – I don’t mind it now that I’m using it – though the ramp-up is incredibly steep. If I’m the only one who feels that way, go ahead and disagree in the comments.

Put all those things together and what do you get? I think that over the last decade we’ve “evolved” into forgetting two critical aspects of testing:

  • early results are important
  • it’s important to use the right tool for the job

Remember the graphic that shows coverage from constrained random verification relative to directed testing? The one that people have used – and unfortunately I have used a time or two – to explain that if we just wait a little longer (aka: spend more time building the testbench), at some point we’ll suddenly get an explosion of positive results; our coverage line will abruptly charge due north and then gently flatten out to 100%.

The flat line at the beginning of the constrained random curve is us convincing ourselves that early results aren’t important.

That same graphic was used to undermine directed testing as a viable technique for making progress. Sure you might use it to verify a few corner cases on your way out of the office the day before RTL freeze, but the big progress is made with constrained random. You’d be crazy not to believe that. Just look at the graphic!

Relegating directed testing to the corner-case-afterthoughts is us forgetting that there’s a right tool for every job.

That’s a pretty bleak assessment of how we’ve fared with cutting edge functional verification tools and techniques over the last decade or so. But I really believe it’s a fair assessment because things have gone off the rails a bit. The good news, though, is that some of the innovation that Brian Bailey is looking for in his article can be done, in my opinion, by rethinking the tools we have as opposed to waiting for what we don’t have. I think that gets done by reasserting the importance of results (i.e. passing tests) as a metric for progress and recognizing there’s a right tool for every job.

Let’s do that by looking at functional verification as a 3 step process.

Step 1: Fix The First Bugs

Basic sanity is a milestone that is incredibly important for all development teams because it’s that first benchmark for progress. It requires little more than absolute simplicity, which is something that teams devoted to constrained random verification have overlooked.

Don’t use a constrained random environment to fix the bugs required to verify basic sanity. Don’t worry about reuse when it comes to basic sanity. And don’t worry about rework. Build the simplest testbench you can with a short list of the simplest (regressionable) directed tests you can think of that’ll verify the simplest function of your DUT. Set a benchmark with basic sanity, then move on to step 2.

If you’re familiar with UVM-Express, think UVM-Express step 1 at this point; a Verilog, UVM-less setup that’s perfectly suited to simple, directed tests useful for verification and design engineers.
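To make step 1 concrete, here’s a sketch of what that simplest-possible testbench might look like. The DUT name, ports and expected behaviour are all hypothetical – the point is a plain, UVM-less module with one self-checking directed test:

```systemverilog
// Hypothetical basic sanity test: plain SystemVerilog, no UVM, no reuse.
// The DUT (an 8-bit pass-through with one cycle of latency) is invented
// for illustration.
module basic_sanity_tb;
  logic       clk = 0;
  logic       rst_n;
  logic [7:0] din, dout;
  logic       valid_in, valid_out;

  always #5 clk = ~clk;

  passthru dut (.*);  // assumed DUT with matching port names

  initial begin
    rst_n = 0; valid_in = 0; din = '0;
    repeat (4) @(posedge clk);
    rst_n = 1;

    // Simplest possible directed stimulus: one known value in
    @(posedge clk); din = 8'hA5; valid_in = 1;
    @(posedge clk); valid_in = 0;

    // Self-check so the test is regressionable, not just eyeballed
    wait (valid_out);
    if (dout === 8'hA5) $display("BASIC SANITY: PASSED");
    else                $fatal(1, "BASIC SANITY: FAILED, dout = %0h", dout);
    $finish;
  end
endmodule
```

Nothing here is reusable and that’s fine; the benchmark is a pass/fail result on day one, not an architecture.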

Step 2: Fix The Bugs You Expect

With only your basic sanity test – and possibly some smoke testing from the designer(s) – it’s pretty safe to assume that the next feature you test will be broken. As will the feature you test after that… and the feature after that… and so on to the end of the feature list. For early code, focus is important. You need to be able to go to a designer and say “this feature isn’t working” and you need directed tests to be able to do that. Directed tests are still the right tool for the job for fixing the bugs you expect; constrained random tests are not.

To fix all the bugs you expect – which in all likelihood applies to most of the features in your design – you might be thinking: isn’t that going to be a lot of directed tests? You’re right to think that. All those tests may seem like a lot of keyboard bashing, but the focus and the short debug cycles are what’ll make this more productive than using constrained random.
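Each of those directed tests can stay small and pointed. As a sketch (the FIFO depth, the `push`/`pop` helper tasks and the `dut.full` reference are all invented for illustration), a feature test is just one feature and one unambiguous answer you can take to a designer:

```systemverilog
// Hypothetical directed test for one feature we expect to be broken:
// the full flag of an assumed 16-deep FIFO. push()/pop() are helper
// tasks assumed to exist elsewhere in the testbench.
task automatic test_fifo_full_flag();
  // Fill the FIFO exactly to capacity
  for (int i = 0; i < 16; i++) push(i);

  // The point of a directed test: "this feature isn't working" is
  // immediately obvious from the failure message
  if (dut.full !== 1'b1)
    $error("full flag not asserted after 16 pushes");

  pop();
  if (dut.full !== 1'b0)
    $error("full flag stuck after a pop");
endtask
```

One feature per test keeps the debug conversation short: when it fails, everyone knows exactly what’s broken.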

Step 3: Fix The Bugs You Don’t Expect

Finally, as you get to testing the unforeseen and the unknown, constrained random tests become the right tool for the job. Here’s where you’re looking for the things you don’t expect; the insidious little bugs that constrained random testing was made for.
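A minimal constrained-random sketch might look like the following; the class, field names and constraint values are invented for illustration. The idea is that randomized transactions explore corners a directed list would never enumerate:

```systemverilog
// Hypothetical constrained-random transaction and driver loop.
module crv_tb;

  class bus_txn;
    rand bit [31:0] addr;
    rand bit [7:0]  len;
    rand bit        is_write;

    // Constrain to the legal space, but leave room for the unexpected
    constraint c_addr { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
    constraint c_len  { len > 0; len <= 64; }
    // Bias toward writes, where (we'll assume) the insidious bugs hide
    constraint c_dist { is_write dist { 1 := 7, 0 := 3 }; }
  endclass

  // Stub driver; a real one would wiggle DUT pins
  task automatic drive(bus_txn t);
    $display("drive: addr=%0h len=%0d wr=%0b", t.addr, t.len, t.is_write);
  endtask

  initial begin
    bus_txn txn = new();
    repeat (1000) begin
      if (!txn.randomize()) $fatal(1, "randomization failed");
      drive(txn);
    end
  end
endmodule
```

By this point the checking is already automated (see the reference model discussion below the steps), so the randomization only has to worry about stimulus and coverage.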

That’s three steps to 100% coverage using techniques we already have, applied at the point where they add the greatest value. Now a few final thoughts related to reuse, reference models and code quality.

I’d recommend not being overly concerned with rework/reuse as you transition from step 1 to steps 2 and 3. I say that because it seems our infatuation with reuse at times pulls us away from the right tool for the job (i.e. why would I do something simple when I can do the same thing with something more complicated that’s poorly suited to what I’m trying to achieve?). And our penchant for extensibility bloats the code with rarely used features. It also adds delay. Reuse is important, but don’t be afraid to add features to the testbench as you need them and rework as necessary (yes… I realize suggesting rework is blasphemous for some but I think it can be extremely productive when properly contained). Incidentally, rework in the software world is called refactoring and it’s seen as a necessary part of development.

Next is the reference model (and scoreboarding), which is necessary for constrained random tests. I suggest building your reference model incrementally as you build your directed tests in step 2. This kind of bite size progress is more manageable than holing up for a few weeks/months and doing it all in one shot. It’s also a twist on directed testing where the stimulus is directed but the checking is automated. Ideally, when you’re done with step 2, your model is done, so with your constrained random tests you’re worried about stimulus and coverage but not the checking.
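An incrementally built scoreboard might be sketched like this; the operations and transform behaviour are invented for illustration. Each case in `predict()` gets added in the same commit as the directed test for that feature:

```systemverilog
// Hypothetical scoreboard whose reference model grows one feature at a
// time, in lockstep with the step-2 directed tests.
typedef enum { OP_PASS, OP_INV } op_e;

class scoreboard;
  // Expected results queued by the (partial) reference model
  bit [7:0] expected_q[$];

  // Called from the stimulus path: model only the features tested so far
  function void predict(bit [7:0] din, op_e op);
    case (op)
      OP_PASS : expected_q.push_back(din);   // modeled with its test
      OP_INV  : expected_q.push_back(~din);  // added with the next test
      default : ;  // unmodeled features: fill in as their tests arrive
    endcase
  endfunction

  // Called from the monitor on every DUT output
  function void check(bit [7:0] dout);
    bit [7:0] exp = expected_q.pop_front();
    if (dout !== exp)
      $error("mismatch: got %0h, expected %0h", dout, exp);
  endfunction
endclass
```

By the end of step 2 the `case` statement covers the feature list, and the same scoreboard checks the constrained random traffic of step 3 for free.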

Lastly, constrained random verification is senseless when applied to poor quality code… which unfortunately is what 99% of us write when we code open loop (i.e. without tests). Test-driven development brings the code-and-test feedback loop that’s sorely lacking in hardware development. I’m using TDD and I love it. The quality of the code I write now is incomparably better than what I used to write. TDD works in software development. It’ll work in hardware development, too. It’s only a matter of time.
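For a flavour of what that feedback loop looks like with SVUnit, here’s a minimal unit test sketch. The unit under test (`add`) is invented for illustration, and the template is abbreviated; see the SVUnit docs for the full boilerplate:

```systemverilog
// Minimal SVUnit-style unit test; macro names follow SVUnit's
// svunit_defines.svh, the unit under test is hypothetical.
`include "svunit_defines.svh"

module adder_unit_test;
  import svunit_pkg::svunit_testcase;

  string name = "adder_ut";
  svunit_testcase svunit_ut;

  // Hypothetical unit under test
  function int add(int a, int b); return a + b; endfunction

  function void build();
    svunit_ut = new(name);
  endfunction

  task setup();    svunit_ut.setup();    endtask
  task teardown(); svunit_ut.teardown(); endtask

  `SVUNIT_TESTS_BEGIN
    `SVTEST(adds_two_numbers)
      `FAIL_UNLESS(add(2, 3) == 5)
    `SVTEST_END
  `SVUNIT_TESTS_END
endmodule
```

Write the test first, watch it fail, make it pass – that tight loop is the point, and it’s exactly what open-loop coding lacks.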

You can find more about TDD here.

I’m not sure if Brian had this type of innovation in mind when he wrote Enough of the sideshows…, but I think changing how we use the tools we already have is innovation that’ll take us much further than where we are now.

It’s also innovation that we’re all capable of.
