Balancing Verification Development With Delivery

Verification engineers have a habit of over-engineering testbenches and infrastructure. We can all admit that, no? I can certainly admit it. From first-hand experience, I can also admit that the testbenches I’ve over-engineered had a lot of waste built into them: unnecessary features I forced people to use and features that were never used at all. And there’s no good reason for it other than that I like building new testbench features. Don’t think I’m alone there.

We’re overdue for a shift in verification. We need to de-emphasize the value of testbench and infrastructure development and re-emphasize the value of delivery. Here’s how we get started.


Reframe The Value Of Verification

Verification as a specialization started about 20 years ago with the rise of constrained random in Vera and Specman. It was an object-oriented (r)evolution of sorts that grew base class libraries, testbench architects, large self-checking testbenches, randomized stimulus and functional coverage. As more engineers bought in and momentum grew, we started comparing the complexity of testbenches to the complexity of design. The next natural step was to associate complexity with value and thereby claim the value of verification was on par with design.

It hurts to say this, but creating the perception that verification and design are comparably valuable has been a massive mistake on our part. Verification is important, but it will never be as important as design. Why do I say that?

While verification is a means to an end, design is the actual end.

The first step in balancing development and delivery is reframing the value of verification. We need to stop perceiving our worth relative to testbench complexity and start measuring it relative to design completion. That means turning “look how cool my testbench is” into “look how quickly I verified this design feature”.

Take Pride In Doing The Minimum

This is a difficult attitude to adopt for technical people who like building cool solutions, but taking a minimalist approach to testbench infrastructure is a great way to ensure high value for the effort. My idea of minimalist means infrastructure constantly kept in a state where it supports current design and test requirements but no more than that. To me, this is the definition of balance when it comes to development and delivery: every ‘next test’ motivates development of supporting infrastructure, be it stimulus, checking, coverage or whatever else is necessary.

Admit Tests Aren’t That Bad

Reinforcing our complexity-based value metrics is the idea that test writing is inefficient, new-grad grunt work to be mocked and marginalized. I remember that attitude taking hold as a result of early constrained random propaganda (e.g. “Imagine having one constrained random test to replace all these directed tests!”).

I think we all understand this by now but it’s worth stating anyway: the UVM sequences we’ve been writing are actually tests. Granted, they’re portable the way our old directed tests weren’t, but they’re also confusing the way our old directed tests weren’t, and they lead to a lot of sequence-specific infrastructure (i.e. the stuff that used to be in the old directed test but is now scattered throughout the testbench).
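
To make that concrete, here’s a minimal sketch of the kind of sequence I mean. The item type and field names (axi_item, addr, burst_len, is_write) are made up for illustration and the macros are plain UVM; the point is that once you strip away the class wrapper, this “sequence” reads exactly like a directed test, while the check that goes with it lives somewhere else in the testbench.

// Hypothetical example: a UVM sequence that is really a directed test.
// axi_item and its fields are made-up names for illustration only.
class single_write_seq extends uvm_sequence #(axi_item);
  `uvm_object_utils(single_write_seq)

  function new(string name = "single_write_seq");
    super.new(name);
  endfunction

  virtual task body();
    axi_item req;
    // The test intent: one 4-beat write to a known address.
    // The matching check, meanwhile, sits in a scoreboard somewhere
    // else in the testbench instead of right here beside the stimulus.
    `uvm_do_with(req, { is_write  == 1;
                        addr      == 32'h0000_1000;
                        burst_len == 4; })
  endtask
endclass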

The good news with respect to test writing is that I’ve seen several teams turn the corner on this. Having realized how ineffective they are, not many teams attempt early shoot-for-the-moon constrained random tests anymore. Instead, we start with tests that are heavily constrained. But we’ve also slipped into a mode where the amount of sequence-related infrastructure teams produce rivals the checking and coverage infrastructure. I’d like to see us scale back on the sequence infrastructure. I think we do that by refining test formats that make test intent less opaque (i.e. more functionality deliberately written into a test and less functionality hidden elsewhere). We can also re-accept test writing as important work and a productive use of time.
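
And here’s a rough sketch of a test format with the intent up front, again with made-up names (reg_item, REG_READ and the version register address and value are all hypothetical, and I’m assuming the driver fills in req.data when the read completes). The stimulus and the expected outcome both live in the test body rather than being split between a sequence, a config object and a generic scoreboard.

// Hypothetical sketch: stimulus and expectation written into the test itself.
// reg_item, REG_READ, addr and data are invented names; assumes the driver
// populates req.data with the read response before the item completes.
class version_reg_test_seq extends uvm_sequence #(reg_item);
  `uvm_object_utils(version_reg_test_seq)

  function new(string name = "version_reg_test_seq");
    super.new(name);
  endfunction

  virtual task body();
    reg_item req;

    // Stimulus: read the (hypothetical) version register at address 0x0.
    `uvm_do_with(req, { kind == REG_READ; addr == 32'h0000_0000; })

    // Check: the expectation is stated right here in the test,
    // not hidden in a scoreboard on the other side of the testbench.
    if (req.data != 32'h0001_0002)
      `uvm_error("version_reg_test_seq",
                 $sformatf("unexpected version register value 0x%0h", req.data))
  endtask
endclass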

Develop For Users, Not For Yourself

It’s becoming increasingly rare to have verification engineers working in a bubble. Far more common is to work in teams of 2 or 3, maybe more. And even if you are on your own, with our industry’s emphasis on testbench portability, it’s all but guaranteed others will be using your testbench in one way or another.

The most satisfying feeling I get at work comes when a colleague looks at my testbench or VIP, understands what it does and uses it without my help. It’s an absolutely outstanding feeling. That only ever happens if I give colleagues features they expect wrapped in a usage model that’s easy to understand (i.e. I write my code for others, not for myself). It’s guaranteed not to happen when I get cute with my code or over-engineer a solution by mixing in unnecessary features and switches. The minimalist approach to developing infrastructure combined with simple usage models – usage models meant for users – is the right way to ensure you’re supporting the infrastructure that’s necessary while avoiding the stuff that isn’t.
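
For what it’s worth, here’s the flavor of usage model I’m talking about, sketched with invented names: a config object where every field has a sensible default, so the simplest possible usage is “create it, override the one thing you care about, pass it in” rather than a wall of switches someone has to understand before anything runs.

// Hypothetical VIP config: sensible defaults so the common case needs no study.
class my_vip_cfg extends uvm_object;
  `uvm_object_utils(my_vip_cfg)

  int unsigned addr_width      = 32;  // common case works out of the box
  int unsigned data_width      = 32;
  bit          enable_coverage = 1;   // on by default; users can opt out

  function new(string name = "my_vip_cfg");
    super.new(name);
  endfunction
endclass

// Typical usage from a test or environment:
//   my_vip_cfg cfg = my_vip_cfg::type_id::create("cfg");
//   cfg.data_width = 64;  // only override what differs from the default
//   uvm_config_db #(my_vip_cfg)::set(this, "env.vip*", "cfg", cfg);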


It’s time for better balance between development and delivery. In all honesty, I think I’m ready for a world where verification engineers don’t even use the word testbench, where everything we do is quoted in terms of design completion and delivery. That’d be a nice world to live in.

-neil

2 thoughts on “Balancing Verification Development With Delivery”

  1. Sometimes planning for reusability just invents more work. I have seen cases where horizontal reusability has resulted in many months of maintaining code across testbenches, whereas it would have been easier, and less maintenance, if we had not reused the code. Reusability as a methodology should not be abused and needs judgment.
