A Cure For Our Coverage Hangover

Coverage hangover.

That’s what sets in after we’ve hit all the easy stuff and moved on to targeting the more obscure corners of a coverage model. The pace slows. Progress stalls. We have lots of review meetings to debate the merits of coverpoints, some of which we may not even understand. Through trial and error we plod along as best we can until someone says whatever we have is good enough (because 100% coverage is impossible). Then we shrug our shoulders, add a few exclusions, write up a few waivers, shake off the hangover and move on.
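To make that end-of-the-grind bookkeeping concrete, here’s a minimal Python sketch of how exclusions and waivers change a coverage score. The function and the numbers are mine for illustration; they aren’t taken from any particular coverage tool.

```python
# Hypothetical coverage-closure arithmetic; the function name and the
# numbers are illustrative, not from any real coverage tool or device.

def coverage_score(bins_hit, bins_total, excluded, waived):
    """Exclusions shrink the model (unreachable bins drop out of the
    denominator); waivers get credited as closed without being hit."""
    countable = bins_total - excluded
    credited = min(bins_hit + waived, countable)
    return 100.0 * credited / countable

# A 10,000-bin model: 9,300 bins hit in regression, 500 excluded as
# unreachable, 150 waived after review.
print(f"{coverage_score(9300, 10000, excluded=500, waived=150):.1f}%")  # 99.5%
```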

I’ve had coverage hangover several times. I’m sure we all have. With some devices – the really massive SoCs – there are verification engineers who live through coverage hangover for months at a time. Their only reprieve, if you can call it that, tends to be bug fixing. If they’re lucky, they’ll get to implement a new feature now and then. Otherwise, it’s a cycle of regress, analyze, tweak and repeat.

The worst part of a coverage hangover is that the next hangover is guaranteed to be worse because the next device is always bigger. At least that’s what happens with current verification strategies. I’d like to propose we break with common practice and reset. Regrettably, that won’t change the fact that the coverage space will continue to grow. But it will give some relief for the next few generations while folks smarter than me find better ways to define, collect and analyze coverage.

When it comes to closing coverage, our focus and timing have always struck me as being out of tune with the reality of the mission. I’m proposing we change that by breaking coverage into a series of steps that we can focus on independently, moving from one type to the next as features mature.
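To show what I mean by steps, here’s one possible shape in Python. The stage names and their ordering are my own invention for the sake of an example; the point is only that earlier stages gate later ones so attention stays on one coverage type at a time.

```python
# Hypothetical staging of coverage closure; the stage names and their
# ordering are invented for illustration.

COVERAGE_STAGES = ["code", "functional", "cross", "assertion"]

def current_focus(closed):
    """Return the first stage that isn't closed yet; everything after
    it waits, so the team works one coverage type at a time."""
    for stage in COVERAGE_STAGES:
        if stage not in closed:
            return stage
    return None  # coverage is done

print(current_focus({"code"}))  # -> functional
```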

5 Steps In Design Maturity

Publishing Portable Stimulus And Integrated Verification Flows, where I came up with the graphic that plots different verification techniques as a function of design scope vs. abstraction, gave me an opportunity to think more critically about verification than I have in the past. I’ve been recognizing different patterns and habits from my experiences and getting a better feel for how they’ve helped or hindered teams I’ve worked with. My most recent navel gazing has led to a new and improved view of how design and testbench code matures over time; a meaningful, useful way to quickly summarize quality.

This is the same diagram I published a few weeks ago but with different classifications overlaid. I call them 5 steps in design maturity: broken, sane, functional, mature and usable. In my mind, every feature we create, every chunk of code we write, progresses through these 5 classifications on its way to delivery.
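The five names come straight from the post; the enum and the promote helper below are just my sketch of how that progression might be modeled, not anything the post defines.

```python
from enum import IntEnum

class Maturity(IntEnum):
    """The post's 5 steps in design maturity, in delivery order."""
    BROKEN = 0
    SANE = 1
    FUNCTIONAL = 2
    MATURE = 3
    USABLE = 4

def promote(step: Maturity) -> Maturity:
    """Advance a feature one step; USABLE is the end of the line."""
    return Maturity(min(step + 1, Maturity.USABLE))

print(promote(Maturity.SANE).name)  # -> FUNCTIONAL
```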

Verification Planning From The Top Down

I’ve been writing a lot about integrated verification flows over the last several weeks. I’ve had lots to say about how techniques fit together, but I haven’t talked much about what holds everything together. This is where shared development objectives come in. Fundamental to an integrated verification flow is establishing shared objectives within the team. When I say shared objectives, I’m talking about objectives that represent tangible, demonstrable value at an integrated system level and that require input from across an entire team.

A test plan is a good place to establish shared objectives. In this post, we’ll work out a method for test planning that takes advantage of the dependencies between verification techniques, all for the purpose of creating tangible outcomes. Easy in theory but challenging in practice, it involves bottom-up application of a top-down test plan with the primary motivator being early and continuous delivery of product-level outcomes.
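As a rough picture of what bottom-up application of a top-down plan could look like, here’s a small Python sketch. The objectives and dependencies are invented; a real plan would come from the team’s own feature list.

```python
# Hypothetical test plan: a product-level outcome at the top, with the
# block-level objectives it depends on below it. Names are invented.

plan = {
    "boot firmware on the full chip": ["cpu executes code",
                                       "memory controller moves data"],
    "cpu executes code": [],
    "memory controller moves data": [],
}

def bottom_up(plan, objective):
    """Yield objectives leaves-first: block-level work gets scheduled
    in service of the product-level outcome that sits above it."""
    for dep in plan[objective]:
        yield from bottom_up(plan, dep)
    yield objective

for step in bottom_up(plan, "boot firmware on the full chip"):
    print(step)
# cpu executes code
# memory controller moves data
# boot firmware on the full chip
```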