Testing UVM Drivers (Without The Sequencer)

So. A couple weeks ago I introduced SVMock. It’s a mocking framework for use with SVUnit that makes it easier to isolate, check and control behaviour of SystemVerilog classes. Unsurprisingly, the response to that announcement averaged out to tepid. A few people were immediately interested. I’m sure a huge number of people didn’t care, probably because mocking has never been on their unit test radar. Then there were people in the middle who were interested but I didn’t give them enough to get over the great-but-now-what hurdle.

This post is for the last group, the people who reckon SVMock can help them write better unit tests but don’t quite see how. The test subject to get the point across: the uvm_driver. Continue reading

SVMock Version 0.1

A few times over the last while people have suggested I add a mocking framework to SVUnit. Took me a while, but I finally got around to it. SVMock version 0.1 is now available on GitHub. I’m actively working on it but the base functionality is there and ready for use.

If you’re new to mocking, in the context of unit testing it’s a technique for replacing dependencies of a unit-under-test to make it easier to test. Mocks increase isolation so you can focus on development without the hassle of including large chunks of code, infrastructure or IP you don’t necessarily care about. Mocks inject new sample points and assertions that make it easier to capture interactions with surrounding code. They also offer increased control by replacing potentially complex dependency usage models with direct control over interactions. In short, mocking helps you focus on what you care about and ignore the stuff you don’t.
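To make that concrete, here’s a minimal hand-rolled sketch in plain SystemVerilog. The class names (bus_reader, status_checker, mock_bus_reader) are invented for illustration and this isn’t SVMock’s API; it just shows the substitution SVMock is meant to streamline: a mock subclass stands in for the real dependency, gives the test direct control over what the dependency returns and adds sample points for checking the interaction. In an SVUnit test the checks would be `FAIL_UNLESS macros; plain $error calls keep the sketch self-contained.

```systemverilog
// Dependency of the unit under test. The real version might talk to a bus;
// for mocking purposes it only needs an overridable (virtual) method.
class bus_reader;
  virtual function int read(int addr);
    return 0;
  endfunction
endclass

// Unit under test: decides link status from whatever the reader returns.
class status_checker;
  bus_reader reader;

  function new(bus_reader r);
    reader = r;
  endfunction

  function bit link_is_up();
    return ((reader.read('h10) & 1) != 0);
  endfunction
endclass

// Hand-rolled mock: canned return value (control) plus sample points (checking).
class mock_bus_reader extends bus_reader;
  int read_value; // set by the test
  int last_addr;  // address of the most recent read, captured for checking
  int read_count; // number of reads the unit under test performed

  virtual function int read(int addr);
    last_addr = addr;
    read_count++;
    return read_value;
  endfunction
endclass

// Sketch of a unit test with the mock standing in for the real dependency.
module status_checker_unit_test_sketch;
  mock_bus_reader mock;
  status_checker  uut;

  initial begin
    mock = new();
    uut  = new(mock);      // isolation: no bus model, no real reader required

    mock.read_value = 1;   // control: dictate the dependency's behaviour
    if (!uut.link_is_up())      $error("expected link up");
    if (mock.last_addr != 'h10) $error("unexpected read address");
    if (mock.read_count != 1)   $error("unexpected read count");
  end
endmodule
```

The unit under test never knows the difference, which is the whole point: the test controls and observes every interaction with the dependency directly instead of dragging in the real infrastructure behind it.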

Now to SVMock… Continue reading

Lessons Learned From An SVUnit Early Adopter

Funny thing happened today.

After reaching out to people last week for SVUnit success stories, to my pleasant surprise I found one in my inbox this morning. It was from SVUnit early adopter Manning Aalsma of Intel (formerly of Altera). When I say early adopter, I mean early. Looking back, I found the first email he ever sent me still in my inbox requesting an early version of SVUnit. Timestamp on that was Jan 8, 2012!

I’m happy to have people like Manning using and advocating for the framework and the techniques that go with it. Here’s what he had to say about how he and his teammates are using SVUnit.

Thanks Manning!

-neil


Hi Neil,

Catching up on a little reading, I came across your post from a week ago or so:

http://agilesoc.com/2018/06/11/is-svunit-a-legitimate-verification-framework/

I thought I’d share my experience so far. Continue reading

Is SVUnit A Legitimate Verification Framework?

Is SVUnit a legit verification framework?

I get that question periodically from folks who are looking into incorporating SVUnit into their verification flow. Of course it’s always phrased a little differently depending on who’s asking – How many users are there? What is the bug rate? What teams have integrated it into their verification flow? Have people published papers about it? Is it actively being developed? Do others contribute to the development? – but the intent behind the question always feels the same. We developers want to know that others have blazed the trail before us, that the tools we’re considering have a proven track record, that the major bugs and issues are long since fixed and that tools are truly ready before we get started with them.

Unfortunately, it’s a tough question to answer. Because SVUnit is open-source and usage is basically anonymous, I can’t make any definitive claims unless people reach out to me personally.

That said, I’m confident we have enough anecdotal evidence for a solid yes. SVUnit is a legitimate test framework for design and verification engineers looking for an alternative that addresses low-level code quality.

Here are a few stats to support that yes: Continue reading

Balancing Verification Development With Delivery

Verification engineers have a habit of over-engineering testbenches and infrastructure. We can all admit that, no? I can certainly admit it. From first-hand experience, I can also admit that the testbenches I’ve over-engineered had a lot of waste built into them: unnecessary features I forced people to use and features that were never used at all. And there’s no good reason for it other than I like building new testbench features. Don’t think I’m alone there.

We’re overdue for a shift in verification. We need to de-emphasize the value of testbench and infrastructure development and re-emphasize the value of delivery. Here’s how we get started. Continue reading

A Cure For Our Coverage Hangover

Coverage hangover.

That’s what sets in after we’re done hitting all the easy stuff and move on to targeting the more obscure corners of a coverage model. The pace slows. Progress slows. We have lots of review meetings to debate the merits of coverpoints, some of which we may not even understand. Through trial-and-error we plod along as best we can until someone says whatever we have is good enough (because 100% coverage is impossible). Then we shrug our shoulders, add a few exclusions, write up a few waivers, shake off the hangover and move on.

I’ve had coverage hangover several times. I’m sure we all have. With some devices – the really massive SoCs – there are verification engineers that live through coverage hangover for months at a time. Their only reprieve, if you can call it that, tends to be bug fixing. If they’re lucky, they’ll get to implement a new feature now and then. Otherwise, it’s a cycle of regression, analysis, tweak and repeat.

The worst part of a coverage hangover is that the next hangover is guaranteed to be worse because the next device is always bigger. At least that’s what happens with current verification strategies. I’d like to propose we break common practice with a reset. Regrettably, it won’t change the fact that coverage space will continue to grow. But it will give some relief for the next few generations while folks smarter than me find better ways to define, collect and analyze coverage.

In closing coverage, our focus and timing have always struck me as being out of tune with the reality of the mission. I’m proposing we change that by breaking coverage into a series of steps that we can focus on independently, moving from one type to the next as features mature. Continue reading

5 Steps In Design Maturity

In publishing Portable Stimulus And Integrated Verification Flows, where I came up with the graphic that plots different verification techniques as a function of design scope vs. abstraction, I gave myself an opportunity to think more critically about verification than I have in the past. I’ve been recognizing different patterns and habits from my experiences and getting a better feel for how they’ve helped or hindered teams I’ve worked with. My most recent navel gazing has led to a new and improved view of how design and testbench code matures over time: a quick way to summarize quality that’s meaningful and useful.

This is the same diagram I published a few weeks ago but with different classifications overlaid. I call them 5 steps in design maturity: broken, sane, functional, mature and usable. In my mind, every feature we create, every chunk of code we write, progresses through these 5 classifications on its way to delivery. Continue reading

Verification Planning From The Top Down

I’ve been writing a lot about integrated verification flows the last several weeks. I’ve covered a lot about how techniques fit together, but I haven’t talked much about what holds everything together. This is where shared development objectives come in. Fundamental to an integrated verification flow is establishing shared objectives within the team. When I say shared objectives, I’m talking about objectives that represent tangible, demonstrable value at an integrated system level and require input from across an entire team.

A test plan is a good place to establish shared objectives. In this post, we’ll work out a method for test planning that takes advantage of the dependencies between verification techniques, all for the purpose of creating tangible outcomes. Easy in theory but challenging in practice, it involves bottom-up application of a top-down test plan with the primary motivator being early and continuous delivery of product-level outcomes. Continue reading

Verification That Flows

Writing about portable stimulus has created some neat opportunities for me to swap ideas with others regarding how it fits into our verification paradigm. I like that. From a theoretical standpoint, I think I’ve heard enough to confirm my thought that its sweet spot is verifying feature sets with complex scenarios.

Trickier is to move beyond the theory toward recommendations for how it all fits together. By ‘all’ I mean more than just portable stimulus. I mean how all our verification techniques fit together in a complementary way as part of a start-to-finish verification flow. This is a first attempt at qualifying what ‘all’ looks like. I’ll go through each verification technique, comment on its purpose and how teams transition to/from it, and discuss how it feeds subsequent verification activity. Continue reading

A Verification Where And How

All verification techniques can be effective given the right scope and applied abstraction. At least that’s the argument I started in Portable Stimulus and Integrated Verification Flows with a graphic that plots the effectiveness of several techniques as a function of scope and abstraction. I have more people agreeing with the idea than disagreeing so I’ve carried on with it. I decided to dig a little deeper into exactly where and how well techniques apply. Continue reading