A hypothetical for design engineers… what if there were an online tool useful for both documenting your RTL and bootstrapping a testbench? Would you use it?
The tool is WaveDrom. It’s an open source tool hosted at wavedrom.com. You may already use it for documentation. I’ve used it in the past for documenting BFM behaviour. It’s accessible, easy to use and the output is clear. Highly recommended.
If you haven’t seen WaveDrom before, you should load it up to see what it can do. By default, it comes up with a simple req/ack data transfer to illustrate the basics. The input is JSON, which is pretty easy to work with. Output can be exported as PNG or SVG.
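For a flavour of the input format, here’s a minimal hand-written WaveJSON sketch of a req/ack style handshake. It’s my own approximation rather than the tool’s exact default, but it pastes straight into the editor:

    { "signal": [
      { "name": "clk",  "wave": "p......" },
      { "name": "req",  "wave": "0.1..0." },
      { "name": "ack",  "wave": "0...1.0" },
      { "name": "data", "wave": "x.=...x", "data": ["D0"] }
    ]}

Each character in a wave string is one clock cycle; a '.' extends the previous value and '=' opens a labelled data bucket.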
If you want to try something from scratch to see what it’d look like, you can paste in this APB write transaction… Continue reading
After unit testing UVM drivers, which was the topic of Testing UVM Drivers (Without The Sequencer), the second most popular question for which I’ve had no good answer has been “How can I use SVUnit to test my UVM sequences?”.
Thankfully, SVMock seems to have made life easier here as well. With SVMock we can isolate a sequence from its sequencer and driver such that it can be unit tested on its own before it’s used as part of a larger system. Here’s an example of how it works. Continue reading
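For a taste of the idea before you click through, here’s a minimal hand-rolled sketch of that isolation. The burst_seq and bus_item names are hypothetical and the plumbing below is mine, not SVMock’s actual macros; the trick is to stub the start_item/finish_item handshake in a test-only subclass so body() runs without a sequencer or driver and every item the sequence sends gets captured for checking:

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Hypothetical item/sequence names, for illustration only.
    class bus_item extends uvm_sequence_item;
      `uvm_object_utils(bus_item)
      rand bit [31:0] addr, data;
      function new(string name = "bus_item"); super.new(name); endfunction
    endclass

    class burst_seq extends uvm_sequence #(bus_item);
      `uvm_object_utils(burst_seq)
      function new(string name = "burst_seq"); super.new(name); endfunction
      virtual task body();
        repeat (4) begin
          req = bus_item::type_id::create("req");
          start_item(req);
          if (!req.randomize()) `uvm_error("RAND", "randomize failed")
          finish_item(req);
        end
      endtask
    endclass

    // Test-only subclass: stub the sequencer handshake so body() runs
    // standalone and every item the sequence sends gets captured.
    class burst_seq_probe extends burst_seq;
      bus_item captured[$];
      function new(string name = "burst_seq_probe"); super.new(name); endfunction

      virtual task start_item(uvm_sequence_item item,
                              int set_priority = -1,
                              uvm_sequencer_base sequencer = null);
        // no sequencer/driver handshake needed in a unit test
      endtask

      virtual task finish_item(uvm_sequence_item item, int set_priority = -1);
        bus_item bi;
        if ($cast(bi, item)) captured.push_back(bi);
      endtask
    endclass

In an SVUnit test you’d run the sequence directly, then assert on what it sent, e.g. probe.body() followed by a check that probe.captured.size() == 4.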
If you’ve been following my blog since DVCon earlier this year, you’ll have noticed that the introduction of portable stimulus has me thinking more in terms of integrated verification flows. Specifically, where our verification techniques are best applied and how they complement each other as part of a complete flow.
At DAC, I had an opportunity to summarize some of these ideas in a 30min presentation called Building An Integrated Verification Flow, delivered in the Verification Academy booth. The audience at the conference was smallish, but the good news is that all the sessions were recorded, so you can see Building An Integrated Verification Flow posted on the Verification Academy site.
For backstory, here’s a list of the relevant posts since Feb…
Still more to come on this topic of integrated verification flows so stay tuned!
So. A couple of weeks ago I introduced SVMock. It’s a mocking framework for use with SVUnit that makes it easier to isolate, check and control the behaviour of SystemVerilog classes. Unsurprisingly, the response to that announcement averaged out to tepid. A few people were immediately interested. I’m sure a huge number of people didn’t care, probably because mocking has never been on their unit test radar. Then there were people in the middle who were interested, but I didn’t give them enough to get over the great-but-now-what hurdle.
This post is for that last group, the people who reckon SVMock can help them write better unit tests but don’t quite see how. The test subject to get the point across: the uvm_driver. Continue reading
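To sketch the shape of it before you click through, here’s a hand-rolled version of the idea, reusing the hypothetical bus_item from the sketch above (my_driver is also a made-up name, and this is my plumbing rather than SVMock’s actual macros, which generate this sort of thing for you): subclass the driver, override the task that touches the pins, and record the calls instead.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Hypothetical driver: run_phase pulls items and hands them to
    // drive_item(), which is where the pin wiggling would live.
    class my_driver extends uvm_driver #(bus_item);
      `uvm_component_utils(my_driver)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction

      virtual task run_phase(uvm_phase phase);
        forever begin
          seq_item_port.get_next_item(req);
          drive_item(req);
          seq_item_port.item_done();
        end
      endtask

      virtual task drive_item(bus_item item);
        // real implementation: toggle pins through a virtual interface
      endtask
    endclass

    // Mock subclass: swallow the pin-level call and record it so a unit
    // test can assert on what the driver tried to drive, no pins required.
    class mock_my_driver extends my_driver;
      `uvm_component_utils(mock_my_driver)
      int unsigned drive_item_calls;
      bus_item     driven[$];

      function new(string name, uvm_component parent); super.new(name, parent); endfunction

      virtual task drive_item(bus_item item);
        drive_item_calls++;
        driven.push_back(item);
      endtask
    endclass

A unit test can then push items through the driver and assert on drive_item_calls and the driven queue instead of peering at waveforms.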
A few times over the last while, people have suggested I add a mocking framework to SVUnit. It took me a while, but I finally got around to it. SVMock version 0.1 is now available on GitHub. I’m actively working on it, but the base functionality is there and ready for use.
If you’re new to mocking, in the context of unit testing it’s a technique for replacing dependencies of a unit-under-test to make it easier to test. Mocks increase isolation so you can focus on development without the hassle of including large chunks of code, infrastructure or IP you don’t necessarily care about. Mocks inject new sample points and assertions that make it easier to capture interactions with surrounding code. They also offer increased control by replacing potentially complex dependency usage models with direct control over interactions. In short, mocking helps you focus on what you care about and ignore the stuff you don’t.
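If a concrete picture helps, here’s a tiny hand-rolled mock with hypothetical names (SVMock automates this sort of pattern). A checker depends on an error_logger; the mock stands in for the real logger so a test can confirm the checker flagged an error without dragging in any real logging infrastructure:

    // Production dependency: the real thing formats and files error messages.
    class error_logger;
      virtual function void log_error(string msg);
        $display("[ERROR] %s", msg);
      endfunction
    endclass

    // Mock: same interface, but it just records interactions for the test.
    class mock_error_logger extends error_logger;
      string errors[$];
      virtual function void log_error(string msg);
        errors.push_back(msg);
      endfunction
    endclass

    // Unit under test, with the dependency injected so a mock can stand in.
    class packet_checker;
      error_logger logger;
      function new(error_logger logger); this.logger = logger; endfunction
      function void check(int len);
        if (len <= 0) logger.log_error("bad packet length");
      endfunction
    endclass

In a test you’d inject the mock, poke the checker with a bad packet length, then assert logger.errors.size() == 1; the real logger never gets built.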
Now to SVMock… Continue reading
Funny thing happened today.
After reaching out to people last week for SVUnit success stories, I was pleasantly surprised to find one in my inbox this morning. It was from SVUnit early adopter Manning Aalsma of Intel (formerly of Altera). When I say early adopter, I mean early. Looking back, I found the first email he ever sent me, requesting an early version of SVUnit, still in my inbox. The timestamp on that was Jan 8, 2012!
I’m happy to have people like Manning using and advocating for the framework and the techniques that go with it. Here’s what he had to say about how he and his teammates are using SVUnit.
Catching up on a little reading, I came across your post from a week ago or so:
I thought I’d share my experience so far. Continue reading
Is SVUnit a legit verification framework?
I get that question periodically from folks who are looking into incorporating SVUnit into their verification flow. Of course it’s always phrased a little differently depending on who’s asking – How many users are there? What is the bug rate? What teams have integrated it into their verification flow? Have people published papers about it? Is it actively being developed? Do others contribute to the development? – but the intent behind the question always feels the same. We developers want to know that others have blazed the trail before us, that the tools we’re considering have a proven track record, that the major bugs and issues are long since fixed and that the tools are truly ready before we get started with them.
Unfortunately, it’s a tough question to answer. Because SVUnit is open source and usage is basically anonymous, unless people reach out to me personally I can’t make any definitive claims.
That said, I’m confident we have enough anecdotal evidence for a solid yes. SVUnit is a legitimate test framework for design and verification engineers looking for an alternative that addresses low-level code quality.
Here are a few stats to support that yes: Continue reading
Verification engineers have a habit of over-engineering testbenches and infrastructure. We can all admit that, no? I can certainly admit it. From first-hand experience, I can also admit that the testbenches I’ve over-engineered had a lot of waste built into them: unnecessary features I forced people to use and features that were never used at all. And there’s no good reason for it other than that I like building new testbench features. Don’t think I’m alone there.
We’re overdue for a shift in verification. We need to de-emphasize the value of testbench and infrastructure development and re-emphasize the value of delivery. Here’s how we get started. Continue reading
Coverage hangover. That’s what sets in after we’ve hit all the easy stuff and moved on to targeting the more obscure corners of a coverage model. The pace slows. Progress slows. We have lots of review meetings to debate the merits of coverpoints, some of which we may not even understand. Through trial and error we plod along as best we can until someone says whatever we have is good enough (because 100% coverage is impossible). Then we shrug our shoulders, add a few exclusions, write up a few waivers, shake off the hangover and move on.
I’ve had coverage hangover several times. I’m sure we all have. With some devices – the really massive SoCs – there are verification engineers that live through coverage hangover for months at a time. Their only reprieve, if you can call it that, tends to be bug fixing. If they’re lucky, they’ll get to implement a new feature now and then. Otherwise, it’s a cycle of regression, analysis, tweak and repeat.
The worst part of a coverage hangover is that the next hangover is guaranteed to be worse because the next device is always bigger. At least that’s what happens with current verification strategies. I’d like to propose we break common practice with a reset. Regrettably, it won’t change the fact that coverage space will continue to grow. But it will give some relief for the next few generations while folks smarter than me find better ways to define, collect and analyze coverage.
When it comes to closing coverage, our focus and timing have always struck me as out of tune with the reality of the mission. I’m proposing we change that by breaking coverage into a series of steps that we can focus on independently, moving from one type to the next as features mature. Continue reading
In publishing Portable Stimulus And Integrated Verification Flows, where I came up with the graphic that plots different verification techniques as a function of design scope vs. abstraction, I gave myself an opportunity to think more critically about verification than I have in the past. I’ve been recognizing different patterns and habits from my experiences and getting a better feel for how they’ve helped or hindered teams I’ve worked with. My most recent navel-gazing has led to a new and improved view of how design and testbench code matures over time: a quick way to summarize quality that’s meaningful and useful.
This is the same diagram I published a few weeks ago but with different classifications overlaid. I call them the 5 steps of design maturity: broken, sane, functional, mature and usable. In my mind, every feature we create, every chunk of code we write, progresses through these 5 classifications on its way to delivery. Continue reading