I’ve been a verification engineer for a little over 20 years. None of the people around me know what that means. If your friend or significant other is a verification engineer, you might not know what it means either.
I gave up trying to explain “verification engineer” to my parents long ago; they can barely remember the companies I’ve worked for, never mind what I did there. Never really tried with my kids. They see me as a weird basement hacker… which is wrong, but that’d at least be easier to explain. My wife says I “work with computers”, but only to justify asking me all her computer-related questions. I tell friends I work in semiconductors. When they look at me confused I tell them I help build computer chips. If that’s still confusing I change the subject.
I’m bad at explaining “verification engineer”. Unless it’s just me, I’d bet your significant other kind of sucks at it, too. It’s hard to give up on it though because verification is an interesting job; also frustrating; sometimes fun; usually satisfying; kind of love/hate… or maybe like/dislike. It’s a job I wish I didn’t have, but can’t think of anything I’d rather be doing.
Anyway, considering I enjoy being a verification engineer about 95% of the time, it’d be nice for my family to at least kind of get it. So I thought I’d take another shot at explaining it to them (and you, if you’re interested).
I know. It’s been a while. I’ve been dealing with something, something big, and it’s taken me a while to come to terms. But after several months, I’m finally in a place where I can talk about it.
I have something incredibly disappointing to report. It crushes me to be the one to put it out there but it’s the right thing to do. We can’t hide from it or pretend it didn’t happen. Facing it head on is the only way. It’s no one’s fault. None of us could have foreseen it. Personally, I was blindsided by it. We were destined for a glorious milestone we could rally around; celebrate even. A new tomorrow of off-road adventures; days spent blazing through untamed wilderness on TRD tuned suspension; sunshine on the horizon, dust in our wake.
Alas. It wasn’t meant to be.
A couple years ago, the 2016 Wilson Research Group Verification Study – commissioned by Mentor and, of course, led by Harry Foster – confirmed a clear trend that goes back to at least 2010: the amount of time verification engineers spend on debug has been steadily increasing for the better part of a decade.
This trend is troubling. To me it’s become an indication of a fundamental flaw in how we approach verification. Despite our advancements over the same period, bugs are chewing up a growing portion of our time. Not good.
But as troubling as it is, in 2016 I recognized the trend wasn’t all bad because it presented verification engineers with an opportunity.

Through a simple conversion of those percentages into the cost of debug as salaries paid, I saw that we were steadily approaching an amount equal to the cost of a brand new vehicle; specifically, the cost of a Toyota 4Runner TRD PRO (with the roof rack and 17-in alloy wheel packages). In 2010, teams were budgeting 77% of the cost of a new Toyota 4Runner on debug. By 2016 – the time of my epiphany – we were at 94%.
Coincidentally, when you extrapolate a data point for 2018 using the average increase over the previous 6 years, you get 1.00. That’s 100% of the purchase price of a Toyota 4Runner (with the roof rack and 17-in alloy wheel packages) budgeted every year for every verification engineer.

100% is a big number.
To the opportunity… with so much money spent on debug, I think it’s in a company’s best interest to incentivise verification engineers to reduce the expense. Further, I think a special award should go to verification engineers able to reduce their debug time to 0. That special award could be, for example, a Toyota 4Runner TRD PRO (with the roof rack and 17-in alloy wheel packages).
For those of you thinking this sounds like crazy talk, think again about what you’re getting: a dramatic improvement in product quality with a likely reduction in development time from engineers that are much happier, all at no extra cost.

Not so crazy anymore… except here comes the disappointment.
Before you book a seat on the bandwagon, I’ll remind you that I predicted this based on the 2016 data. Fast-forward to 2018 and sadly it all falls apart.
Turns out my prediction for 2018 was a little too optimistic. Published earlier this year, the actual 2018 data show that while we’ve done our part – verification teams now budget 44% of their time on debug – Toyota also increased their MSRP. Combined, those increases take us to only 97.2% of a Toyota 4Runner. In other words, we’re $1,363 worth of debug short. Even if we remove the alloy wheel package (the roof rack comes standard on the 2019 TRD PRO), we’re still $113 short. In summary: no 4Runner.
I know. It hurts.
Let’s all hope the 2020 verification survey brings better news.
As we enter the age of standardized portable stimulus through tool support for PSS 1.0, verification teams will undoubtedly feel the pressure to move into this latest, greatest verification technology. I can certainly feel it. And being new to the tooling, along with the crescendo in publicity, I’ve been increasingly eager for more information. I assume I’m not the only one.
As such, it feels like the right time to consider a few obvious entry points and ponder the cost and value of jumping in. Given the possibilities, there’s no doubt that how teams invest in portable stimulus, and what they get in return, will vary substantially.
A hypothetical for design engineers… what if there were an online tool useful for both documenting your RTL and bootstrapping a testbench. Would you use it?
The tool is Wavedrom. It’s an open source tool hosted at wavedrom.com. You may already use it for documentation. I’ve used it in the past for documenting BFM behaviour. It’s accessible, easy to use and the output is clear. Highly recommended.
If you haven’t seen Wavedrom before you should load it up to see what it can do. By default, it comes up with a simple req/ack data transfer to illustrate the basics. The input is JSON, which is pretty easy to work with. Output can be exported as PNG or SVG.
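To give you a feel for the input before you load the page, the JSON for a simple req/ack handshake looks something like this (a sketch; the signal names are illustrative, not the tool’s exact default):

```json
{ "signal": [
  { "name": "clk",  "wave": "p......" },
  { "name": "req",  "wave": "0.1..0." },
  { "name": "ack",  "wave": "0..1.0." },
  { "name": "data", "wave": "x.=..x.", "data": ["D0"] }
]}
```

Each character in a `wave` string is one clock period: `p` for a clock, `0`/`1` for levels, `.` to extend the previous value, `x` for unknown and `=` for a labeled data value.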
If you want to try something from scratch to see what it’d look like, you can paste in this APB write transaction…
Next to unit testing UVM drivers – the topic of Testing UVM Drivers (Without The Sequencer) – the second most popular question for which I had no good answer has been “How can I use SVUnit to test my UVM sequences?”.
Thankfully, SVMock seems to have made life easier here as well. With SVMock we can isolate a sequence from the sequencer and driver such that it can be unit tested on its own before it’s used as part of a larger system. Here’s an example of how it works.
If you’ve been following my blog since DVCon earlier this year, you’ll have noticed that the introduction of portable stimulus has me thinking more in terms of integrated verification flows. Specifically, where our verification techniques are best applied and how they complement each other as part of a complete flow.
So. A couple weeks ago I introduced SVMock. It’s a mocking framework for use with SVUnit that makes it easier to isolate, check and control behaviour of Systemverilog classes. Unsurprisingly, the response to that announcement averaged out to tepid. A few people were immediately interested. I’m sure a huge number of people didn’t care, probably because mocking has never been on their unit test radar. Then there were people in the middle who were interested but I didn’t give them enough to get over the great-but-now-what hurdle.
This post is for the last group, the people who reckon SVMock can help them write better unit tests but don’t quite see how. The test subject to get the point across: the uvm_driver.
A few times over the last while, people have suggested I add a mocking framework to SVUnit. Took me a while, but I finally got around to it. SVMock version 0.1 is now available on GitHub. I’m actively working on it, but the base functionality is there and ready for use.
If you’re new to mocking, in the context of unit testing it’s a technique for replacing dependencies of a unit-under-test to make it easier to test. Mocks increase isolation so you can focus on development without the hassle of including large chunks of code, infrastructure or IP you don’t necessarily care about. Mocks inject new sample points and assertions that make it easier to capture interactions with surrounding code. They also offer increased control by replacing potentially complex dependency usage models with direct control over interactions. In short, mocking helps you focus on what you care about and ignore the stuff you don’t.
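The idea isn’t specific to SystemVerilog. For readers who know Python, here’s the same concept sketched with Python’s built-in unittest.mock (this is the general technique, not SVMock syntax; the packet sender and driver are made up for illustration):

```python
from unittest.mock import Mock

# Unit under test: a packet sender that depends on a bus driver.
def send_packet(driver, payload):
    driver.write(payload)    # interaction we want to observe
    return driver.status()   # dependency behaviour we want to control

# Replace the real driver with a mock: no bus model, no waveforms.
driver = Mock()
driver.status.return_value = "OK"  # direct control over the dependency

result = send_packet(driver, 0xAB)

# New sample point: assert the interaction happened as expected.
driver.write.assert_called_once_with(0xAB)
print(result)  # → OK
```

The mock stands in for the whole driver, so the test exercises only the sender’s logic; that isolation is exactly what SVMock is meant to bring to SystemVerilog classes.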
After reaching out to people last week for SVUnit success stories, to my pleasant surprise I found one in my inbox this morning. It was from SVUnit early adopter Manning Aalsma of Intel (formerly of Altera). When I say early adopter, I mean early. Looking back, I found the first email he ever sent me still in my inbox requesting an early version of SVUnit. Timestamp on that was Jan 8, 2012!
I’m happy to have people like Manning using and advocating for the framework and the techniques that go with it. Here’s what he had to say about how he and his teammates use SVUnit.
Catching up on a little reading, I came across your post from a week ago or so:
I get that question periodically from folks who are looking into incorporating SVUnit into their verification flow. Of course it’s always phrased a little differently depending on who’s asking – How many users are there? What is the bug rate? What teams have integrated it into their verification flow? Have people published papers about it? Is it actively being developed? Do others contribute to the development? – but the intent behind the question always feels the same. We developers want to know that others have blazed the trail before us, that the tools we’re considering have a proven track record, that the major bugs and issues are long since fixed and that tools are truly ready before we get started with them.
Unfortunately, it’s a tough question to answer. Since SVUnit is open source and usage is basically anonymous, unless people reach out to me personally I can’t make any definitive claims.
That said, I’m confident we have enough anecdotal evidence for a solid yes. SVUnit is a legitimate test framework for design and verification engineers looking for an alternative that addresses low-level code quality.