What’s A Verification Engineer?

I’ve been a verification engineer for a little over 20 years. None of the people around me know what that means. If your friend or significant other is a verification engineer, you might not know what it means either.

I gave up trying to explain “verification engineer” to my parents long ago; they can barely remember the companies I’ve worked for, never mind what I did there. I never really tried with my kids. They see me as a weird basement hacker… which is wrong, but that would at least be easier to explain. My wife says I “work with computers”, but only to justify asking me all her computer-related questions. I tell friends I work in semiconductors. When they look at me, confused, I tell them I help build computer chips. If that’s still confusing, I change the subject.

I’m bad at explaining “verification engineer”. Unless it’s just me, I’d bet your significant other kind of sucks at it, too. It’s hard to give up on, though, because verification is an interesting job; also frustrating; sometimes fun; usually satisfying; kind of love/hate… or maybe like/dislike. It’s a job I wish I didn’t have, yet I can’t think of anything I’d rather be doing.

Anyway, considering I enjoy being a verification engineer about 95% of the time, it’d be nice for my family to at least kind of get it. So I thought I’d take another shot at explaining it to them (and you, if you’re interested).

Hopefully this helps… Continue reading

Why Simulation Code Coverage and Formal Code Coverage are the Same

For a seemingly straightforward technology that’s almost as old as dirt, the concept of code coverage is still kind of confusing. Ask people what code coverage means, or better, what value it brings to FPGA/ASIC development, and you’ll get answers going in a lot of different directions. When it comes to factoring code coverage results into sign-off decisions, opinions start at useless and run all the way up to fundamental. And the reasoning for why code coverage is both useless and fundamental comes in several shades of grey. It starts to feel like one of those things we all assume we understand… yet it’s rare to find someone who can explain it in a way we all agree with.

Admittedly, I expect this post to be a tad controversial so today will not be the day for consensus 🙂. Continue reading

Last Day @ SVUnit

At first I thought this would be sad. Then I wasn’t sad. Now I am a bit sad… but happy. Also excited. After waffling for about a year, I’ve finally decided to pass the SVUnit torch. Tudor Timisescu will replace me as lead developer.

Looking back, the time I put into developing and supporting SVUnit was the best career move I’ve made. Easily. By about a mile. The friends I’ve made throughout the industry, many of the clients I worked with as a consultant and the job I have now were all made possible by working on SVUnit: the Wednesday nights I put in at the coffee shop down the hill, cranking out code, examples, presentations and blog posts through a steady stream of Pantera, System of a Down and Metallica. Funny how a laptop and loud music became my peace and quiet the way it did. But it did. And it lasted about seven and a half years, from before the v0.1 release on Nov 22, 2011, when I took the hand-off from Bryan Morris, right up to mid-summer 2018. That’s when I found myself mentally winding down from the development and support; still a user, but I’d lost a bit of the motivation for helping SVUnit roll forward.

Sadness comes from walking away. But the excitement comes from the fact that Tudor is moving in with a lot of the motivation I had in the early days – probably more, really. He’s been a long-time user I’ve heard from and chatted with since Dec 2, 2013. He’s got a lot of good ideas for how the framework can evolve and the gumption to make it happen, not only with SVUnit but also with SVMock.

So finally, after a year in the making, please welcome Tudor Timisescu! To everyone who’s used, supported and/or advocated for SVUnit over the last 9 years, please join me in continuing to use, support and/or advocate with Tudor at the helm! Follow him on his blog, VerificationGentleman.com, for more.

-neil

Better Luck Next Time

I know. It’s been a while. I’ve been dealing with something, something big, and it’s taken me a while to come to terms with it. But after several months, I’m finally in a place where I can talk about it.

I have something incredibly disappointing to report. It crushes me to be the one to put it out there but it’s the right thing to do. We can’t hide from it or pretend it didn’t happen. Facing it head on is the only way. It’s no one’s fault. None of us could have foreseen it. Personally, I was blindsided by it. We were destined for a glorious milestone we could rally around; celebrate, even. A new tomorrow of off-road adventures; days spent blazing through untamed wilderness on TRD-tuned suspension; sunshine on the horizon, dust in our wake.

Alas. It wasn’t meant to be.

A couple of years ago, the 2016 Wilson Research Group Verification Study – commissioned by Mentor and, of course, led by Harry Foster – confirmed a clear trend that goes back to at least 2010: the amount of time verification engineers spend doing debug has been steadily increasing for the better part of a decade.

This trend is troubling. To me it’s become an indication of a fundamental flaw in how we approach verification. Despite our advancements over the same period, bugs are chewing up a growing portion of our time. Not good.

But as troubling as it is, in 2016 I recognized the trend wasn’t all bad because it presented verification engineers with an incredible opportunity.

Through a simple conversion of those percentages into the cost of debug as salaries paid, I saw that we were steadily approaching an amount equal to the cost of a brand new vehicle; specifically, the cost of a Toyota 4Runner TRD PRO (with the roof rack and 17-in alloy wheel packages). In 2010, teams were budgeting 77% of the cost of a new Toyota 4Runner on debug. By 2016 – the time of my epiphany – we were at 94%.

Coincidentally, when you extrapolate a data point for 2018 using the average increase over the previous 6 years (77% to 94% is roughly 2.8 points per year, so two more years gets you there), you get 1.00. That’s 100% of the purchase price of a Toyota 4Runner (with the roof rack and 17-in alloy wheel packages) budgeted every year for every verification engineer.

100% is a big number.

To the opportunity… with so much money spent on debug I think it’s in a company’s best interest to incentivise verification engineers to reduce the expense. Further, I think a special award should go to verification engineers able to reduce their debug time to 0. That special award could be, for example, a Toyota 4Runner TRD PRO (with the roof rack and 17-in alloy wheel packages).

For those of you thinking this sounds like crazy talk, think again about what you’re getting: a dramatic improvement in product quality with a likely reduction in development time from engineers who are much happier, all at no extra cost.

Not so crazy anymore… except here comes the disappointment.

Before you book a seat on the bandwagon I’ll remind you that I predicted this based on the 2016 data. Fast-forward to the 2018 data and sadly it all falls apart.

Turns out my prediction for 2018 was a little too optimistic. Published earlier this year, the actual 2018 data show that while we’ve done our part – verification teams now budget 44% of their time on debug – Toyota also increased their MSRP. Combined, those increases take us to only 97.2% of a Toyota 4Runner. In other words, we’re $1,363 worth of debug short. Even if we remove the alloy wheel package (the roof rack comes standard on the 2019 TRD PRO), we’re still $113 short. In summary: no 4Runner.

I know. It hurts.

Let’s all hope the 2020 verification survey brings better news.

-neil

The Tool That Was And Wasn’t

Glenn and his wife Libby love spending time in their backyard garden. It’s their warm-weather obsession, stretching fence to fence to fence.

This was a good year for the garden with perfect growing conditions through the summer months. Libby had the back half overflowing with onions, squash, beets, carrots, a sprawling pumpkin vine and several hills of potatoes. A beanstalk wound its way up a long wooden trellis on the left; peas climbed another to the right. For Glenn, closer to the house, it was all about radishes. Glenn loves radishes. He plants rows and rows of them every year. He spends his days with his radishes. Fresh out of the ground, Glenn can’t get enough of them.

Glenn and Libby were tending their garden on a sunny Monday afternoon when a man in a golf shirt and khakis came bounding through their yard. The man’s eyes lit up upon seeing Glenn in the midst of his radishes. He leapt right up to Glenn, introduced himself with a firm handshake and started preaching the benefits of a new gardening tool called a tiblosherater. He claimed it would thoroughly revolutionize the way people tiblosherate radishes. Continue reading

A Portable Stimulus Public Service Announcement

As we enter the age of standardized portable stimulus through tool support for PSS 1.0, verification teams will undoubtedly feel the pressure to move into this latest, greatest verification technology. I can certainly feel it. Being new to the tooling, and with the publicity building to a crescendo, I’ve been increasingly curious for more information. I assume I’m not the only one.

As such, it feels like the right time to consider a few obvious entry points and ponder the cost and value of jumping in. Given the possibilities, there’s no doubt that how teams invest in portable stimulus, and what they get in return, will vary substantially. Continue reading

A Love/Loveless Relationship With Documentation

Try announcing documentation isn’t important in semiconductor development. You’ll be outed as a heretic! Ask people to write it and you’ll hear a sigh. Ask for an example of decent, truly useful documentation and people struggle to find it. To put it mildly, in semiconductor development we’ve got a love/loveless relationship with documentation. We all agree it’s critical but we don’t like writing it. That and we kind of suck at it anyway.

But we can change that. By putting more thought into who we’re creating documentation for, and with a few friendly reminders around how we create it, we can end up with documentation we’re proud of. Continue reading

Turning Documentation Into (Unit Testing) Action

A hypothetical for design engineers… what if there were an online tool useful for both documenting your RTL and bootstrapping a testbench? Would you use it?

The tool is Wavedrom. It’s an open source tool hosted at wavedrom.com. You may already use it for documentation. I’ve used it in the past for documenting BFM behaviour. It’s accessible, easy to use and the output is clear. Highly recommended.

If you haven’t seen Wavedrom before you should load it up to see what it can do. By default, it comes up with a simple req/ack data transfer to illustrate the basics. The input is JSON, which is pretty easy to work with. Output can be exported as PNG or SVG.
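To give a sense of the format, here’s a minimal sketch in the spirit of that req/ack example. The signal names and wave strings below are my own for illustration, not the tool’s exact default:

```json
{ "signal": [
  { "name": "clk",  "wave": "p......." },
  { "name": "req",  "wave": "0.1...0." },
  { "name": "ack",  "wave": "0..1...0" },
  { "name": "data", "wave": "x.2....x", "data": ["payload"] }
]}
```

Each character in a wave string is one clock period: 0/1 for levels, a dot to hold the previous value, x for don’t care and numbered characters for labelled data.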

If you want to try something from scratch to see what it’d look like, you can paste in this APB write transaction… Continue reading

Unit Testing UVM Sequences

Next to unit testing UVM drivers, which was the topic of Testing UVM Drivers (Without The Sequencer), the second most popular question I’ve had no good answer for has been “How can I use SVUnit to test my UVM sequences?”

Thankfully, SVMock seems to have made life easier here as well. With SVMock we can isolate a sequence from the sequencer and driver so that it can be unit tested on its own before it’s used as part of a larger system. Here’s an example of how it works. Continue reading
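The full example is behind the link, but to give a rough idea of the shape it takes, here’s a bare-bones sketch of the standard SVUnit template wrapped around a hypothetical sequence. The SVMock-based isolation the post describes is left out, and my_seq with its len field is an assumption purely for illustration:

```systemverilog
// Minimal sketch only: the standard SVUnit template around a hypothetical
// my_seq; the SVMock-based isolation from the post is intentionally omitted.
`include "svunit_defines.svh"

module my_seq_unit_test;
  import svunit_pkg::svunit_testcase;
  import uvm_pkg::*;

  string name = "my_seq_ut";
  svunit_testcase svunit_ut;

  my_seq seq;  // hypothetical uvm_sequence under test

  function void build();
    svunit_ut = new(name);
  endfunction

  task setup();
    svunit_ut.setup();
    seq = my_seq::type_id::create("seq");
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN

    // check constraints/defaults with no sequencer or driver in sight
    `SVTEST(randomize_gives_legal_len)
      `FAIL_UNLESS(seq.randomize());
      `FAIL_IF(seq.len == 0);  // assumes my_seq declares a rand 'len' field
    `SVTEST_END

  `SVUNIT_TESTS_END
endmodule
```

Where the sequence needs to interact with a sequencer or driver, that’s where the mocks described in the post come in.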

Building An Integrated Verification Flow

If you’ve been following my blog since DVCon earlier this year, you’ll have noticed that the introduction of portable stimulus has me thinking more in terms of integrated verification flows. Specifically, where our verification techniques are best applied and how they complement each other as part of a complete flow.

At DAC, I had an opportunity to summarize some of these ideas in a 30-minute presentation called Building An Integrated Verification Flow, delivered in the Verification Academy booth. The audience at the conference was smallish, but the good news is that all the sessions were recorded, so you can see Building An Integrated Verification Flow posted on the Verification Academy site.

For backstory, here’s a list of the relevant posts since Feb…

Still more to come on this topic of integrated verification flows so stay tuned!

-neil