I don’t normally need much motivation to try something new on AgileSoC.com. So at the first sign people were interested in the idea of adding a forum to help facilitate discussion between hardware and software developers, I went ahead with it. As of this morning, you can now post topics to AgileSoC forums.
To start, we have 3 areas for people to contribute. First is the general category. Got a question or comment for other AgileSoC.com followers? That’s where you’ll want to post it.
Next, we’ve got the Hardware Developer Seeking Agile Software Guidance and Mentoring forum. In case the title isn’t obvious enough, here’s what that’s all about…
And then the big one, Agile Software Developers With the Time And Patience to Help Out, where I sincerely hope we find some agile software folks courageous enough and patient enough to lend a hand to a hardware developer in need of a little inspiration.
There they are… the new AgileSoC forums. Let the discussion begin!
This may be a good idea… it also may be a terrible idea. Either way it’s worth a shot. You’ve heard of take-your-kid-to-work day? Well, this is kind of like that, except the kid is a hardware developer.
How many times have you sat through a conference talk or tutorial delivered by an AE from one of the big three EDA companies where the speaker announces a new version of their simulator, and the first follow-up question from the audience is: how much faster is this version? 2x… 10x… 100x… how much faster?
Of course, the response is always an increase in speed, but it’s also always hard to quantify. It comes with caveats (i.e. “this is what we’ve seen… you may see different”) and no one is ever quite sure what kind of code is being used in the benchmarking. I’ve never had a simulation runtime improve from 10 seconds to 1 second with a new simulator version – maybe other people have seen that, but I haven’t – so I’ve never paid much attention to the speedup numbers. Call me skeptical.
So the votes are in: 31 in the yes column and 2 in the no column. That means I’ve got 2 days to put together an abstract for DVCon. In case you’ve missed it, the title is How UVM Makes the Case for Unit Testing. It’s based on the work that a colleague and I did unit testing UVM-1.1d with the UVM-UTest project. I’ve got the abstract posted here. If you’re interested in following along as I put it together, you can take a look over here. That’s a Google Doc; I have the permissions set so that anyone can leave comments, so feel free to tune in and comment as I go.
If you saw last week’s post you’ll know that I’m pondering a DVCon proposal. As it stands, my proposal centers around unit testing in hardware development. Trouble is, unit testing on its own – never mind TDD, which is where I really want to go – has been a tough sell for hardware developers. Lucky for me, I’ve recently found that if I show off unit testing within a context people actually care about (i.e. UVM), suddenly they take an interest. Odd how that happens :). So that’s where I am: an abstract that describes the value of unit testing as we’ve applied it to UVM. It’s called How UVM-1.1d Makes the Case for Unit Testing. The proposal is not a sure thing yet. If you’re on board with it, you can help make it happen over here.
Assuming people like the abstract, I’m looking for an ice breaker or two that, in addition to what we’ve done with UVM, helps convey the value of unit testing. That’s taken me back again to Mentor’s 2012 Wilson Research Group Functional Verification Study and a twist on the time-wasted-debugging graphic that I love so much. Here’s a snapshot with the graphic and Harry Foster’s analysis from Mentor’s website. I’ve shown the graphic so many times already. Let’s focus on the analysis this time, shall we?
The DVCon call for papers hit my inbox last week. This time around, to possibly save myself the effort, I figured it might be more productive to let other people decide whether or not mine is a topic they’d like to see at DVCon. I’ve got a draft title and abstract so far…
How UVM-1.1d Makes the Case For Unit Testing
In six weeks, two verification engineers discovered 10 defects in the UVM-1.1d release. The defects were not lurking in the shadows of complex corner-case scenarios; it didn’t take hours of constrained-random stimulus to discover them; nor was there any need for functional coverage with intelligent feedback. These were simple, obvious defects discovered with an equally simple and obvious technique. It’s called unit testing, and it comes from software development, where the value placed on code quality and maintainability is higher than what we aspire to in hardware.
UVM-UTest is an open-source project with the goal of demonstrating how unit testing can be used to bring software-level quality and maintainability to design and testbench code. This paper documents the development of UVM-UTest: how it was conceived, statistics and style of the unit tests written, its success in terms of defects found in UVM-1.1d, the lessons learned, and recommendations for applying unit testing beyond the UVM. The paper concludes by introducing a new UVM component, the uvm_boat_anchor, that further demonstrates the perils of continuing to ignore the value of this basic yet entirely necessary technique.
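For readers who haven’t seen unit testing before, here’s a minimal sketch of the flavor of the technique in plain Python – this is not UVM-UTest code, and the `AddrDecoder` class is a hypothetical unit invented for illustration. Each test exercises one small behavior of one unit, and boundary-focused checks like these are exactly how the simple, obvious off-by-one defects described in the abstract get caught immediately.

```python
import unittest

# Hypothetical unit under test: a tiny address decoder.
# (Invented for illustration; not part of UVM or UVM-UTest.)
class AddrDecoder:
    def __init__(self, base, size):
        self.base = base
        self.size = size

    def hit(self, addr):
        # Addresses in [base, base + size) are a hit.
        return self.base <= addr < self.base + self.size

class TestAddrDecoder(unittest.TestCase):
    def setUp(self):
        # Fresh unit for every test: 256-byte window at 0x1000.
        self.dec = AddrDecoder(base=0x1000, size=0x100)

    def test_first_address_hits(self):
        self.assertTrue(self.dec.hit(0x1000))

    def test_last_address_hits(self):
        # The classic off-by-one boundary: last valid address.
        self.assertTrue(self.dec.hit(0x10FF))

    def test_address_past_range_misses(self):
        # One past the end must miss.
        self.assertFalse(self.dec.hit(0x1100))
```

Run with `python -m unittest` and each test passes or fails independently, in milliseconds – no stimulus generation, no coverage closure, just a direct check of one behavior at a time.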
I’ve submitted proposals twice in the last few years. Neither was accepted. Agile development and test-driven development – the topics I’ve submitted – aren’t your average mainstream, audience-grabbing topics, so that wasn’t overly surprising. Maybe this proposal is different. Or maybe it’s not. Either way, I’m leaving it for you to decide.
If there’s enough interest, I’ll submit it and see what happens. If you really like the abstract, share it with your colleagues so they can vote, too. Or if you hate it, you may as well share it anyway. I don’t mind people seeing my incredibly terrible ideas; I’ll just entertain myself with them.
Proposals are due Aug 27th, so share the link and get your votes in ASAP!