The First Step is Acceptance (Hardware Verification is Broken)

A couple of weeks ago, I had the chance to do a lunch-n-learn seminar for about 20 verification engineers in Mountain View. It was an hour-long talk about an incremental approach to functional verification, one I've given about half a dozen times to various teams.

I like giving this talk because I find people, on the whole, are surprisingly receptive to the ideas. There's also been some skepticism, though, that I've seen more than once. Seeing as it's still fresh in my mind, I figured now would be a good time to pass some of that skepticism on as food for thought.

TDD Applied To Testbench Development

When we were writing about TDD back in November 2011 during our TDD month, admittedly I had very little experience with it. The goal with TDD month was to spread the word and drum up a little interest in a technique that the software folks have been using successfully for years. I'd used it on a small scale but lacked the experience to back up a lot of what I was writing.

Over the last few months, though, that's changed. I've spent a good amount of time collecting feedback from and supporting SVUnit early adopters. That's been good for getting a feel for how others in hardware development see TDD. Just as important, I've been using TDD myself to build testbenches and it's been going well… very, very well. I'm at the point where I'm going through a pretty repeatable cycle that I think could be useful for others contemplating TDD. I've talked about this cycle before, but it's worth revisiting now that I've been through it.
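To make that cycle concrete, here's a minimal sketch of what a first SVUnit test might look like. The unit under test (a trivial combinational alu), its ports and the expected behavior are all hypothetical; the module shape follows the standard SVUnit template: write a failing test between the `SVTEST macros, add just enough RTL to make it pass, then refactor and repeat.

```systemverilog
`include "svunit_defines.svh"

module alu_unit_test;
  import svunit_pkg::svunit_testcase;

  string name = "alu_ut";
  svunit_testcase svunit_ut;

  // hypothetical unit under test: a trivial combinational adder
  logic [7:0] a, b;
  logic [7:0] result;
  alu uut (.a(a), .b(b), .result(result));

  // build the testcase object
  function void build();
    svunit_ut = new(name);
  endfunction

  // per-test setup/teardown hooks
  task setup();
    svunit_ut.setup();
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN

    // the cycle starts here: watch this fail, then make it pass
    `SVTEST(add_two_numbers)
      a = 8'd2;
      b = 8'd3;
      #1;  // let combinational logic settle
      `FAIL_UNLESS(result === 8'd5)
    `SVTEST_END

  `SVUNIT_TESTS_END

endmodule
```

Running that through the SVUnit scripts gives a per-test pass/fail report, which is what closes the red-green-refactor loop.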

Guest Blog: I Tried It (Operation Basic Sanity) And I Liked It!

We’re happy to have another guest contributor to the AgileSoC blog. This one is special because it’s a case study on an exercise I’ve written about before and presented as part of my talk at Agile2011. I’ve called it The Steel Thread Challenge to reach the software crowd and Operation Basic Sanity to reach the hardware crowd. These are two different names for achieving one goal: finding ways to help hardware developers think in terms of early functionality and tangible milestones.

Catherine Louis, a long-time agile coach and trainer, has tried the exercise on more than one occasion with SoC development teams. I’m happy to report she’s had some success with it and has agreed to take us through what she’s done (Note: I like the way she uses the term “tangible outcome”).

With that, I’ll turn it over to Catherine. Thanks again!

I’ve had some great success using Neil’s Basic Sanity presentation (Agile2011, available here) as a retrospective exercise for hardware teams interested in learning about adaptive hardware development (a.k.a. “agile”).

If you haven’t tried it yet, it’s time to give it a whirl.


UVM Still Isn’t A Methodology

A few months ago, I posted an article on AgileSoC.com titled UVM Is Not A Methodology. The point of that article was to encourage people to break away from the idea that verification frameworks like UVM truly deserve the label ‘methodology’.

In the article, I argue that to be called a methodology, UVM would require a number of other considerations that go beyond the standardized framework and take into account the people using it (for contrast with the framework itself, see the sketch after this list):

  • A Sustained Training Strategy
  • Mentoring Of New Teammates
  • Regular Review Cycles
  • Early Design Integration
  • Early Model Integration
  • Incremental Development, Testing and Coverage Collection
  • Organization Specific Refinement
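To be clear about what the framework piece actually is, here's a minimal sketch of a UVM test, following standard UVM usage (the test name and log message are hypothetical). Base classes, factory registration and phasing are what UVM standardizes; everything in the list above is left to the team.

```systemverilog
`include "uvm_macros.svh"
import uvm_pkg::*;

// What the standardized framework provides: base classes, factory
// registration and phasing. Everything in the list above remains
// the team's responsibility.
class basic_test extends uvm_test;
  `uvm_component_utils(basic_test)  // factory registration

  function new(string name = "basic_test", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    phase.raise_objection(this);
    `uvm_info(get_type_name(), "a standardized starting point, not a methodology", UVM_LOW)
    phase.drop_objection(this);
  endtask
endclass
```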

That article generated a lot of interest and a few comments. I got some compliments from some people and some minor disagreement from a few others. I’ve summarized a few below.

Neil, you’ve outdone yourself. Excellent article IMO.

We’ll start with a good one :). I got that comment from a friend, a real expert in functional verification and a person I learned a lot from when I decided to specialize in the field. I really appreciated that.

Neil, UVM _is_ a methodology, but not in the same sense as you have described in your blog post. It is a methodology that provides a framework for designing and building testbenches… You are correct that it is not a complete verification methodology covering all the things you list. All of those things certainly have to be addressed by each verification team, but can they be standardized? Training, mentoring, and reviews are essential to a good verification methodology, but it would be difficult to provide meaningful standards beyond what is already in the software engineering literature. (read the unedited comment here)

That isn’t total agreement, but I took the comment as an acknowledgement nonetheless that in order to be complete, a methodology has to go beyond the framework to include some of the things I identified. The fact that they can’t be standardized is a good reminder that teams using frameworks like UVM need to take care of the extra stuff themselves. No one can do that for them.

The article makes some valid points but falls into the same trap that I think a lot of people do; i.e. getting confused between the UVM code library and UVM the methodology … My personal view … is that the customer teams who insist on self-learning the eRM/URM/OVM/UVM library or the (e, SC, or SV) language and won’t ask for training or guidance are the ones who end up getting the least from the methodology…These same users achieve far less reuse and randomisation than they should be getting, and the effectiveness of their testing is much lower than it should be. Where customers have invited me in to help architect their flow … we’ve created some really nice modular environments that are reusable, maintainable and adaptable. These customer teams have become more productive and have then propagated that methodology experience to other teams in their company, much as the article describes. This is the real UVM, where code meets experience … I’m saying this not to boast about…my own abilities, but to stress the point that getting all fired up about a code library isn’t going to make you more productive or effective, there’s a whole lot of hard work and care goes into really building a good UVM testbench. (read the unedited comment here).

Again, that comment isn’t total agreement but it does highlight the fact that UVM is larger than a code library and that mentoring plays a big part in how effective teams can be when using it.

The last paragraph of that article states it properly. It is a framework you can build a methodology around, but it is not a methodology in and of itself. The fact that it took the author 1600 words to state that simple fact indicates the article is mainly marketing fluff from a consulting company trying to scare managers. Much like calling UVM a methodology is EDA marketing fluff to convince managers it is more than just a framework. (read the unedited comment here)

That one was my favorite! Complete agreement followed up with the accusation of fear mongering!

Always nice to see people taking the time to comment 🙂

neil

Q. What do you call UVM? Methodology? Framework? Something else?

Remote Developers And The Feature-of-the-week

In all the discussions I’ve had regarding agile, the presentations I’ve seen, the articles I’ve read, etc., the most valuable thing I’ve heard so far has been what I’ve been calling the feature-of-the-week.

The feature-of-the-week was something I picked up at the very first APLN (Agile Project Leadership Network) group meeting I attended here in Calgary. The presentation was given by a fellow named Jonathan Rasmusson who is a Calgary-based agile coach and software developer (he’s got a blog here). Jonathan was giving an introductory level presentation to the group and talked quite a bit about how to get started. The best advice he had (which is also the best I’ve heard so far) was “don’t get caught up in trying to define agile for a team as part of some big master plan or organizational overhaul… just tell someone you’re going to deliver something that works in a week, then do it. That’s agile”.

I’m not sure if Jonathan actually said feature-of-the-week that day or if I imagined it afterward. If he did say it, I don’t know if he’s the one that actually coined the term, but it doesn’t matter. I credit him for the simplest yet most useful thing I’ve heard so far regarding agile methods in hardware development.

That was an important day for me because I do most of my work with clients remotely, which means I work as part of a development team but instead of being on the other side of the cube wall, I’m on the other side of a phone call. Remote development is not an ideal arrangement but it can work quite well. The communication barriers are obviously higher and I’ve seen that make things difficult. I’ve come to wonder, though, whether it’s the distance between remote developers that makes things difficult or the way we work and how we measure progress that presents the real difficulties. It was my first time attempting the feature-of-the-week that got me wondering.

In case I haven’t made it obvious, the feature-of-the-week entails writing, testing and releasing code for some feature in a week or less. Not just writing or testing some chunk of code; writing and testing a feature, then releasing it to the rest of the team, all in a week or less! A week was a fast turnaround for me since I was used to writing a few weeks’ worth of code, then coming back to see if it worked.

The first time I tried this with a client, it was obvious that I was committing to a focused and intense approach to development. I was onsite for a week getting acquainted with the project and while onsite, I committed to a delivery from home the following Friday. Granted, it was a pretty small delivery but I didn’t realize how hard it would be until I sat down Monday and glanced at the calendar. Friday was only 5 days away and I had a lot of work to do.

I’ll spare you all the details, but in a nutshell what started as a series of daunting delivery milestones ended up being the best thing I’ve done as a verification engineer. With the ability to demonstrate my work every 5-10 days (yes… the feature-of-the-week ended up being the feature-of-the-week-or-2) I could show that either:

a) I knew exactly what I was doing; or

b) I was out in left field and in need of help.

Demonstrating – as opposed to telling – is key and the purpose of that wasn’t to prove I was actually working on the other end of the VPN, it was to close a tight feedback loop with the designer. I could ask a question about a feature, code the answer immediately, run it and ask for confirmation soon after (i.e. “take a look at this sim… is that what you meant Tuesday when you said <blah blah blah>?”).

That tight feedback loop kept me on track. I’ll admit to having a few misses, but I was able to recover quickly every time. To understand how the quick recovery was possible, try debugging 5 days’ worth of code and then 2 months’ worth of code. Which is easier?

From that experience, these are the lessons I learned:

  • Describing “lines/classes/modules/tests written” in a status report will never be as reliable as “code done and demonstrated”
  • Short tasks are easier to estimate and complete on time than long tasks
  • 1 week screw-ups are easier to fix than 2 month screw-ups
  • As a remote developer, regularly releasing code to the rest of the team is the only way to show them you know what you’re doing.

The last lesson was most important. I’m now convinced that perceived productivity limitations of remote developers aren’t in fact caused by distance alone. They are a product of the way people work together and how they measure progress. Simply put, the feature-of-the-week with its weekly deliveries, tight feedback loop and increased transparency made me more productive as a remote contributor (if you want more details, you can tune into an open discussion on our AgileSoC LinkedIn group).

A valuable lesson from a very simple idea: telling someone you’re going to deliver something in a week… and then actually doing it!

neil

Q. How long do you write code before you test and release it?

Q. How do you communicate progress as a remote developer? Status reports or passing tests?

Agile Hardware Starts As A Steel Thread

For me, a key to agile is incremental development. Most software developers reading this will probably say “duh… no kidding” but it’s a new concept to hardware folks.

If you’re new to agile, incremental development is something I’ve talked about in several articles on AgileSoC.com. In a nutshell, it’s where product features are designed, implemented, tested and ready to go in small batches. Products start small but functional, then grow in scope until they are delivered (they can also be delivered *as* they grow but I’m not going there today).

Because most hardware teams are used to developing products as big bang feature sets, incremental development can be a big step. To help teams get started, I put together an article called Operation Basic Sanity: A Faster Way To Sane Hardware that spells out how a team can deliver a small batch of features equivalent to a functionally sane device. That article was actually inspired by an exercise I called the “Steel Thread Challenge”.

Steel Thread is a term I’ve seen used to describe a minimal thread of functionality through a product. I think of it as being able to <do something simple> to <some input> so that <something basic> happens on <some output>. As a hardware guy, a steel thread seems synonymous with a sane design. It’s small relative to what remains but significant in that you know the design is doing something right.
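To ground that template, here's what a steel thread might look like as a simulation sanity test. This is a minimal sketch assuming a hypothetical packet_router design with a simple pass-through path; the module name, ports and loopback behavior are all assumptions for illustration.

```systemverilog
// A hypothetical steel-thread sanity test: send one simple packet in,
// check that something basic comes out the other side.
module steel_thread_sanity;
  logic       clk = 0;
  logic       rst_n;
  logic [7:0] in_data;
  logic       in_valid;
  logic [7:0] out_data;
  logic       out_valid;

  // hypothetical design under test
  packet_router dut (
    .clk       (clk),
    .rst_n     (rst_n),
    .in_data   (in_data),
    .in_valid  (in_valid),
    .out_data  (out_data),
    .out_valid (out_valid)
  );

  always #5 clk = ~clk;

  initial begin
    // <do something simple> to <some input>...
    rst_n = 0; in_valid = 0;
    repeat (4) @(posedge clk);
    rst_n = 1;
    @(posedge clk);
    in_data  <= 8'hA5;
    in_valid <= 1;
    @(posedge clk);
    in_valid <= 0;

    // ...so that <something basic> happens on <some output>
    wait (out_valid);
    if (out_data == 8'hA5) $display("steel thread PASSED");
    else                   $error("steel thread FAILED: got %0h", out_data);
    $finish;
  end
endmodule
```

One packet in, one packet out: trivially small next to the finished product, but it proves the skeleton of the design does something right end to end.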

The Steel Thread Challenge: How-to

What You Need: The Steel Thread Challenge is a retrospective exercise that works back from a finished product. Choose a design that you’ve just finished or is at least well into development. You’ll also need a conference room with some whiteboard space.

Who You Need: You’ll focus on front-end development so include designers, verification engineers and modeling experts as well as the system architects and managers.

Step 1: Describe the product: On a whiteboard, draw a block diagram that includes the design and test environment. You should end up with something like this (except the blocks should be labelled)…

Step 2: Find the steel thread: Decide as a group what constitutes a steel thread (HINT: it should be a simple function that provides some tangible outcome). Identify the main processing path of the Steel Thread by drawing a line through your block diagram. That should get you to this point…

Step 3: Remove everything you don’t need: The goal is to find the smallest code footprint that supports the Steel Thread. By analyzing how your Steel Thread travels through the design and test environment, erase everything that isn’t needed (best to take a picture of what you have so you can redraw it if necessary!). First erase entire blocks if you can. If logic can be moved or simplified to remove entire blocks, make a list of necessary simplifications and then erase those blocks. From the blocks that remain, make a list of the features that aren’t necessary and could be ripped out. That should leave you with a list of what’s been removed and a diagram like this…

Step 4: Planning for the steel thread: Now the “challenge” part of the Steel Thread Challenge. This is a mock planning exercise where as a group you discuss how you would build, test and deliver a Steel Thread starting from a clean slate. Pretend your Steel Thread is your product and you have to deliver it asap. How would you get there and how would it be different from what you normally do?

  • would the code you write be any different than usual?
  • would teamwork and/or organization be the same or different?
  • what would your schedule look like?
  • what would your documentation look like?
  • would your task assignments be any different than normal?
  • how long would it take to deliver the steel thread?
  • is there planning that could be delayed relative to what you normally do?

If you and your team try this exercise, I’d like to hear how it goes. If you have other ideas for helping people jump the “big bang to incremental development” hurdle, I’d like to hear that also! This will be a big step for many teams so the more options we have, the better!

Neil

The Newbie’s Guide To AgileSoC.com

For anyone who’s new to AgileSoC.com, here’s a guide to what we have. First, the top-ranked articles. Then my favorites… not necessarily the most popular articles, but the ones I’m happiest with. Finally, a couple of sleeper articles.

…and don’t forget to follow the discussions on the LinkedIn group!

Top Ranked

  1. UVM Is Not A Methodology: This one is top ranked by a mile. Primarily for the verification engineers out there, this article discusses what teams need to keep in mind when adopting technology like UVM.
  2. Top-down ESL Design With Kanban: This article came together as I was reading 2 different books (ESL Models and their Applications (1st edition) and Kanban and Scrum: Making the Most of Both). It combines the modified V approach to system development that Brian Bailey and Grant Martin present and Kanban, which Bryan Morris and I have always thought of as being hardware friendly.
  3. An Agile Approach To ESL Modeling: This is a general article for the ESL crowd covering why modeling is important, how modeling can fail and how agile can help modeling teams succeed.
  4. Agile IC Development With Scrum – Part I: The first of a two-part video of the paper Bryan and I presented at SNUG San Jose in 2010. In the video, we talk about how hardware teams would have to evolve to adopt Scrum.
  5. IC Development And The Agile Manifesto: The Agile Manifesto spells out the fundamentals of agile development. This article shows how the manifesto is just as applicable to hardware development as it has been to software development.

My Favorites

  1. Operation Basic Sanity: A Faster Way To Sane Hardware: agile makes sense to a lot of people but getting started can be tricky, to say the least. I like this article because it gives new teams a way to get started without changing much of what they already do.
  2. Top-down ESL Design With Kanban: top ranked on the site and also one of my favorites.
  3. Agile Transformation In Functional Verification – Part I: I think this is another good article that helps verification teams take the mental leap into agile development.

Sleeper Articles

  1. Realizing EDA360 With Agile Development: If you’re not into the EDA360 message from Cadence, then the title might scare you away. But this isn’t just more EDA360. The theme here is convergence in hardware development, how functional teams drift apart over time and how agile can bring them back together.
  2. Why Agile Is A Good Fit For ASIC and FPGA Development: I think this was the first article we posted. I go back to it periodically just to see if our early writing still makes sense. I think it does!

Neil