By Example: Done vs. DONE

In a previous post, When Done Actually Means DONE, I shared a slide that I’ll present at Agile2011. I use it to illustrate the differences between waterfall and agile development models in the context of hardware development. The first response I got after posting it came from AgileSoC guest blogger @rolfostergaard, who suggested that a few examples could make it even clearer.

Thanks Rolf. Good idea.

In case you haven’t read When Done Actually Means DONE, I’ve included the slide again in this post to get things started. I use it to show that there are different ways to describe how done you are depending on the development model you’re using. If you’re basing progress on tasks you’ve completed, you’re measuring done. If you’re basing it on working features, you’re measuring DONE.

What’s the difference? Being done means you’ve hit a milestone that won’t hold water mainly because there’s no way to objectively measure its quality. You may think you’re DONE, but without tests or some other reasonable way to measure quality, you’ll very likely need to come back to it. For that reason, done is misleading and it gets people into trouble.

DONE means you’ve hit a milestone that you can unambiguously quantify (or at least quantify with far less ambiguity). Here, you’re confident that what you’ve just finished will require very little or no follow-up because you can see and measure results.

In short, done isn’t done at all… but DONE is. Confused? Here’s where a few examples might help.

My RTL Is Done

Classic. Your design team is under pressure to meet a scheduled RTL-complete project milestone. As always, it’s a highly visible milestone – to the development team, management and possibly even the customer – because it comes with the connotation that the product is nearly finished… save for the minor details that it hasn’t been verified or pushed through the back-end. The RTL is done though, so that’s great. Cross it off the list!

My Test Environment Is Done

This is a close second to my RTL is done. Your verification team has finished its test environment and all that’s left, supposedly, is writing and running tests. Of course, very little has been done to confirm that the test environment does what it’s supposed to do. That becomes obvious as soon as the first test runs: the configurations are invalid, the stimulus transactions are poorly formed, the BFMs don’t obey protocols and the model is outdated. All unfortunate, because people are now anxiously expecting results that the environment can’t quite deliver yet! Sure, the test environment is done… except for everything that doesn’t work.

Feature <blah> is DONE

Now we’re getting somewhere. No, your RTL isn’t done. No, your verification environment isn’t done. But who cares! You have something better: a small portion of each is DONE, and that’s enough to run tests and collect coverage results that verify feature <blah> is ready to go. No ambiguity there. The feature works and you have the passing tests to prove it. You’re delivering something that’s DONE.

(Ideally, you would have passed the design to the physical team as well. But given that you’ve made a big step forward in credibility and confidence relative to the first two examples, we’ll forget about the physical design for now.)
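For the verification folks, here’s a rough sketch of what a feature-level DONE check boils down to: a self-checking test plus some objective measure of coverage. Everything in it is hypothetical – the parity feature, the dut_parity() stand-in for the design and the crude coverage count are all invented for illustration – but it shows the kind of evidence that separates DONE from done.

```c
/* Hypothetical feature-level DONE check: even parity generation over a byte.
 * dut_parity() is a stand-in for the slice of RTL that is actually DONE;
 * the XOR-fold below acts as an independent reference model. */
#include <stdio.h>
#include <stdint.h>

/* Stand-in for the design under test: returns 1 for an odd number of ones. */
static int dut_parity(uint8_t data)
{
    int ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (data >> i) & 1;
    return ones & 1;
}

int main(void)
{
    int failures = 0;
    int saw_parity[2] = {0, 0};   /* crude functional coverage: hit both outcomes */

    for (int value = 0; value < 256; value++) {
        uint8_t x = (uint8_t)value;          /* reference model: XOR-fold */
        x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
        int expected = x & 1;

        int actual = dut_parity((uint8_t)value);
        if (actual != expected) {
            printf("FAIL: value=0x%02X expected=%d actual=%d\n",
                   value, expected, actual);
            failures++;
        }
        saw_parity[actual] = 1;
    }

    printf("%s: %d failures, parity coverage %s\n",
           failures ? "FAIL" : "PASS", failures,
           (saw_parity[0] && saw_parity[1]) ? "complete" : "incomplete");
    return failures ? 1 : 0;
}
```

Trivial, obviously, but the point stands: DONE has a pass/fail answer you can show someone, where done only has an opinion.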

The Software API Is Done

A hardware team normally implements an API according to a spec received from the software team. After the hardware team is done, it’s assumed that sometime later the software team will build drivers and applications on top of the API and release the finished product. The problem is that the initial API was a best guess from the software team, and early in their development they find that the API that’s been sitting done for a couple of months doesn’t give them the access to the hardware they need. Sure, the API was done, but until it’s updated, system performance is seriously restricted and some functionality is completely absent.

The Software Demo Is DONE

An SoC, by definition, is part hardware, part software. So why settle for an API that’s done when the software is required for delivery? As the hardware team completes API functionality, give it to the software team so they can actually use it. Deliver it as a C model, an emulation platform or some other form that makes sense. Use this demo version of the entire solution (hardware + software) to judge whether or not you’re DONE.
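As a minimal sketch of the idea – and only a sketch, since the register map, the function names and the instantly completing behaviour here are all invented – a hardware C model plus a tiny software demo might look something like this:

```c
/* Hypothetical C model of the hardware API delivered so far: a control
 * register, a status register and a start bit. The register map and the
 * instantly-completing behaviour are placeholders, not a real design. */
#include <stdio.h>
#include <stdint.h>

#define REG_CTRL   0x00
#define REG_STATUS 0x04
#define CTRL_START 0x1u
#define STAT_DONE  0x1u

static uint32_t regs[64];   /* toy register file standing in for the RTL */

static void hw_write(uint32_t addr, uint32_t data)
{
    regs[addr / 4] = data;
    if (addr == REG_CTRL && (data & CTRL_START))
        regs[REG_STATUS / 4] |= STAT_DONE;   /* model "finishes" immediately */
}

static uint32_t hw_read(uint32_t addr)
{
    return regs[addr / 4];
}

/* The software demo: the driver/application layer exercising the API the
 * way the eventual firmware will. */
int main(void)
{
    hw_write(REG_CTRL, CTRL_START);

    if (hw_read(REG_STATUS) & STAT_DONE) {
        printf("demo PASS: operation started and completed\n");
        return 0;
    }
    printf("demo FAIL: status never reported done\n");
    return 1;
}
```

The value isn’t in the C itself; it’s that both teams are now judging progress against the same running demo instead of against a spec that may or may not survive contact with real software.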

I’m Still 90% Done

I’ll end with a personal favorite that kind of fits into this discussion. Like everyone else, I’ve played this card many times. What does 90% done mean? It means you think you’re almost DONE but you really have no idea because there’s no way of knowing for sure. Before you say it again, do yourself a favor:

  1. admit that you don’t know how DONE you are
  2. find a way to measure what you think is DONE
  3. isolate what isn’t DONE

I’m going to try and follow my own advice on that one :).

Done is a false milestone. It’s ambiguous. It’s one-dimensional; you may have written some code, but that’s it. There’s no reliable way to measure done, and teams that measure progress in terms of done eventually find they’re not as DONE as they thought.

DONE, on the other hand, comes with results. It’s multi-dimensional; you’ve written a chunk of code and it’s been tested, so you know it works. DONE is measured in passing simulations, software demos and any other means that objectively confirm the code you’ve written is high quality. Teams that measure progress in terms of DONE know how far they’ve come and how far they have to go.

Done is a feeling. DONE is progress.

-neil

Q. What examples of done do you see in SoC development?

Agile Hardware Starts As A Steel Thread

For me, a key to agile is incremental development. Most software developers reading this will probably say “duh… no kidding” but it’s a new concept to hardware folks.

If you’re new to agile, incremental development may be a new term; it’s something I’ve talked about in several articles on AgileSoC.com. In a nutshell, it’s where product features are designed, implemented, tested and ready to go in small batches. Products start small but functional, then grow in scope until they are delivered (they can also be delivered *as* they grow, but I’m not going there today).

Because most hardware teams are used to developing products as big-bang feature sets, incremental development can be a big step. To help teams get started, I put together an article called Operation Basic Sanity: A Faster Way To Sane Hardware that spells out how a team can deliver a small batch of features equivalent to a functionally sane device. That article was actually inspired by an exercise I called the “Steel Thread Challenge”.

Steel Thread is a term I’ve seen used to describe a minimal thread of functionality through a product. I think of it as being able to <do something simple> to <some input> so that <something basic> happens on <some output>. To a hardware guy like me, a steel thread is pretty much synonymous with a sane design. It’s small relative to what remains, but significant in that you know the design is doing something right.
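To make the template concrete, here’s a sketch of what a steel-thread check might look like for an imaginary loopback-style design – the design, the loopback_path() stand-in and the single stimulus value are all hypothetical. Write one value to the input and confirm the same value shows up on the output; that’s the whole thread:

```c
/* Hypothetical steel-thread check: <write a byte> to <the input port>
 * so that <the same byte> appears on <the output port>. */
#include <stdio.h>
#include <stdint.h>

/* Placeholder for the minimal path through the design; in a real project
 * this would be the handful of blocks the steel thread passes through. */
static uint8_t loopback_path(uint8_t in)
{
    return in;
}

int main(void)
{
    uint8_t stimulus = 0xA5;
    uint8_t response = loopback_path(stimulus);

    if (response == stimulus) {
        printf("steel thread PASS: 0x%02X in, 0x%02X out\n", stimulus, response);
        return 0;
    }
    printf("steel thread FAIL: 0x%02X in, 0x%02X out\n", stimulus, response);
    return 1;
}
```

Everything the full product needs beyond that single path is exactly what the exercise below asks you to erase.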

The Steel Thread Challenge: How-to

What You Need: The Steel Thread Challenge is a retrospective exercise that works back from a finished product. Choose a design that you’ve just finished or that is at least well into development. You’ll also need a conference room with some whiteboard space.

Who You Need: You’ll focus on front-end development, so include designers, verification engineers and modeling experts as well as the system architects and managers.

Step 1: Describe the product: On a whiteboard, draw a block diagram that includes the design and test environment. You should end up with something like this (except the blocks should be labelled)…

Step 2: Find the steel thread: Decide as a group what constitutes a steel thread (HINT: it should be a simple function that provides some tangible outcome). Identify the main processing path of the Steel Thread by drawing a line through your block diagram. That should get you to this point…

Step 3: Remove everything you don’t need: The goal is to find the smallest code footprint that supports the Steel Thread. By analyzing how your Steel Thread travels through the design and test environment, erase everything that isn’t needed (best to take a picture of what you have so you can redraw it if necessary!). First erase entire blocks if you can. If logic can be moved or simplified to remove entire blocks, make a list of necessary simplifications and then erase those blocks. From the blocks that remain, make a list of the features that aren’t necessary and could be ripped out. That should leave you with a list of what’s been removed and a diagram like this…

Step 4: Planning for the steel thread: Now the “challenge” part of the Steel Thread Challenge. This is a mock planning exercise where, as a group, you discuss how you would build, test and deliver a Steel Thread starting from a clean slate. Pretend your Steel Thread is your product and you have to deliver it ASAP. How would you get there and how would it be different from what you normally do?

  • would the code you write be any different than usual?
  • would teamwork and/or organization be the same or different?
  • what would your schedule look like?
  • what would your documentation look like?
  • would your task assignments be any different than normal?
  • how long would it take to deliver the steel thread?
  • is there planning that could be delayed relative to what you normally do?

If you and your team try this exercise, I’d like to hear how it goes. If you have other ideas for helping people jump the “big bang to incremental development” hurdle, I’d like to hear that also! This will be a big step for many teams so the more options we have, the better!

Neil

The Newbie’s Guide To AgileSoC.com

For anyone who’s new to AgileSoC.com, here’s a guide to what we have. First, the top-ranked articles. Then my favorites… these aren’t necessarily the most popular but they’re the ones I’m happiest with. Finally, a couple of sleeper articles.

…and don’t forget to follow the discussions in the LinkedIn group!

Top Ranked

  1. UVM Is Not A Methodology: This one is top ranked by a mile. Primarily for the verification engineers out there, this article discusses what teams need to keep in mind when adopting technology like UVM.
  2. Top-down ESL Design With Kanban: This article came together as I was reading two different books (ESL Models and their Applications (1st edition) and Kanban and Scrum: Making the Most of Both). It combines the modified-V approach to system development that Brian Bailey and Grant Martin present with Kanban, which Bryan Morris and I have always thought of as being hardware friendly.
  3. An Agile Approach To ESL Modeling: This is a general article for the ESL crowd. It covers why modeling is important, how modeling can fail and how agile can help modeling teams succeed.
  4. Agile IC Development With Scrum – Part I: the first of a two-part video of the paper Bryan and I presented at SNUG San Jose in 2010. In the video, we talk about how hardware teams would have to evolve to adopt Scrum.
  5. IC Development And The Agile Manifesto: The Agile Manifesto spells out the fundamentals of agile development. This article shows how the manifesto is just as applicable to hardware development as it has been to software development.

My Favorites

  1. Operation Basic Sanity: A Faster Way To Sane Hardware: agile makes sense to a lot of people, but getting started can be tricky to say the least. I like this article because it gives new teams a way to get started without changing much of what they already do.
  2. Top-down ESL Design With Kanban: top ranked on the site and also one of my favorites.
  3. Agile Transformation In Functional Verification – Part I: I think this is another good article that helps verification teams take the mental leap into agile development.

Sleeper Articles

  1. Realizing EDA360 With Agile Development: If you’re not into the EDA360 message from Cadence, then the title might scare you away. But this isn’t just more EDA360. The theme here is convergence in hardware development, how functional teams drift apart over time and how agile can bring them back together.
  2. Why Agile Is A Good Fit For ASIC and FPGA Development: I think this was the first article we posted. I go back to it periodically just to see if our early writing still makes sense. I think it does!

Neil

A Better Way To Manage Conference Proposals

UPDATE: With acceptance emails going out for Agile2012 this morning, I thought it’d be a good time to re-spin this entry from last year’s run-up to Agile2011. If you substitute 2012 for 2011 and Dallas for Salt Lake City, I reckon everything else I had to say last year applies again this year. I like the process.

A special note for this year’s conference: Agile2012 will have two talks on ASIC and FPGA development. My proposal for “TDD And A New Paradigm For Hardware Verification” was accepted. As well, Tobias Leisgang from Texas Instruments had a talk accepted: How to play basketball with a soccer team? Making IC development more agile. You’ll find both on the Emerging Applications of Agile stage.

One hardware talk at Agile2011, two hardware talks for Agile2012… hopefully there’s a pattern developing here!

As I mentioned in a post a couple of weeks ago, submitting a proposal to Agile2011 was a last minute idea for me. I figured that since I had a collection of material ready to go, it wouldn’t take me long to put something together. I was almost right.

I’ll start by saying that the submit/review process for Agile2011 was about as transparent as I could have imagined. I’m used to writing an abstract, pulling together an outline, sending both away and then waiting for a yay or a nay a couple months down the road. That’s how SNUG generally works (I’ve presented 5 papers there… last in 2010) and DVCon as well (in 2010, Bryan Morris and I submitted a paper called Agile Techniques For ASIC and FPGA Development… it was rejected).

I’ve never had a problem with that, but the submission process for Agile2011 was different… dare I say better… though I’ll let people decide for themselves.

The submission process was all handled online. Would-be presenters log in to write a summary of their talk in 600 characters or less, describe a set of “learning outcomes” (which I believe tells the audience what they should take away from the talk and who we think should attend) and finally fill in the “Process/Mechanics” section, which I think most people use to capture their outline. You also select a stage (of which there are several) where your proposal best fits. Here’s a description of the stages if you’re interested. I’ll talk more about them another time.

Pretty normal so far, except that all proposals are treated as proposals-in-progress, so you can edit them at any time. They’re also viewable by other presenters and anyone else who is interested, not just the official reviewers. I liked that. Seeing and discussing proposals before they’re accepted was something I wasn’t used to. I could contact other would-be presenters to ask questions. We could also work together to avoid overlap. I’m guessing that makes for a better conference by helping avoid repetitive material. We’ll see in August if that’s actually the case.

Also interesting was that there were options for session length and type. I believe there are 30-, 60- and 90-minute talks. There are also 3-hour workshops and tutorials. Last are 10-minute lightning talks. Having different types of sessions is new to me, especially the workshops. Agile is supposed to be interactive, so I’m thinking the workshops end up being a conference highlight.

The best part, though, was that anyone could append comments to a proposal. That’s more feedback than I was used to and I really appreciated it. On average, each proposal got maybe a half dozen comments. From what I’ve seen and heard, the comments were pretty constructive and helped would-be presenters tune their proposals over a number of weeks (mid-January to late March). All the comments I received motivated me to tweak my proposal. I’m guessing that most people ended up with proposals that were stronger than what they started with… I know mine was… so I’m pretty grateful to the people who commented.

From there, the official reviewers for each stage made their comments on each proposal, after which I’m assuming they consulted with each other and the conference organizers to decide which proposals were accepted.

Not an exact explanation of what happens but close enough.

Between the initial write-up, answering reviewer questions and revising my outline and summary, I spent maybe 4-5 hours on my proposal. Looking through what other people had, some definitely spent more time than that. There were also people who had 2 or 3 proposals… not sure where they found the time for it! I’m taking all the effort that people obviously put into their proposals as a sign that the prep time ends up being worth it 🙂

If you’re interested, you can take a look at my proposal for Applying Agile In IC Development… We’re Not That Different After All and the feedback it received (I’m excited that it was accepted to the Agile for Embedded Systems stage). Feel free to let me know what you think of it. You can find more details regarding how proposals are reviewed, accepted, etc. in a write-up by Mark Levison on his blog. If you’re interested in seeing how the Agile Alliance has things set up, you can go to http://submit2011.agilealliance.org.

Neil Johnson

Lessons From A Software Conference – Part 1 of Several

Agile2xxx is a conference put on by the Agile Alliance that has happened every year since 2003 (I think?). This year, roughly 1500 software developers from hither and yon will meet on August 8th at Agile2011 in Salt Lake City to hear colleagues speak, participate in workshops and discuss the next big things in software development. It’s not the only agile conference of the year, but from what I understand it’s a big one.

I’ve enjoyed the opportunities I’ve had to present at conferences, but Agile2011 is a chance for an experience that’s quite different. Why, you ask? Well, at hardware conferences I talk to my peers – people who do the same things I do (which is mainly hardware verification). Acceptance and feedback can be mixed, but people normally get the gist of what I’m talking about. Compare that to me speaking at Agile2011.

  • Agile2011 is a software conference… I’m a hardware guy.
  • Attendees (I’m assuming) are experienced with agile… me, experienced-ish.
  • Presenters (again, I’m assuming) are experts with agile… me, not exactly.

Hardly an exact fit. So why then would I submit a proposal? Admittedly, that was a last-minute, Twitter-initiated decision.

Compared to agile software experts, I’d consider myself a rookie. A very enthusiastic rookie with decent experience, but a rookie nonetheless. I have a long list of ideas for what should work in hardware development. I’ve done small-scale rollouts of some of those ideas and they’ve turned out to be successful. I’ve put a lot of time into it, and because I’ve already given presentations and written about what I’ve been doing on www.AgileSoC.com, I figured I could cobble together a proposal relatively quickly. The submission process was an interesting experience. I’ll get to that in the next entry (spoiler alert: I did submit a proposal and it was accepted).

Back to the why… the first reason for me submitting a proposal to Agile2011 is to get expert opinions from others on what I think should work. I’m convinced that agile can work in hardware development, even though it is relatively unknown. I’ve seen it work on a small scale. The software guys use it so successfully that it’s ridiculous to think that we can’t use some of what they’ve already shown to work. The obvious differences mean it can’t be exactly the same, but the obvious similarities mean it won’t be entirely different either. Presenting my thoughts to a conference crowd gets ideas heard, critiqued and (hopefully) validated.

The second reason (which is where the title of the blog comes from) is that I actually intend to learn something from these strangers from the land of software! From the submission process to the preparation and the presentations, workshops, side discussions and everything else. It’s a different crowd and a different experience so I expect to be bombarded with great new ideas from a few different angles.

Finally, I would hope the software developers listening to me talk about agile in hardware development will get something out of it too. If I didn’t think I could pull that off, there wouldn’t be much point in me presenting anything!

Agile is big in the software community but relatively unheard of in the hardware community. I think AgileSoC.com has started to change that. I hope a presentation at Agile2011 is a good next step.

Stay tuned for the trials and tribulations 🙂

Neil Johnson

More AgileSoC? Are You Kidding Me?

AgileSoC.com is an idea that Bryan Morris and I started about 2 years ago after a few long conversations at SNUG San Jose in 2009. Bryan had been interested in agile software development for a while; I was completely new to it. Even though I didn’t know anything about agile, it didn’t take long for me to buy into the idea that it made sense. A lot of what happens in hardware development, particularly in front-end design and verification, is pretty similar to what the software guys do. Sure, the packaging and delivery are different (to varying degrees depending on what your target technology is), but there are a lot of day-to-day activities that are similar. In the last couple of years, I’ve heard many people say that agile “makes sense”. I think those people are right.

Admittedly, I have a pretty short attention span, so I was curious to see how long I’d stay interested. I figured 6 months or so and I’d start to wear out. Turns out I was wrong.

We presented our first paper, A Giant, Baby Step Forward: Agile Techniques for Hardware Design, at SNUG Boston in the fall of 2009. That was about 90% book report and 10% practice/observation. It was good for a first crack, but I think we did better at SNUG San Jose 2010 with Agile IC Development With Scrum. We have video of both presentations here. In between, I did a presentation to a group of software developers at a Calgary APLN meeting in November 2009. That was a little stressful because it was my first time talking to people who know and use agile. Since then, we’ve had 2 articles posted in the Mentor Graphics Verification Horizons newsletter, Agile Transformation In IC Development and Agile Transformation In Functional Verification. We’ve also had entries in the Cadence community blogs, a nice write-up by Richard Goering and a guest entry that I put together.

Time has passed and we’ve been writing articles pretty steadily. It’s been 2 years and, if anything, my interest has grown. As of now, we have 17 articles posted on AgileSoC.com by 4 authors. The number of articles slowly grows as we stumble into new ideas and find the time to put them together.

And now, for some reason, I think I need a blog so I can spend more time on this stuff. I shake my head as I type that, but here I am anyway. My first guess is that this will be an informal dumping ground for ideas and/or experiences that aren’t good enough to support “real” articles on their own. I’m not entirely sure, so I’ll start with that and see where things end up!