Since posting The Great Agile Hardware Myth last week, I've been trying to think of an obvious myth that exists in the mainstream; some claim we've all made that, without fail, turns out to be absolutely and entirely false. It took a while, but I think I found it. We could have called it The RTL Done Myth, but I chose to call it The 90% Done Myth.
> We’re 90% done. If everything goes well, all that’s left is testing and debugging this last 10%.

> The Gantt chart that says we’re 90% done is lying to us. We know it’s lying to us because it’s lied to us before, many times. We’d rather not use this Gantt chart, but because tracking development progress is so important and we haven’t figured out a better way to do it, we feel like we have no option but to keep trying to believe it.
(NOTE: I think of the 90% Done Myth as being most familiar to RTL and verification engineers only because of my experience. If that’s not you, it may still apply but you’ll have to use your imagination.)
The 90% Done Myth starts with design and verification engineers building their list of tasks, adding them to the project schedule, then working to cross things off the list. For the first few months everything looks pretty good on the Gantt chart because we’re writing lots of documents and code and getting lots of stuff done.
- Build a schedule… done
- Write a design document… done
- Write a testplan… done
- Review some documents… done
- Write <these> RTL blocks… done
- Write <those> testbench components… done
- Integrate <these> with <those>… done
- Write <some script> for running tests… done
- Testing… in progress
Where code is concerned, “done” usually means done except for most of the testing. There are also a few little things here and there that don’t show up anywhere, but that’s just easy stuff. Get the code done and the Gantt chart expects everything from there to go smoothly.
Secretly we know better, though it’s always hard to start with the worst-case scenario (aka the likely scenario): that testing is going to take longer than we first suggested. Sooner or later everyone finds out anyway, when we put the design and testbench together only to find the quality of the code we’ve written is terribly poor. From there, we struggle for months, fixing each bug, hoping it’s the last, only to bump into the next. Going from bug to bug to bug, we never really know where we are, but we always feel like the next bug could be the last. In our status reports we pressure ourselves into suggesting we’re close to being done. Of course, that’s not the case until the very end, so we sit in status meetings trying to explain, for the 8th week in a row, why this time we’re finally, definitely 90% done.
Busting the 90% Done Myth for Developers
Trying to explain why you’re still 90% done for the 8th week in a row isn’t fun. You’re stressed and you start doubting yourself but you also still feel like you’re almost there… which is why it’s so hard to break the cycle. It’s time to trust your experience. You’re not almost there. Here are some ideas for avoiding the stress and uncertainty of 90% done.
- Resist the practice of task-based planning and reporting: planning and reporting tasks is what gets us into this mess, so a good place to start is to avoid tasks altogether. Documents don’t count as progress; neither does code that’s merely written. Without some way to measure the quality of what we’re producing, there’s too much ambiguity in task-based development.
- Use feature-based development: instead of completing tasks, build features. Prioritize them in terms of importance and complete them one at a time.
- Measure progress with passing tests: An objective way to measure progress is to stamp features as DONE when they are documented, coded completely and tested exhaustively and you’re confident there’ll be no need to come back to them. “Done except for…” is not done.
- Deliver passing tests as soon as possible: To suggest managers will be nervous dropping the task-based progress metrics they’ve quite likely used their entire career is an understatement to say the least. The only way I’ve found to calm nerves and build trust in objective progress metrics is to deliver progress as soon as possible, then keep delivering. To be clear, “as soon as possible” is not months from now nor is it weeks. We’re talking days. Deliver progress before anyone expects it and your boss will see you’re on the right path.
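The “done means all tests pass” rule above can be made concrete with a few lines of scripting. Here’s a minimal sketch of objective, feature-based progress tracking; the feature names and test counts are hypothetical, and a real flow would pull pass/fail counts from your regression results rather than hard-coding them:

```python
# Hypothetical per-feature test results, as might be scraped from a
# nightly regression. Names and numbers are illustrative only.
features = {
    "packet_rx":    {"tests_passing": 12, "tests_total": 12},
    "packet_tx":    {"tests_passing": 12, "tests_total": 12},
    "error_inject": {"tests_passing": 4,  "tests_total": 9},
}

def is_done(feature):
    # A feature counts as DONE only when every one of its tests passes.
    # "Done except for..." is not done, so partial passes contribute zero.
    return feature["tests_passing"] == feature["tests_total"]

done = sum(is_done(f) for f in features.values())
print(f"{done}/{len(features)} features done")
```

The point of the all-or-nothing `is_done` check is that a feature at 4/9 passing tests reports the same progress as one you haven’t started: zero. That’s what removes the ambiguity that task-based “90% done” estimates hide.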
Busting the 90% Done Myth for Managers
You don’t want to see it again… the rosy pictures painted by early progress reports transitioning seamlessly into a perpetual cycle of confusing status updates and weekly extensions (i.e. “We’re still 90% done. We just need to <blah>. We should be done next week.”). From there, you’re in panic mode, scrounging for people, tools and licenses that’ll help get things back on track with a limited budget. Here are some tips that can help avoid getting stuck at 90% done next time around.
- Insist on objective feature-based metrics: Ask questions like “has this code/feature been exhaustively tested?” or “if we stopped today could we ship this feature?”. Objective feature-based development will help you avoid the ambiguity that comes with task-based estimates and reporting.
- Limit work-in-progress: It’s easy for people to be overwhelmed by long SoC requirements documents. Limiting work-in-progress will condense the problem space in play at a given time and help your team keep its focus. Encourage development in small batches that take a couple weeks to a month or two to complete.
- Observe progress, don’t plan it: Using objective feature-based progress metrics and observing the rate at which your team is coding and testing features will help you build evidence-based delivery estimates. They’ll help you predict major delays, build contingency plans and prep upper management long before the usual panic sets in. An example…
- We’ve got 20 features on our list and the team has only tested (finished) 3 so far. Even though they say they’ll be done on time, if we continue at this pace it looks like we’re going to be late by a couple months. It’s still early so now would be a good time to ask for more people or propose we cut scope to meet our schedule.
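The arithmetic behind that kind of evidence-based estimate is simple extrapolation from observed throughput. Here’s a sketch using the numbers from the example above (20 features planned, 3 finished); the elapsed time and remaining schedule are hypothetical assumptions added for illustration:

```python
# Observed progress: 3 of 20 features finished (i.e. exhaustively tested).
features_total = 20
features_done = 3
weeks_elapsed = 3          # hypothetical: how long we've been observing
weeks_remaining_plan = 9   # hypothetical: time left on the Gantt chart

# Throughput so far, in features per week.
rate = features_done / weeks_elapsed

# Extrapolate: at this rate, how long to finish the remaining features?
weeks_needed = (features_total - features_done) / rate

# Projected schedule slip, in weeks.
slip_weeks = weeks_needed - weeks_remaining_plan
print(f"Projected slip: {slip_weeks:.0f} weeks")
```

With these assumed numbers the team is on pace to slip by roughly two months, which is exactly the kind of early warning that lets you ask for more people or cut scope while there’s still time to do either.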
I’m guessing we’ve all been under the influence of The 90% Done Myth at least once in our careers. It’s fueled a lot of last-minute blazes that make signing off an SoC or ASIC or FPGA or IP or whatever you’re building so stressful. Feature-based development can help bust through The 90% Done Myth and alleviate that stress next time around.
4 thoughts on “The 90% Done Myth”
I agree with the basic premise of this post, but doesn’t the assertion that you have to switch from task-based (or phase-gate) development to feature-based development fly in the face of the “agile isn’t all-or-nothing” from your last post? In my experience, changing from task to feature-based development is a fundamental shift in project management that, done properly (on large projects), ripples into how your architects, RTL coders, verification and backend folks work.
This also introduces concepts like iterative design processes where some phase like writing fault-grade tests manually (where ATPG is not feasible) should be completed per-feature. But manual fault-grade tests are brittle and break when features/logic change. And rearchitecting/designing products for at-speed ATPG is a non-trivial solution to changing your project management.
Of course, certain phases could always be compromised and still be task-based … but then, maybe you are more agile, but only 95% done …
Sorry to be contrary … I love agile when it works (absolute bright spots in my career), but some of it is harder than it sounds. Here’s a blog post about why some agile adoptions fail.
Mind-sets are hard to change, and once you do, you’ve changed the entire game.
I highly approve of contrary opinions so no need for ‘sorry’!
While switching from tasks to features is a significant difference, it’s far from all or nothing. This was the first step I made in agile about 5 years ago (functional verification completed as features). Not quite incremental development, but let’s call it incremental verification. Baby steps :). I did it by myself so there was no wide roll-out or risk to others. I also had a manager that was receptive, which helped. I didn’t do any other practices at the time. Using that 1 practice made me more productive and was encouragement to pick up others. Since then, I’ve added TDD. Now I would say incremental development and TDD are the 2 practices I can’t work without. I do them by myself or in small teams with others (that have seen the 90% myth in action) and indeed it causes ripples that affect how you work. But nothing about what I’ve done feels like all or nothing… but I also was very careful not to smack people in the face with it :).
Now roll out incremental development with a multi-disciplinary team of 10 and you maybe end up with a different story. Who knows. But there’s no need for that if the conditions aren’t right. It’s the type of thing you can work your way up to.
Oh, so I did misunderstand. I thought you were talking about feature-based more like the Agile-software context, as in “shippable features”. So as soon as I saw “Write RTL blocks”, I assumed you were suggesting that the product features should be developed/verified serially, which is a big difference from what I’ve seen.
I have done feature-based verification the last couple of years. From incremental testbench “feature” development to iterative coverage target “tasks”. The terminology gets a little confusing but I think your point is like Scrum’s “it’s not done until it’s done” tenet. Namely, be clear about what’s going to be accomplished and be clear about how “done” will be measured. On my last project, the verification and RTL teams (about 6-8 developers) that were doing Scrum (or at least Scrum-but), were more predictable, more productive and producing higher quality results than the other teams. It’s definitely my preferred modus operandi. And I would love to find a place that takes it to the next level.