When Done Actually Means DONE

In presentations I’ve given on agile hardware development, there’s one slide that seems to get the point across better than any other: how agile development differs from the waterfall-like process a lot of hardware teams follow. I’ve used it a few times before and I find myself counting on it again for my talk at Agile2011 in Salt Lake City.

In prior talks, I’ve built up to this slide with a verbal contrast between waterfall and agile. I talk about waterfall development as a sequential process that culminates in one big-bang deliverable. Progress is tracked against a master schedule and based on which pre-defined tasks have been completed. The lessons learned at the end of a project are applied to the next project (i.e. you have a chance to improve once/project). I don’t claim that waterfall is exactly what hardware teams do, but it’s close-ish, and the task-based planning certainly is widely applied.

The agile model, on the other hand, is an iterative process where each iteration culminates in production ready code equivalent to some subset of the end product, meaning you can quit and release whenever you and your customer want. Progress is measured based on which features are complete at a point in time and lessons from each iteration are applied to the next (i.e. you have a chance to improve several times/project).

These are obvious differences, but the underlying message is never quite apparent until we get to the visual. That’s where people can see that the critical difference is in the definition of DONE.

Let’s say a team following a waterfall model is half done building an XYZ, which to the team means their design documentation is done, the design has been coded and they’ve built some of the verification infrastructure. Lots of good stuff there, but because the team has based its progress on predefined tasks they’ve completed as opposed to what they know actually works, half done can be pretty misleading. The documentation they have could be severely out-of-date, the design is probably somewhere between buggy and dead in the water, and the verification environment might not even compile yet. Needless to say, the second half of the project is going to take much longer than the first half did!

Contrast that with the definition of done in the agile model. Here, progress is based on features. When a team says “we’re half done”, they mean it. With little warning, they could release a product with half of the intended functionality. They know it’s done because it’s been tested, it’s been through the back-end, the software has been written and it’s ready to go.

Two different ways to measure progress; two very different meanings for the word DONE. To me, it’s this visual contrast with the waterfall and a redefinition of what it means to be DONE that helps the value of agile development really stand out.


Q. How do you measure ‘done’? Code that’s written or code that’s production ready?

Remote Developers And The Feature-of-the-week

In all the discussions I’ve had regarding agile and from all the presentations I’ve seen, articles I’ve read, etc., the most valuable thing I’ve heard so far has been what I’ve been calling the feature-of-the-week.

The feature-of-the-week was something I picked up at the very first APLN (Agile Project Leadership Network) group meeting I attended here in Calgary. The presentation was given by a fellow named Jonathan Rasmusson who is a Calgary-based agile coach and software developer (he’s got a blog here). Jonathan was giving an introductory level presentation to the group and talked quite a bit about how to get started. The best advice he had (which is also the best I’ve heard so far) was “don’t get caught up in trying to define agile for a team as part of some big master plan or organizational overhaul… just tell someone you’re going to deliver something that works in a week, then do it. That’s agile”.

I’m not sure if Jonathan actually said feature-of-the-week that day or if I imagined it afterward. If he did say it, I don’t know if he’s the one who actually coined the term, but it doesn’t matter. I credit him for the simplest yet most useful thing I’ve heard so far regarding agile methods in hardware development.

That was an important day for me because I do most of my work with clients remotely, which means I work as part of a development team but instead of being on the other side of the cube wall, I’m on the other side of a phone call. Remote development is not an ideal arrangement but it can work quite well. The communication barriers are obviously higher and I’ve seen that make things difficult. I’ve come to wonder, though, whether it’s the distance between remote developers that makes things difficult or the way we work and how we measure progress that presents the real difficulties. It was my first time attempting the feature-of-the-week that got me wondering.

In case I haven’t made it obvious, the feature-of-the-week entails writing, testing and releasing code for some feature in a week or less. Not just writing or testing some chunk of code: writing and testing a feature, then releasing it to the rest of the team, all in a week or less! A week was a fast turnaround for me since I was used to writing a few weeks’ worth of code, then coming back to see if it worked.

The first time I tried this with a client, it was obvious that I was committing to a focused and intense approach to development. I was onsite for a week getting acquainted with the project and while onsite, I committed to a delivery from home the following Friday. Granted, it was a pretty small delivery but I didn’t realize how hard it would be until I sat down Monday and glanced at the calendar. Friday was only 5 days away and I had a lot of work to do.

I’ll spare you all the details, but in a nutshell what started as a series of daunting delivery milestones ended up being the best thing I’ve done as a verification engineer. With the ability to demonstrate my work every 5-10 days (yes… the feature-of-the-week ended up being the feature-of-the-week-or-2) I could show that either:

a) I knew exactly what I was doing; or

b) I was out in left field and in need of help.

Demonstrating – as opposed to telling – is key and the purpose of that wasn’t to prove I was actually working on the other end of the VPN, it was to close a tight feedback loop with the designer. I could ask a question about a feature, code the answer immediately, run it and ask for confirmation soon after (i.e. “take a look at this sim… is that what you meant Tuesday when you said <blah blah blah>?”).

That tight feedback loop kept me on track. I’ll admit to having a few misses, but I was able to recover quickly every time. To understand how the quick recovery was possible, try debugging 5 days’ worth of code and then 2 months’ worth of code. Which is easier?

From that experience, these are the lessons I learned:

  • “Lines/classes/modules/tests written” in a status report will never be as reliable as “code done and demonstrated”
  • Short tasks are easier to estimate and complete on time than long tasks
  • 1 week screw-ups are easier to fix than 2 month screw-ups
  • As a remote developer, regularly releasing code to the rest of the team is the only way to show them you know what you’re doing.

The last lesson was most important. I’m now convinced that perceived productivity limitations of remote developers aren’t in fact caused by distance alone. They are a product of the way people work together and how they measure progress. Simply put, the feature-of-the-week, with its weekly deliveries, tight feedback loop and increased transparency, made me more productive as a remote contributor (if you want more details, you can tune into an open discussion in our AgileSoC LinkedIn group).

A valuable lesson from a very simple idea: telling someone you’re going to deliver something in a week… and then actually doing it!


Q. How long do you write code before you test and release it?

Q. How do you communicate progress as a remote developer? Status reports or passing tests?

Enough Already About Collaborating With The Fab!

In the last 2+ years that I’ve dedicated to applying agile methods to hardware development, a big part of my focus has been on using agile to bring design, verification and software developers closer together. In my opinion, we have room for improvement in that area. From the beginning, I’ve seen incremental development as being the key for improvement because it pulls experts together, forcing them to continuously prioritize and exercise what they plan to deliver instead of hunkering down in their cubes and hoping things come together at the end.

But with all the effort I’ve put into this, I’m starting to wonder who else is thinking the same way. Is a lack of meaningful collaboration a problem in SoC development or am I seeing a problem that doesn’t actually exist? I’m starting to question my observations – or imagination perhaps – for a few different reasons.

The big one for me lately has been all the effort dedicated to increasing collaboration between design house, EDA and fab. Now I’m sure the value there is huge, but so much emphasis on collaboration between design house and fab, to me, insinuates that this next level of collaboration is a natural extension of what is already a highly collaborative environment within the design house. Is that true? Are cohesive, collaborative teams and shared priorities the norm in SoC development? Or, for example, are design and verification sub-teams formed and insulated from each other by ambiguous product specifications and bug tracking databases, as well as independent priorities, scheduling and reporting structures?

It’s also easy to notice all the attention being paid to enabling early software development as software becomes an increasingly dominant component of an SoC. That’s certainly been propelling innovation in ESL design not to mention emulation and hardware acceleration. But in focusing on those areas, is it being suggested that pulling in software start dates is the missing link to getting successful product out the door? What about the fact that hardware and software tend to be treated as completely independent deliverables? Or that hardware and software development for the same SoC may be controlled by 2 separate groups within the same organization? Do early start dates compensate for that kind of deep rooted disconnect?

Of course it’s easy to generalize. Not all teams are in the same boat with respect to how they work together. And I’m certainly not suggesting a culture of bickering and infighting. That’s not the point of this at all because that’s something I don’t see. My points relate to the organizational and operational levels and on those levels there are characteristics that SoC teams exhibit almost universally. Splitting into independent functional sub-teams is one example (modeling/architecture, design, verification, software, implementation, validation, etc). A preference for working toward a single big-bang deliverable is another tendency. Software and hardware teams that are separated organizationally is yet another. The list goes on.

The details and extent obviously vary team-by-team but I don’t think I’m making this stuff up. I reckon there are significant gains to be made through a critical look at and restructuring of SoC development teams in the name of stronger collaboration. Take the time to question the barriers you put up between design, verification, software and everyone else that contributes to delivery. Imagine how regular shared goals and an agile approach to development can help break these barriers. If you’re wondering what agile SoC development could look like, you can read an article I co-authored with Bryan Morris called Agile Transformation in IC Development to find out.

And of course…

Despite what the title says, continue to pay attention to collaboration with EDA and fab. Continue to invest in ESL and emulation as a means of expediting software development. I don’t want to downplay either of those because they deserve the attention they’re getting. Just don’t forget to mix in a little time for some other things that are just as important.


Q. What are your thoughts on SoC team structure and how we develop and deliver products? Are we good the way we are or are we due for a change?

Regular Delivery… It’s Part Of Being Agile

In a post a few weeks ago I talked about how regular delivery wasn’t actually the point of being agile in hardware development. While the software guys can deliver on a weekly or monthly basis, taping out an ASIC every week doesn’t make much sense. Obviously. But just because regular delivery makes no sense for an ASIC doesn’t mean agile isn’t applicable. The value of agile in ASIC development is the ability to demonstrate progress. I believed that when I wrote it and I believe it now.

So why the contradiction in this post? Why am I now saying regular delivery should be part of agile hardware development? While it might not make sense for ASICs, regular delivery is entirely possible with an FPGA or an IP block.

Before you shake your head, think about why agile software teams deliver product regularly. The goal for any software team is to satisfy customer need. Sounds simple enough except that sometimes customers don’t know what they need… so they ask for what they think they need or worse, what they want. If you (the development team) disappear for, say 9 months and then hit them with finished product, there’s a decent chance someone ends up being disappointed.

Development team: “All done! This is exactly what you asked for in this giant ambiguous requirements spec! I hope you love it!”

Customer: “I don’t love it because that’s not what I asked for.”… or… “Hmmm, I guess that’s what I asked for but what am I supposed to do with it?”… or… “Super, but I don’t need that anymore”.

Experience has shown agile software teams that keeping your customer involved in development helps avoid delivering a disappointment. Involved doesn’t mean telling, it means showing. Showing customers product as it’s being built helps them identify what they need and prioritize what they might not need. This works in software. There’s no reason to think it couldn’t work for an FPGA or an IP block.

“You mean we should be able to ship an FPGA load or an IP block every few weeks/months?”

Yep.

“Even though it isn’t done?”

Exactly.

“But we’d have to change our whole delivery process or we’d end up wasting all our time on packaging the thing instead of building it.”

You would.

“…and we’d need automated regressions…”

Uh huh.

“…and we’d have to help them with integration so they know how to use it…”

Right again.

“…and we’d need user guides and documentation in place…”

Those too.

“…we’d probably need a reference design, maybe even a board ready way before we’d usually have one…”

Probably, yes.

“…but what if there’s a problem with what we’ve built?”

That’s kind of the point.
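None of those objections is a showstopper, and the automated regressions in particular can start very small. As a rough sketch (the test names and commands below are hypothetical stand-ins, not from any real project), a regression runner can be little more than a loop that treats a non-zero exit code as a failure:

```python
import subprocess
import sys

# Hypothetical self-checking tests: each entry maps a test name to a
# command that exits 0 on pass, non-zero on fail. Real entries would
# invoke your simulator instead of the python one-liners used here.
TESTS = {
    "sanity_reset":  [sys.executable, "-c", "assert True"],
    "basic_traffic": [sys.executable, "-c", "assert 1 + 1 == 2"],
}

def run_regression(tests):
    """Run every test and return {name: passed} for the whole suite."""
    results = {}
    for name, cmd in tests.items():
        proc = subprocess.run(cmd)
        results[name] = (proc.returncode == 0)
    return results

if __name__ == "__main__":
    results = run_regression(TESTS)
    for name, passed in sorted(results.items()):
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    # Exit non-zero if anything failed so a cron job can flag it.
    sys.exit(0 if all(results.values()) else 1)
```

Nightly or per-checkin, the point is the same: a pass/fail answer nobody has to assemble by hand before a delivery.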

With an ASIC, it’s not possible to incrementally deliver the real thing but there is still value in hardware simulation/co-simulation demos, emulation demos, intermediate size and power calculations, etc. It’s not perfect but it’s better than going the distance, sharing progress through status reports and hand waving, only to find disappointment when you’re done.

With FPGAs and IP it’s different. Regularly delivering product to your customer even though it isn’t done could end up being an invaluable learning experience. And unlike an ASIC, it’s entirely reasonable. Yes it could require major changes to how you develop and package your product but the customer feedback could be worth your while. Even if you only do it once, think of it as working out the kinks so there’s a better chance of bug free delivery when you are done.

Better to discover that what you’re building isn’t quite what your customers need after a month or 2 than to fund a massive development effort and find out months later.

Possible? Not possible? Ridiculous? Let me know what you think!


Agile Hardware Starts As A Steel Thread

For me, a key to agile is incremental development. Most software developers reading this will probably say “duh… no kidding” but it’s a new concept to hardware folks.

If you’re new to agile, incremental development is something I’ve talked about in several articles on AgileSoC.com. In a nutshell, it’s where product features are designed, implemented, tested and ready to go in small batches. Products start small but functional then grow in scope until they are delivered (they can also be delivered *as* they grow but I’m not going there today).

Because most hardware teams are used to developing products as big bang feature sets, incremental development can be a big step. To help teams get started, I put together an article called Operation Basic Sanity: A Faster Way To Sane Hardware that spells out how a team can deliver a small batch of features equivalent to a functionally sane device. That article was actually inspired by an exercise I called the “Steel Thread Challenge”.

Steel Thread is a term I’ve seen used to describe some minimal thread of functionality through a product. I think about it as being able to <do something simple> to <some input> so that <something basic> happens on <some output>. As a hardware guy, a steel thread seems synonymous with a sane design. It’s small relative to what remains but significant in that you know the design is doing something right.
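To make the <do something simple> template concrete, here’s a minimal sketch of what a steel thread check looks like in spirit. The pass-through DUT model is a made-up stand-in (a real steel thread would drive RTL through a testbench); the point is one simple input and one basic check on the output:

```python
# Hypothetical stand-in for a design under test: a trivial
# pass-through path where an input word comes out unchanged.
class PassThroughDut:
    def __init__(self):
        self._pipe = []

    def push(self, word):
        """<some input>: drive one word into the design."""
        self._pipe.append(word)

    def pop(self):
        """<some output>: collect one word from the design."""
        return self._pipe.pop(0)

def test_steel_thread():
    """<do something simple> and check <something basic> happens."""
    dut = PassThroughDut()
    dut.push(0xAB)
    assert dut.pop() == 0xAB, "steel thread broken: output != input"

if __name__ == "__main__":
    test_steel_thread()
    print("steel thread: PASS")
```

Everything else the design will eventually do is deliberately absent; a passing steel thread just tells you the main path through the product is alive.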

The Steel Thread Challenge: How-to

What You Need: The Steel Thread Challenge is a retrospective exercise that works back from a finished product. Choose a design that you’ve just finished or is at least well into development. You’ll also need a conference room with some whiteboard space.

Who You Need: You’ll focus on front-end development so include designers, verification engineers and modeling experts as well as the system architects and managers.

Step 1: Describe the product: On a whiteboard, draw a block diagram that includes the design and test environment. You should end up with something like this (except the blocks should be labelled)…

Step 2: Find the steel thread: Decide as a group what constitutes a steel thread (HINT: it should be a simple function that provides some tangible outcome). Identify the main processing path of the Steel Thread by drawing a line through your block diagram. That should get you to this point…

Step 3: Remove everything you don’t need: The goal is to find the smallest code footprint that supports the Steel Thread. By analyzing how your Steel Thread travels through the design and test environment, erase everything that isn’t needed (best to take a picture of what you have so you can redraw it if necessary!). First erase entire blocks if you can. If logic can be moved or simplified to remove entire blocks, make a list of necessary simplifications and then erase those blocks. From the blocks that remain, make a list of the features that aren’t necessary and could be ripped out. That should leave you with a list of what’s been removed and a diagram like this…

Step 4: Planning for the steel thread: Now the “challenge” part of the Steel Thread Challenge. This is a mock planning exercise where as a group you discuss how you would build, test and deliver a Steel Thread starting from a clean slate. Pretend your Steel Thread is your product and you have to deliver it asap. How would you get there and how would it be different from what you normally do?

  • would the code you write be any different than usual?
  • would teamwork and/or organization be the same or different?
  • what would your schedule look like?
  • what would your documentation look like?
  • would your task assignments be any different than normal?
  • how long would it take to deliver the steel thread?
  • is there planning that could be delayed relative to what you normally do?

If you and your team try this exercise, I’d like to hear how it goes. If you have other ideas for helping people jump the “big bang to incremental development” hurdle, I’d like to hear that also! This will be a big step for many teams so the more options we have, the better!


Regular Delivery Isn’t Really The Point

Living the role of agile hardware development advocate has been a learning process for me. Obviously, I’ve learned a lot in terms of agile development process but that’s not all. I’ve also had a chance to speak with people seeing new ideas for the first time. That’s been very interesting for me.

Through talking with people, I learned quickly that people sit in 1 of 2 camps when you present the basics of agile hardware development. The first camp responds with “what a great idea… this makes a lot of sense”. The second camp responds with “this doesn’t make any sense at all… how the heck am I supposed to tape-out an ASIC every week/month/quarter?”.

To the first: I enjoy the long discussions we have. I’d say great ideas, opinions and concerns flow both ways. There’s always some skepticism (which there absolutely should be) but there’s a level of acceptance and that’s been encouraging.

To the second: this may seem a little odd… but discussions with you guys have become more valuable to me. I’ve found that a great way to learn is to have my opinions methodically dismantled by an all out pessimist. I get to hear all the reasons why I’m wrong in one short burst. Awesome! The argument against usually starts with how product and delivery are too different between software and hardware so agile can’t work. They might be able to deploy product weekly, but us taping out an ASIC several times is entirely ridiculous.

That discussion can go further but it’s the “product and delivery are too different” argument that allows people to dismiss agile out-of-hand.

Conveniently, I recently found that I’m not the only one bumping into that argument. In this article just published on EE Times (which is a good intro-level article and not just for embedded software developers), James Grenning makes a very good point related to development and delivery of embedded software. It seems some in the embedded software world are using the same argument to dismiss the potential of agile:

Because value cannot be delivered each iteration some say that agile cannot be used on embedded software development. My opinion is different. Instead of delivering value, we provide visible progress. I don’t mean doing show and tell on what might be built, but rather a demonstration of real working running software.

Substitute “hardware” for “software” in that quote and I think it’s a decent response to the “product and delivery are too different” argument. Of course it’s absurd to think an ASIC should be delivered every week or every month. In fact that argument is so valid that it’s silly to get caught up discussing it. Instead, and just as James notes for embedded software, the potential for agile in hardware comes from regularly demonstrating progress instead of just describing it in documentation and discussing it in meetings.

Regularly demonstrating progress is where the discussion of agile in hardware development should be starting, regardless of what camp you’re in.