Guest Blog: Demo Driven Technology Projects

By: Rolf V. Ostergaard

Far too often, technology projects lead to a lot of fun work with very low productivity – and few useful results. A demo-driven, scrum-like approach is a good way to fix that.

A technology project is…

Let me start by describing what I term a “technology project”. Most development is directly focused on building a product, often with a lot of focus on cost, schedule, manufacturability, etc. Risk is avoided, and solutions that fit the cost and schedule with low risk are preferred.

A technology project is intended to take more risk, do something new, and do this in a setting where the development of new technology is the goal. It is done outside of product development projects to make the risk acceptable. Done right, it actually reduces the risk of one or more subsequent projects, where the technology can be used in new products.

Most engineers love technology projects. For that reason, managers often use them to create variety in the engineers’ work. It’s like a treat for engineers. Sounds a bit nerdy, I know. But it works.

Show, don’t tell

The problem with many technology projects is that they can feel less important than “real” product development. But the contrary is true: they are more important. Without technology projects, innovation suffers and new revolutionary products become too risky to do.

Our solution to this is a scrum-like approach with demos. Each iteration of 2-4 weeks should demonstrate something. Preferably something physical, and preferably progress. The mantra here is “show, don’t tell”.

Sometimes it requires a lot of creativity to “show” something that can’t be built as a physical object yet. But eventually it gets there, and the demos become more and more interesting.

At other times the progress is negative – we are in a dead end and need to dig ourselves out again. That’s good too, because the cost of hitting a dead end in a technology project is much lower than in a tight product development project. That is risk reduction at work.

We would prefer to always hold the demo, regardless of the results. “Negative progress” requires more understanding from all stakeholders – and a demo is really a good way of getting everybody to understand.

If you read about Scrum and demos, you will hear a lot about how important it is to demo the real working product in a realistic user setting. I think that is beside the point here. For projects involving hardware, software, FPGA/ASIC, mechanical and other aspects, it may be completely unrealistic. Making a demo – any demo – is, however, always possible. And that is much better than not trying. Making a demo – any demo – will be the subject of a subsequent entry.

Get feedback

Part of the purpose of a demo is collecting feedback: from other engineers on ideas for improvement, potential risks, things to try, and test conditions to consider; and from the other stakeholders on where the value is, what would improve the solution, etc.

Most technology projects tend to have one or two applications in future products as their primary targets. Holding regular demos is also a really good way to get other application ideas identified. When a broader group gets to see what can be done with the technology, feedback in the form of “Could this be used for…?” is inevitable. That is a good catalyst for further innovation.

Is this Agile?

This method of making progress very visible and forcing easy-to-understand demos throughout a project helps in many ways. Does it make the project more agile? I believe so, in many ways, but that may not be the primary reason for doing it. The more important reasons are to keep focus on the project’s actual results and to make the project more visible.

A very nice side benefit comes from the way motivation works. If you are doing something that matters to others (and hell yes, we want to see a demo – anytime!), you are immediately much more motivated. Motivation improves productivity dramatically.

All in all this demo-driven approach to technology projects achieves three good things:

  • Productivity is boosted through the motivation to do good demos
  • Project visibility is increased, which also helps spread the knowledge
  • Innovation is regularly inspired by the technology demos

Hopefully this served as a bit of inspiration on how to improve outcomes, productivity and motivation in these traditionally difficult technology projects.

Q: How do you make technology projects visible and productive in your organisation?

Rolf V. Ostergaard is an M.Sc.EE from Denmark, who got his entrepreneurial inspiration by working in Silicon Valley back in the dot-com days. He co-founded the consulting business Axcon in 2004 and grew it to 20+ people focused on improving development of embedded systems – hardware, software, FPGA – the guts of smart devices. Rolf is specialised in signal integrity and enjoys doing training and consulting to fix SI problems before they occur. Find him blogging on www.axcon.dk/blog and as @rolfostergaard on Twitter.

When Done Actually Means DONE

In presentations I’ve given on agile hardware development, there’s one slide that seems to get across, better than any other, how agile development differs from the waterfall-like process a lot of hardware teams follow. I’ve used it a few times before and I find myself counting on it again for my talk at Agile2011 in Salt Lake City.

In prior talks, I’ve built up to this slide with a verbal contrast between waterfall and agile. I talk about waterfall development as a sequential process that culminates with one big-bang deliverable. Progress is tracked against a master schedule and based on what pre-defined tasks have been completed. The lessons learned at the end of a project are applied to the next project (i.e. you have a chance to improve once/project). I don’t claim that waterfall is exactly what hardware teams do, but it’s close’ish and the task-based planning certainly does widely apply.

The agile model, on the other hand, is an iterative process where each iteration culminates in production-ready code equivalent to some subset of the end product, meaning you can quit and release whenever you and your customer want. Progress is measured based on which features are complete at a point in time, and lessons from each iteration are applied to the next (i.e. you have a chance to improve several times/project).

These are obvious differences, but the underlying message is never quite apparent until we get to the visual. That’s where people can see that the critical difference is in the definition of DONE.

Let’s say a team following a waterfall model is half done building an XYZ, which to the team means their design documentation is done, the design has been coded and they’ve built some of the verification infrastructure. Lots of good stuff there, but because the team has based its progress on predefined tasks they’ve completed, as opposed to what they know actually works, half done can be pretty misleading. The documentation they have could be severely out-of-date, the design is probably somewhere between buggy and dead in the water, and the verification environment might not even compile yet. Needless to say, the second half of the project is going to take much longer than the first half did!

Contrast that with the definition of done in the agile model. Here, progress is based on features. When a team says “we’re half done”, they mean it. With little warning, they could release a product with half of the intended functionality. They know it’s done because it’s been tested, it’s been through the back-end, the software has been written and it’s ready to go.

Two different ways to measure progress; two very different meanings for the word DONE. To me, it’s this visual contrast with the waterfall and a redefinition of what it means to be DONE that helps the value of agile development really stand out.

neil

Q. How do you measure ‘done’? Code that’s written or code that’s production ready?

The New ‘A’ in EDA

I’m sure the suspense is killing you, so I’ll just come out with it. EDA now stands for ‘Electronic Design Agility’. ‘Electronic Design Automation’ is gone forever.

That’s official. There’s no going back. Here’s why…

The huge number of variables, unknowns and risks that have become part of modern SoC development create multi-(multi-)dimensional problems so complex that forming a viable solution through planning and analysis alone is simply not possible. The right solution can only come out of an agile, iterative design process that relies on feedback cycles connecting developers to each other, the development team to the fab, the hardware experts to the software experts and most importantly, the development team to its customer.

It was heartening to read support for the idea that agile is part of the future of EDA in a post by Richard Goering from #48DAC in San Diego. In a post covering a keynote talk by Gadi Singer [Note: I originally had ‘interview with Gadi Singer’ here but Richard pointed out that it was in fact coverage of his DAC keynote. Richard: Thanks for setting me straight!], vice president and general manager of Intel’s SoC Enabling Group, Singer mentions agile methods as being part of an ‘Imminent EDA Transformation’:

“This is about having lots of feedback loops starting very early,” Singer said. “For those of you familiar with software methodology, it is about agile practices, not waterfall practices. With agile, you assume you have to make iterations, and every time you learn more you make changes. You have learning and validation cycles all through the design.”

Obviously I wouldn’t be here if I didn’t agree with that opinion but just because one person says it’s so doesn’t make it so. To get to where I am now, I had to challenge myself with a few questions to convince myself of the possibilities…

  • When was the last time we finished a project on time and on budget when all the planning, estimating and scheduling were done up front?

OK, for a lot of people I could just stop there. I have no stats to back it up, but I’ve posed that question a few times, asking for a quick show of hands. I never see that many go up, so I’m convinced it’s rare. That’s the red flag that tells me something is just plain wrong with how we go about getting things done in hardware development.

In case you need a few more red flags…

  • When design documentation so rarely represents a completed design, what sense does it make to finish it before I start writing code?
  • If I’ve never accurately predicted the 3-day task I’ll be doing 4 months from Thursday, why would I continue to build detailed schedules that go out 6 months or longer?
  • How can I say code is done before I’ve seen it work properly?
  • Why is feature creep soooo bad? Do we really think that the features we start with will be what the market needs 18 or 24 months down the road?
  • If design specifications are so long and difficult to comprehend (assuming people actually read them in the first place), why do we keep writing them the way we do now?
  • If the day actually does come when I am hit by a bus, is my test plan really enough to tell someone exactly what I’m doing? Or would it be better to just show people what I’m doing as we go along?
  • Why wouldn’t we test, integrate and release code in small batches instead of doing everything at once?
  • What good comes from sheltering the development team from its customer? Wouldn’t it be better to hear about their problems first hand?

A lot of what we do from a process point-of-view just doesn’t make sense anymore, so if you find yourself nodding your head as you read any of these, you’re ready to become part of the imminent transformation Gadi Singer predicts.

For a long time, electronic design automation has been used to describe an entire world of tools that we use daily in semiconductor development. But the paradigm that revolves around automation as the solution to modern design problems is severely outdated. Being agile and having the ability to react to situations we can’t possibly predict has, without a doubt, become more important than automating processes we can predict. It’s time to update the paradigm accordingly.

So there it is… ‘Electronic Design Agility’ and the re-invention of EDA.

Like I said, that’s official and there’s no going back. It’s time to be agile!

neil

Q. What outdated practices have we been clinging to in hardware development even after they’ve repeatedly failed us?

Taking Inventory For An SoC Team

What counts as inventory for an SoC team? Here’s an excerpt from the latest update of the XtremeEDA Professional Development Series where Aimee Sutton and I lay out an exercise that helps SoC teams find out…

A large part of building a sound [development] strategy is understanding what the team is starting with. This is called a team’s inventory. People may think of inventory as tools, licenses, compute infrastructure and other resources but it’s more than that. Inventory includes people: their personalities, skill-sets, experience, history, and relationships. It even includes baggage from previous projects that could negatively affect the team’s future work. Inventory is a snapshot of every item and characteristic that helps/hinders the progress of the team.

Suggesting exercises like Taking Inventory was my way of nudging SoC teams toward the practice of doing regular retrospectives, as is typically done in agile development. We present it as a kick-start for a verification effort, but the scope could easily be opened up to include the rest of the team. I’m hoping that if teams find an exercise like this useful, they take it a step further and do it regularly.

If you’re interested in reading more about it, you can find the whole entry here.

neil

Q. What do you think of regularly Taking Inventory and/or running retrospectives in general? Would it make your team better or just use up time?

Remote Developers And The Feature-of-the-week

In all the discussions I’ve had regarding agile, and from all the presentations I’ve seen, articles I’ve read, etc., the most valuable thing I’ve heard so far has been what I’ve been calling the feature-of-the-week.

The feature-of-the-week was something I picked up at the very first APLN (Agile Project Leadership Network) group meeting I attended here in Calgary. The presentation was given by a fellow named Jonathan Rasmusson who is a Calgary-based agile coach and software developer (he’s got a blog here). Jonathan was giving an introductory level presentation to the group and talked quite a bit about how to get started. The best advice he had (which is also the best I’ve heard so far) was “don’t get caught up in trying to define agile for a team as part of some big master plan or organizational overhaul… just tell someone you’re going to deliver something that works in a week, then do it. That’s agile”.

I’m not sure if Jonathan actually said feature-of-the-week that day or if I imagined it afterward. If he did say it, I don’t know if he’s the one that actually coined the term, but it doesn’t matter. I credit him for the simplest yet most useful thing I’ve heard so far regarding agile methods in hardware development.

That was an important day for me because I do most of my work with clients remotely, which means I work as part of a development team but instead of being on the other side of the cube wall, I’m on the other side of a phone call. Remote development is not an ideal arrangement but it can work quite well. The communication barriers are obviously higher and I’ve seen that make things difficult. I’ve come to wonder, though, whether it’s the distance between remote developers that makes things difficult or the way we work and how we measure progress that presents the real difficulties. It was my first attempt at the feature-of-the-week that got me wondering.

In case I haven’t made it obvious, the feature-of-the-week entails writing, testing and releasing code for some feature in a week or less. Not just writing or testing some chunk of code: writing and testing a feature, then releasing it to the rest of the team, all in a week or less! A week was a fast turnaround for me since I was used to writing a few weeks’ worth of code, then coming back to see if it worked.
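To make the mechanics concrete, here is a minimal sketch of the kind of release gate a feature-of-the-week can hang on: run the tests that cover this week’s feature, and only publish the code to the team when everything passes. The test names and the run_sim launcher are placeholders I’ve invented for illustration; adapt them to whatever starts a simulation in your environment.

    #!/usr/bin/env python3
    """Hypothetical feature-of-the-week release gate.

    Runs the tests covering this week's feature and only tags a release
    when all of them pass. Test names and the 'run_sim' command are
    placeholders, not part of the original post.
    """
    import subprocess
    import sys
    from datetime import date

    # Tests that exercise this week's feature (invented names).
    FEATURE_TESTS = [
        "test_pkt_rx_basic",
        "test_pkt_rx_backpressure",
        "test_pkt_rx_err_inject",
    ]

    def run_test(name: str) -> bool:
        """Launch one simulation; the exit code tells us pass/fail."""
        result = subprocess.run(["run_sim", "--test", name])
        return result.returncode == 0

    def main() -> int:
        failures = [t for t in FEATURE_TESTS if not run_test(t)]
        if failures:
            print(f"Not releasing; {len(failures)} test(s) failed: {failures}")
            return 1
        # All green: tag the repo so the rest of the team can pick it up.
        tag = f"feature-{date.today().isoformat()}"
        subprocess.run(["git", "tag", tag], check=True)
        subprocess.run(["git", "push", "origin", tag], check=True)
        print(f"Released {tag}: feature written, tested and visible to the team.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The point isn’t the script; it’s that “released” gets a mechanical definition – tests pass, tag exists – instead of a status-report definition.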

The first time I tried this with a client, it was obvious that I was committing to a focused and intense approach to development. I was onsite for a week getting acquainted with the project and, while onsite, I committed to a delivery from home the following Friday. Granted, it was a pretty small delivery, but I didn’t realize how hard it would be until I sat down Monday and glanced at the calendar. Friday was only 5 days away and I had a lot of work to do.

I’ll spare you all the details, but in a nutshell, what started as a series of daunting delivery milestones ended up being the best thing I’ve done as a verification engineer. With the ability to demonstrate my work every 5-10 days (yes… the feature-of-the-week ended up being the feature-of-the-week-or-2), I could show that either:

a) I knew exactly what I was doing; or

b) I was out in left field and in need of help.

Demonstrating – as opposed to telling – is key, and the purpose of that wasn’t to prove I was actually working on the other end of the VPN; it was to close a tight feedback loop with the designer. I could ask a question about a feature, code the answer immediately, run it and ask for confirmation soon after (i.e. “take a look at this sim… is that what you meant Tuesday when you said <blah blah blah>?”).

That tight feedback loop kept me on track. I’ll admit to having a few misses, but I was able to recover quickly every time. To understand how the quick recovery was possible, try debugging 5 days’ worth of code and then 2 months’ worth of code. Which is easier?

From that experience, these are the lessons I learned:

  • A status report describing “lines/classes/modules/tests written” will never be as reliable as “code done and demonstrated”
  • Short tasks are easier to estimate and complete on time than long tasks
  • 1-week screw-ups are easier to fix than 2-month screw-ups
  • As a remote developer, regularly releasing code to the rest of the team is the only way to show them you know what you’re doing.

The last lesson was most important. I’m now convinced that the perceived productivity limitations of remote developers aren’t in fact caused by distance alone. They are a product of the way people work together and how they measure progress. Simply put, the feature-of-the-week with its weekly deliveries, tight feedback loop and increased transparency made me more productive as a remote contributor (if you want more details, you can tune in to an open discussion on our AgileSoC Linkedin group).

A valuable lesson from a very simple idea: telling someone you’re going to deliver something in a week… and then actually doing it!

neil

Q. How long do you write code before you test and release it?

Q. How do you communicate progress as a remote developer? Status reports or passing tests?

Enough Already About Collaborating With The Fab!

In the last 2+ years that I’ve dedicated to applying agile methods to hardware development, a big part of my focus has been on using agile to bring design, verification and software developers closer together. In my opinion, we have room for improvement in that area. From the beginning, I’ve seen incremental development as being the key for improvement because it pulls experts together, forcing them to continuously prioritize and exercise what they plan to deliver instead of hunkering down in their cubes and hoping things come together at the end.

But with all the effort I’ve put into this, I’m starting to wonder who else is thinking the same way. Is a lack of meaningful collaboration a problem in SoC development or am I seeing a problem that doesn’t actually exist? I’m starting to question my observations – or imagination perhaps – for a few different reasons.

The big one for me lately has been all the effort dedicated to increasing collaboration between design house, EDA and fab. Now I’m sure the value there is huge, but so much emphasis on collaboration between design house and fab, to me, insinuates that this next level of collaboration is a natural extension of what is already a highly collaborative environment within the design house. Is that true? Are cohesive, collaborative teams and shared priorities the norm in SoC development? Or, for example, are design and verification sub-teams formed and insulated from each other by ambiguous product specifications and bug tracking databases, as well as independent priorities, scheduling and reporting structures?

It’s also easy to notice all the attention being paid to enabling early software development as software becomes an increasingly dominant component of an SoC. That’s certainly been propelling innovation in ESL design, not to mention emulation and hardware acceleration. But in focusing on those areas, is it being suggested that pulling in software start dates is the missing link to getting successful product out the door? What about the fact that hardware and software tend to be treated as completely independent deliverables? Or that hardware and software development for the same SoC may be controlled by 2 separate groups within the same organization? Do early start dates compensate for that kind of deep-rooted disconnect?

Of course it’s easy to generalize. Not all teams are in the same boat with respect to how they work together. And I’m certainly not suggesting a culture of bickering and infighting. That’s not the point of this at all, because that’s something I don’t see. My points relate to the organizational and operational levels, and on those levels there are characteristics that SoC teams exhibit almost universally. Splitting into independent functional sub-teams is one example (modeling/architecture, design, verification, software, implementation, validation, etc). A preference for working toward a single big-bang deliverable is another tendency. Software and hardware teams that are organizationally separated are yet another. The list goes on.

The details and extent obviously vary team-by-team, but I don’t think I’m making this stuff up. I reckon there are significant gains to be made through a critical look at, and restructuring of, SoC development teams in the name of stronger collaboration. Take the time to question the barriers you put up between design, verification, software and everyone else who contributes to delivery. Imagine how regular shared goals and an agile approach to development can help break those barriers. If you’re wondering what agile SoC development could look like, you can read an article I co-authored with Bryan Morris called Agile Transformation in IC Development to find out.

And of course…

Despite what the title says, continue to pay attention to collaboration with EDA and fab. Continue to invest in ESL and emulation as a means of expediting software development. I don’t want to downplay either of those because they deserve the attention they’re getting. Just don’t forget to mix in a little time for some other things that are just as important.

neil

Q. What are your thoughts on SoC team structure and how we develop and deliver products? Are we good the way we are or are we due for a change?

Do Agile And Hardware Emulation Mix?

That’s a question I’ve asked myself several times in the last couple of years but have never got around to answering – not because I don’t think emulation belongs, but because I don’t have much experience with it.

My opinion: I think emulation belongs in agile hardware development for a few reasons… and I’m choosing to ignore the “faster than simulation” advantage that always comes with emulation. These are better reasons to like emulation:

  • It can be used to encourage application level thought
  • An emulator is a practical environment for hardware/software co-development
  • It provides a platform for connecting real peripherals and test equipment
  • Real peripherals and test equipment provide a better “visual” during customer and stakeholder demos

I think the real value of emulation is in these bullets that focus on the application-level view of a piece of hardware, the platform it provides for software developers, and how a product is used in a real system (or at least real’ish). If I’m right, then I’d like to think the question becomes: how do you get to an emulator as soon as possible so you can start realizing this value? I have a feeling this is where agile comes in… incremental development specifically, as well as everything it takes for great cross-functional teamwork.

I have my thoughts on how teams get started with agile development using simulation… I’d like to hear how others think it could work with emulation.

neil

Q. Are you an emulation expert that has seen the path of least resistance when it comes to getting product deployed to an emulation platform? What does it look like?

Regular Delivery… It’s Part Of Being Agile

In a post a few weeks ago I talked about how regular delivery wasn’t actually the point of being agile in hardware development. While the software guys can deliver on a weekly or monthly basis, taping out an ASIC every week doesn’t make much sense. Obviously. But the fact that regular delivery makes no sense for an ASIC doesn’t mean agile isn’t applicable. The value of agile in ASIC development is the ability to demonstrate progress. I believed that when I wrote it and I believe it now.

So why the contradiction in this post? Why am I now saying regular delivery should be part of agile hardware development? While it might not make sense for ASICs, regular delivery is entirely possible with an FPGA or an IP block.

Before you shake your head, think about why agile software teams deliver product regularly. The goal for any software team is to satisfy customer need. Sounds simple enough, except that sometimes customers don’t know what they need… so they ask for what they think they need or, worse, what they want. If you (the development team) disappear for, say, 9 months and then hit them with finished product, there’s a decent chance someone ends up being disappointed.

Development team: “All done! This is exactly what you asked for in this giant ambiguous requirements spec! I hope you love it!”

Customer: “I don’t love it because that’s not what I asked for.”… or… “Hmmm, I guess that’s what I asked for, but what am I supposed to do with it?”… or… “Super, but I don’t need that anymore”.

Experience has shown agile software teams that keeping your customer involved in development helps avoid delivering a disappointment. Involved doesn’t mean telling, it means showing. Showing customers product as it’s being built helps them identify what they need and prioritize what they might not need. This works in software. There’s no reason to think it couldn’t work for an FPGA or an IP block.

“You mean we should be able to ship an FPGA load or an IP block every few weeks/months?”

Right.

“Even though it isn’t done?”

Yup.

“But we’d have to change our whole delivery process or we’d end up wasting all our time on packaging the thing instead of building it.”

Correct.

“…and we’d need automated regressions…”

Uh huh.

“…and we’d have to help them with integration so they know how to use it…”

Right again.

“…and we’d need user guides and documentation in place…”

Correct.

“…we’d probably need a reference design, maybe even a board ready way before we’d usually have one…”

Definitely.

[pause]

“…but what if there’s a problem with what we’ve built?”

That’s kind of the point.

With an ASIC, it’s not possible to incrementally deliver the real thing, but there is still value in hardware simulation/co-simulation demos, emulation demos, intermediate size and power calculations, etc. It’s not perfect, but it’s better than going the distance, sharing progress through status reports and hand waving, only to find disappointment when you’re done.

With FPGAs and IP it’s different. Regularly delivering product to your customer even though it isn’t done could end up being an invaluable learning experience. And unlike an ASIC, it’s entirely reasonable. Yes, it could require major changes to how you develop and package your product, but the customer feedback could be worth your while. Even if you only do it once, think of it as working out the kinks so there’s a better chance of bug-free delivery when you are done.
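Of the changes in that exchange, the automated regressions are the most mechanical place to start. As a rough sketch – the test list and the run_sim command below are stand-ins I’ve made up, not a prescription – a nightly regression gate can be as simple as: run everything, summarize, and fail loudly so a broken build never gets packaged.

    #!/usr/bin/env python3
    """Hypothetical nightly regression gate for an FPGA or IP block.

    Runs the full test list in parallel and prints a pass/fail summary.
    'run_sim' and the test names are placeholders for whatever launches
    your simulator.
    """
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor

    TESTS = [f"test_{n:03d}" for n in range(1, 41)]  # placeholder list

    def run_test(name):
        # One simulator process per test; the exit code is pass/fail.
        proc = subprocess.run(["run_sim", "--test", name], capture_output=True)
        return name, proc.returncode == 0

    def main() -> int:
        with ThreadPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(run_test, TESTS))
        failed = [name for name, ok in results if not ok]
        print(f"{len(TESTS) - len(failed)}/{len(TESTS)} tests passed")
        for name in failed:
            print(f"  FAIL: {name}")
        # A non-zero exit blocks packaging/delivery until someone looks.
        return 1 if failed else 0

    if __name__ == "__main__":
        sys.exit(main())

A gate like this is what makes “ship it even though it isn’t done” safe enough to try.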

Better to discover in a month or 2 that what you’re building isn’t quite what your customers need than to fund a massive development effort and find out months later.

Possible? Not possible? Ridiculous? Let me know what you think!

neil

Agile Hardware Starts As A Steel Thread

For me, a key to agile is incremental development. Most software developers reading this will probably say “duh… no kidding” but it’s a new concept to hardware folks.

If you’re new to agile, incremental development is something I’ve talked about in several articles on AgileSoC.com. In a nutshell, it’s where product features are designed, implemented, tested and ready to go in small batches. Products start small but functional, then grow in scope until they are delivered (they can also be delivered *as* they grow, but I’m not going there today).

Because most hardware teams are used to developing products as big bang feature sets, incremental development can be a big step. To help teams get started, I put together an article called Operation Basic Sanity: A Faster Way To Sane Hardware that spells out how a team can deliver a small batch of features equivalent to a functionally sane device. That article was actually inspired by an exercise I called the “Steel Thread Challenge”.

Steel Thread is a term I’ve seen used to describe some minimal thread of functionality through a product. I think about it as being able to <do something simple> to <some input> so that <something basic> happens on <some output>. As a hardware guy, a steel thread seems synonymous with a sane design. It’s small relative to what remains, but significant in that you know the design is doing something right.
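For a flavour of what a steel thread might look like as an executable test, here is a sketch written against cocotb, a Python-based testbench framework. The design, the signal names and the protocol (one valid-qualified word in, the same word out of a passthrough) are all invented for illustration; the point is only the shape of it: one simple stimulus, one basic check.

    # test_steel_thread.py - a minimal steel-thread test (cocotb-style).
    # All signal names below are made up; adapt them to your design.
    import cocotb
    from cocotb.clock import Clock
    from cocotb.triggers import RisingEdge

    @cocotb.test()
    async def steel_thread(dut):
        """<do something simple> to <some input> so that <something
        basic> happens on <some output>: one word in, same word out."""
        cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())

        # Reset (assumed active high for this hypothetical design).
        dut.rst.value = 1
        for _ in range(2):
            await RisingEdge(dut.clk)
        dut.rst.value = 0

        # Drive a single input word, valid for one cycle.
        dut.in_data.value = 0xA5
        dut.in_valid.value = 1
        await RisingEdge(dut.clk)
        dut.in_valid.value = 0

        # Wait for the output to go valid, with a crude timeout.
        for _ in range(100):
            await RisingEdge(dut.clk)
            if dut.out_valid.value == 1:
                break
        else:
            raise AssertionError("steel thread: no output seen")

        assert dut.out_data.value == 0xA5, "steel thread: data mismatch"

If a test like this passes, you know the design is doing something right – which is exactly the sanity a steel thread is meant to establish.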

The Steel Thread Challenge: How-to

What You Need: The Steel Thread Challenge is a retrospective exercise that works back from a finished product. Choose a design that you’ve just finished or is at least well into development. You’ll also need a conference room with some whiteboard space.

Who You Need: You’ll focus on front-end development so include designers, verification engineers and modeling experts as well as the system architects and managers.

Step 1: Describe the product: On a whiteboard, draw a block diagram that includes the design and test environment. You should end up with something like this (except the blocks should be labelled)…

Step 2: Find the steel thread: Decide as a group what constitutes a steel thread (HINT: it should be a simple function that provides some tangible outcome). Identify the main processing path of the Steel Thread by drawing a line through your block diagram. That should get you to this point…

Step 3: Remove everything you don’t need: The goal is to find the smallest code footprint that supports the Steel Thread. By analyzing how your Steel Thread travels through the design and test environment, erase everything that isn’t needed (best to take a picture of what you have so you can redraw it if necessary!). First erase entire blocks if you can. If logic can be moved or simplified to remove entire blocks, make a list of necessary simplifications and then erase those blocks. From the blocks that remain, make a list of the features that aren’t necessary and could be ripped out. That should leave you with a list of what’s been removed and a diagram like this…

Step 4: Planning for the steel thread: Now the “challenge” part of the Steel Thread Challenge. This is a mock planning exercise where, as a group, you discuss how you would build, test and deliver a Steel Thread starting from a clean slate. Pretend your Steel Thread is your product and you have to deliver it asap. How would you get there, and how would it be different from what you normally do?

  • would the code you write be any different than usual?
  • would teamwork and/or organization be the same or different?
  • what would your schedule look like?
  • what would your documentation look like?
  • would your task assignments be any different than normal?
  • how long would it take to deliver the steel thread?
  • is there planning that could be delayed relative to what you normally do?

If you and your team try this exercise, I’d like to hear how it goes. If you have other ideas for helping people jump the “big bang to incremental development” hurdle, I’d like to hear that also! This will be a big step for many teams so the more options we have, the better!

Neil

Regular Delivery Isn’t Really The Point

Living the role of agile hardware development advocate has been a learning process for me. Obviously, I’ve learned a lot in terms of agile development process but that’s not all. I’ve also had a chance to speak with people seeing new ideas for the first time. That’s been very interesting for me.

Through talking with people, I learned quickly that people sit in 1 of 2 camps when you present the basics of agile hardware development. The first camp responds with “what a great idea… this makes a lot of sense”. The second camp responds with “this doesn’t make any sense at all… how the heck am I supposed to tape-out an ASIC every week/month/quarter?”.

To the first: I enjoy the long discussions we have. I’d say great ideas, opinions and concerns flow both ways. There’s always some skepticism (which there absolutely should be) but there’s a level of acceptance and that’s been encouraging.

To the second: this may seem a little odd… but discussions with you guys have become more valuable to me. I’ve found that a great way to learn is to have my opinions methodically dismantled by an all-out pessimist. I get to hear all the reasons why I’m wrong in one short burst. Awesome! The argument against usually starts with how product and delivery are too different between software and hardware, so agile can’t work. They might be able to deploy product weekly, but the idea of us taping out an ASIC several times is entirely ridiculous.

That discussion can go further but it’s the “product and delivery are too different” argument that allows people to dismiss agile out-of-hand.

Conveniently, I recently found that I’m not the only one bumping into that argument. In this article just published on eetimes (which is a good intro level article and not just for embedded software developers), James Grenning makes a very good point related to development and delivery of embedded software. It seems some in the embedded software world are using the same argument to dismiss the potential of agile:

Because value cannot be delivered each iteration some say that agile cannot be used on embedded software development. My opinion is different. Instead of delivering value, we provide visible progress. I don’t mean doing show and tell on what might be built, but rather a demonstration of real working running software.

Substitute “hardware” for “software” in that quote and I think it’s a decent response to the “product and delivery are too different” argument. Of course it’s absurd to think an ASIC should be delivered every week or every month. In fact that argument is so valid that it’s silly to get caught up discussing it. Instead, and just as James notes for embedded software, the potential for agile in hardware comes from regularly demonstrating progress instead of just describing it in documentation and discussing it in meetings.

Regularly demonstrating progress is where the discussion of agile in hardware development should be starting, regardless of what camp you’re in.

Neil