Are You Interested In Becoming An AgileSoC Contributor?

I’m starting to see people coming out of the woodwork. Hardware designers and verification engineers, modeling and architecture experts, embedded systems experts…there are people interested in seeing agile hardware development go mainstream. Many of those same people have valuable experience to share.

What about you?

Do you have experience with agile hardware development that you’ve been wanting to tell the world about? Haven’t had the opportunity to do it yet? Well here’s your opportunity. If you’re interested in becoming a guest contributor… or even a regular contributor to the AgileSoC blog, let me know at


Taking Inventory For An SoC Team

What counts as inventory for an SoC team? Here’s an excerpt from the latest update of the XtremeEDA Professional Development Series where Aimee Sutton and I lay out an exercise that helps SoC teams find out…

A large part of building a sound [development] strategy is understanding what the team is starting with. This is called a team’s inventory. People may think of inventory as tools, licenses, compute infrastructure and other resources but it’s more than that. Inventory includes people: their personalities, skill-sets, experience, history, and relationships. It even includes baggage from previous projects that could negatively affect the team’s future work. Inventory is a snapshot of every item and characteristic that helps/hinders the progress of the team.

Suggesting exercises like Taking Inventory was my way of nudging SoC teams toward the practice of doing regular retrospectives, as is typically done in agile development. We present it as a kick-start for a verification effort, but the scope could easily be opened up to include the rest of the team. I’m hoping that if teams find an exercise like this useful, they’ll take it a step further and do it regularly.

If you’re interested in reading more about it, you can find the whole entry here.


Q. What do you think of regularly Taking Inventory and/or running retrospectives in general? Would it make your team better or just use up time?

Remote Developers And The Feature-of-the-week

In all the discussions I’ve had regarding agile and from all the presentations I’ve seen, articles I’ve read, etc, the most valuable thing I’ve heard so far has been what I’ve been calling the feature-of-the-week.

The feature-of-the-week was something I picked up at the very first APLN (Agile Project Leadership Network) group meeting I attended here in Calgary. The presentation was given by a fellow named Jonathan Rasmusson who is a Calgary-based agile coach and software developer (he’s got a blog here). Jonathan was giving an introductory level presentation to the group and talked quite a bit about how to get started. The best advice he had (which is also the best I’ve heard so far) was “don’t get caught up in trying to define agile for a team as part of some big master plan or organizational overhaul… just tell someone you’re going to deliver something that works in a week, then do it. That’s agile”.

I’m not sure if Jonathan actually said feature-of-the-week that day or if I imagined it afterward. If he did say it, I don’t know if he’s the one that actually coined the term, but it doesn’t matter. I credit him for the simplest yet most useful thing I’ve heard so far regarding agile methods in hardware development.

That was an important day for me because I do most of my work with clients remotely, which means I work as part of a development team but instead of being on the other side of the cube wall, I’m on the other side of a phone call. Remote development is not an ideal arrangement but it can work quite well. The communication barriers are obviously higher and I’ve seen that make things difficult. I’ve come to wonder, though, whether it’s the distance between remote developers that makes things difficult or whether the way we work and how we measure progress presents the real difficulties. It was my first time attempting the feature-of-the-week that got me wondering.

In case I haven’t made it obvious, the feature-of-the-week entails writing, testing and releasing code for some feature in a week or less. Not just writing or testing some chunk of code: writing and testing a feature, then releasing it to the rest of the team, all in a week or less! A week was a fast turnaround for me since I was used to writing a few weeks worth of code, then coming back to see if it worked.

The first time I tried this with a client, it was obvious that I was committing to a focused and intense approach to development. I was onsite for a week getting acquainted with the project and while onsite, I committed to a delivery from home the following Friday. Granted, it was a pretty small delivery but I didn’t realize how hard it would be until I sat down Monday and glanced at the calendar. Friday was only 5 days away and I had a lot of work to do.

I’ll spare you all the details, but in a nutshell what started as a series of daunting delivery milestones ended up being the best thing I’ve done as a verification engineer. With the ability to demonstrate my work every 5-10 days (yes… the feature-of-the-week ended up being the feature-of-the-week-or-2) I could show that either:

a) I knew exactly what I was doing; or

b) I was out in left field and in need of help.

Demonstrating – as opposed to telling – is key. The purpose wasn’t to prove I was actually working on the other end of the VPN; it was to close a tight feedback loop with the designer. I could ask a question about a feature, code the answer immediately, run it and ask for confirmation soon after (i.e. “take a look at this sim… is that what you meant Tuesday when you said <blah blah blah>?”).

That tight feedback loop kept me on track. I’ll admit to having a few misses, but I was able to recover quickly every time. To understand how the quick recovery was possible, try debugging 5 days worth of code and then 2 months worth of code. Which is easier?

From that experience, these are the lessons I learned:

  • “Lines/classes/modules/tests written” in a status report will never be as reliable as “code done and demonstrated”
  • Short tasks are easier to estimate and complete on time than long tasks
  • 1 week screw-ups are easier to fix than 2 month screw-ups
  • As a remote developer, regularly releasing code to the rest of the team is the only way to show them you know what you’re doing.
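As a concrete (and entirely hypothetical) illustration of “code done and demonstrated”, here’s the shape a week-sized deliverable might take: a small feature, in this sketch a reference model for a fixed-depth FIFO with overflow detection, released together with the test that demonstrates it working. The class and function names are my own invention for illustration, not from any real project.

```python
class FifoModel:
    """Reference model for a fixed-depth FIFO (illustrative only)."""

    def __init__(self, depth):
        self.depth = depth
        self.items = []
        self.overflowed = False  # sticky flag, like an overflow status bit

    def push(self, item):
        """Push an item; refuse and flag overflow when the FIFO is full."""
        if len(self.items) >= self.depth:
            self.overflowed = True
            return False
        self.items.append(item)
        return True

    def pop(self):
        """Pop the oldest item, or None when empty."""
        return self.items.pop(0) if self.items else None


def demo():
    """The 'demonstration' that ships with the feature: a passing self-check."""
    fifo = FifoModel(depth=2)
    assert fifo.push(1) and fifo.push(2)
    assert not fifo.push(3)      # third push overflows a depth-2 FIFO
    assert fifo.overflowed
    assert fifo.pop() == 1 and fifo.pop() == 2
    return "feature demonstrated"
```

The point isn’t the FIFO; it’s that the feature and its demonstration are one deliverable, small enough to finish, run and show in under a week.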

The last lesson was most important. I’m now convinced that perceived productivity limitations of remote developers aren’t in fact caused by distance alone. They are a product of the way people work together and how they measure progress. Simply put, the feature-of-the-week with its weekly deliveries, tight feedback loop and increased transparency made me more productive as a remote contributor (if you want more details, you can tune into an open discussion on our AgileSoC LinkedIn group).

A valuable lesson from a very simple idea: telling someone you’re going to deliver something in a week… and then actually doing it!


Q. How long do you write code before you test and release it?

Q. How do you communicate progress as a remote developer? Status reports or passing tests?

You Can’t Automate Holistic Verification

For anyone stuck watching #48DAC via twitter (like I was) and presumably to those who were there in person, it was easy to feel that Cadence remains dedicated to the EDA360 vision it released over a year ago.

EDA360 has received a lot of attention. Some good… some bad. Put me in the camp that likes EDA360. I think it was an interesting move for Cadence to share their view of the future so openly, thereby inviting criticism from anyone with a little time on their hands. I don’t mind that some of it qualifies as marketing hype and I don’t mind that it’s not entirely original. After filtering the hype, EDA360 is full of good stuff. I don’t even care if Cadence ends up delivering the vision exactly as stated through a new line of smart tools and by creating a collaborative ecosystem. I’d rather see them set the bar high and miss than the alternative.

But (isn’t there always a ‘but’)… the one section that has bugged me since I read it has been the one that talks about Holistic Verification. I’ve been pondering this for a while. At first, I liked the sound of it. Saying just those 2 words makes me think of a process where a team takes a step back from what they’ve always done so they can retool and rethink how they go about their business. Holistic makes it sound like every option is on the table. With a nice round view of what a team needs to accomplish, teammates build a strategy that is right for them and their product. They then arm themselves with the skills and tools they need to get it done.

To me, that is holistic verification.

The EDA360 view of holistic verification is very close, yet very different. From page 24 of EDA360: The Way Forward for Electronic Design:

“Holistic verification—use the best tool for the job. There are many different techniques for digital verification, including simulation, formal verification, and emulation. Approaches to analog simulation range from transistor-level SPICE simulation to analog behavioral modeling with the Verilog-AMS language. Working with the single goal of verifying the design intent, and utilizing a verification plan, EDA tools [the emphasis here is mine] must choose the best approach for any given phase of the verification process and feed this back to the verification plan. The result is a holistic approach to verification using the most productive methods for each task.”

It’s not too difficult to see where Cadence is going with this. They want to create smarter tools to offload as many monotonous tasks as possible, allowing teams to build test suites that are as comprehensive as possible as quickly as possible. They want to be an enabler for teams looking for more efficient ways to verify designs… which is pretty much every team I know of. I get that. I’m happy to see them (and other EDA vendors) do that. My problem, though, is the suggestion that “EDA tools must choose the best approach for the job” (I actually had to read it a few different times to realize what I missed at first). Tool-driven decision making is something I have a problem with (for anyone that read why I think UVM Is Not A Methodology, that shouldn’t be a surprise).

I automatically question tools posing as solutions and that’s the feeling I get from EDA360 ‘holistic verification’. I hope an EDA360-like evolution is in the cards for the EDA industry as a whole, and I hope it leads to teams being able to automate everything possible save for one thing: thought. I’d prefer EDA companies leave that to their users.

To close, I appreciate Cadence setting the bar high and opening themselves up to public criticism, which at times has turned to ridicule. It was a gutsy move and I hope it pays off for them. I like EDA360, but here’s my bit of constructive criticism: Cadence (and others), please don’t attempt to automate holistic verification. Continue to build great tools, but leave the ‘holistic’ part to your users.


Enough Already About Collaborating With The Fab!

In the last 2+ years that I’ve dedicated to applying agile methods to hardware development, a big part of my focus has been on using agile to bring design, verification and software developers closer together. In my opinion, we have room for improvement in that area. From the beginning, I’ve seen incremental development as being the key for improvement because it pulls experts together, forcing them to continuously prioritize and exercise what they plan to deliver instead of hunkering down in their cubes and hoping things come together at the end.

But with all the effort I’ve put into this, I’m starting to wonder who else is thinking the same way. Is a lack of meaningful collaboration a problem in SoC development or am I seeing a problem that doesn’t actually exist? I’m starting to question my observations – or imagination perhaps – for a few different reasons.

The big one for me lately has been all the effort dedicated to increasing collaboration between design house, EDA and fab. Now I’m sure the value there is huge, but so much emphasis on collaboration between design house and fab, to me, insinuates that this next level of collaboration is a natural extension of what is already a highly collaborative environment within the design house. Is that true? Are cohesive, collaborative teams and shared priorities the norm in SoC development? Or, for example, are design and verification sub-teams formed and insulated from each other by ambiguous product specifications and bug tracking databases, as well as independent priorities, scheduling, and reporting structures?

It’s also easy to notice all the attention being paid to enabling early software development as software becomes an increasingly dominant component of an SoC. That’s certainly been propelling innovation in ESL design, not to mention emulation and hardware acceleration. But in focusing on those areas, is it being suggested that pulling in software start dates is the missing link to getting successful product out the door? What about the fact that hardware and software tend to be treated as completely independent deliverables? Or that hardware and software development for the same SoC may be controlled by 2 separate groups within the same organization? Do early start dates compensate for that kind of deep-rooted disconnect?

Of course it’s easy to generalize. Not all teams are in the same boat with respect to how they work together. And I’m certainly not suggesting a culture of bickering and infighting. That’s not the point of this at all because that’s something I don’t see. My points relate to the organizational and operational levels and on those levels there are characteristics that SoC teams exhibit almost universally. Splitting into independent functional sub-teams is one example (modeling/architecture, design, verification, software, implementation, validation, etc). A preference for working toward a single big-bang deliverable is another tendency. Software and hardware teams that are separated organizationally is yet another. The list goes on.

The details and extent obviously vary team-by-team but I don’t think I’m making this stuff up. I reckon there are significant gains to be made through a critical look at and restructuring of SoC development teams in the name of stronger collaboration. Take the time to question the barriers you put up between design, verification, software and everyone else that contributes to delivery. Imagine how regular shared goals and an agile approach to development can help break these barriers. If you’re wondering what agile SoC development could look like, you can read an article I co-authored with Bryan Morris called Agile Transformation in IC Development to find out.

And of course…

Despite what the title says, continue to pay attention to collaboration with EDA and fab. Continue to invest in ESL and emulation as a means of expediting software development. I don’t want to downplay either of those because they deserve the attention they’re getting. Just don’t forget to mix in a little time for some other things that are just as important.


Q. What are your thoughts on SoC team structure and how we develop and deliver products? Are we good the way we are or are we due for a change?

Do Agile And Hardware Emulation Mix?

That’s a question I’ve asked myself several times in the last couple of years but never got around to answering, not because I don’t think it belongs but because I don’t have much experience with it.

My opinion: I think emulation belongs in agile hardware development for a few reasons… and I’m choosing to ignore the “faster than simulation” advantage that always comes with emulation. These are better reasons to like emulation:

  • It can be used to encourage application level thought
  • An emulator is a practical environment for hardware/software co-development
  • It provides a platform for connecting real peripherals and test equipment
  • Real peripherals and test equipment provide a better “visual” during customer and stakeholder demos

I think the real value of emulation is in these bullets that focus on the application level view of a piece of hardware, the platform it provides for software developers and how a product is used in a real system (or at least real-ish). If I’m right, then I’d like to think the question becomes how you get to an emulator as soon as possible so you can start realizing this value. I have a feeling this is where agile comes in… incremental development specifically, as well as everything it takes for great cross-functional teamwork.

I have my thoughts on how teams get started with agile development using simulation… I’d like to hear how others think it could work with emulation.


Q. Are you an emulation expert that has seen the path of least resistance when it comes to getting product deployed to an emulation platform? What does it look like?