Agile Requirements: Are We There Yet? Part 1 of [Not Sure Yet]

I’m at the start of a new project. We’re currently determining what features are needed in the end product. This has led me to think a lot lately about how to capture, prioritize and track the completion of these requirements as we progress through the project. Of course, I want to do this in an ‘agile’ way.

The typical agile method is to capture requirements in a set of high-level “user stories”. A user story provides a concise way of describing what the system will do from the user’s perspective. Several user story templates exist; Mike Cohn’s is the one I find clearest and most concise:

“As a <type of user>, I want <some goal> so that <some reason>.”

It captures the who, the what and the why of a feature – providing essential context to assist in the implementation, validation and acceptance of that feature.

User stories are then typically split into progressively smaller use cases or tasks until you establish a firm understanding of what the feature is supposed to do and how your team is going to deliver the functionality the user expects.

I found Cohn’s template particularly effective when writing some tool or script I needed, because it is much easier to comprehend than the typical requirement statement of “The system shall…”.

But… if there’s one aspect of agile that I really think is difficult to translate into an ASIC/FPGA world, it’s the concept of a user story. Perhaps it’s because I simply don’t have the experience in writing user stories. But I think it has more to do with my trouble writing an ASIC story from a user’s perspective. In many cases, a feature does not even provide any meaningful output that is visible to the user, e.g., some esoteric standards requirement about how an interface must be defined.

Then serendipity came through for me, when my former colleague and AgileSoC collaborator Neil Johnson was in town on vacation and wanted to grab a beer. He knew James Grenning, who happened to be in Ottawa at the same time to deliver some training, and invited him along too. I’ll cut it short by saying I had a very interesting conversation with James and Neil. You see, James is an expert in Test Driven Development for embedded systems; in fact, he’s written a book on the topic, Test Driven Development for Embedded C. It was very interesting to discover the similarities that exist between embedded system design and SoC development, including the perceived and real barriers to adopting an agile flow in an embedded design.

To cut to the chase, we got to talking about user stories and how they don’t really fit into ASIC development, when James mentioned that he calls user stories “Product Stories” instead. While it’s a simple semantic change of “user” to “product”, for me it was a lightbulb moment. It helped me identify the issue I’ve been having with ‘user stories’: I’m not implementing something for an end-user; I’m developing a product that will fit into a larger system that will ultimately be seen by a user.

Like user stories, Product Stories are intended to be very high-level descriptions of the features that you want to develop, along with the acceptance testing that proves each feature is what you expect. The key is that these are short descriptions from the system’s perspective. Instead of the who, what and why as in a user story, a product story for ASIC development should include the where, what and why. Specifically, a good requirement contains the following elements:

Behaviour: (the what) A short descriptive story explaining the behaviour at a high level.
Justification: (the why) Some idea on why this is important to the system (unless it’s absolutely clear e.g., some standard protocol);
Context: (the where) Where in the system this feature is needed
Acceptance: simple set of acceptance tests to prove that behaviour is correct.

I’ve been developing the following template, which I’m considering using for my Product Stories; it is similar to the ‘user story’ template above:

“The <context> needs to <do some behaviour> so that <some reason>.”

A product story provides clear guidelines on the functionality expected, as well as a clear indication of what coverage points must be captured before you declare the product story complete. In fact, I think it might be a good idea to state these acceptance criteria as functional coverage points.
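
To make that concrete, here’s a rough sketch of what I mean, using an invented story: “The memory controller needs to absorb back-to-back writes so that the CPU never stalls.” Its acceptance criteria might map onto SystemVerilog coverage points something like this (every name and signal below is hypothetical, purely for illustration):

  module story_coverage;
    localparam int BUF_DEPTH = 16;
    logic clk;
    int   gap_cycles;  // cycles between successive writes (hypothetical)
    int   buf_level;   // current write-buffer occupancy (hypothetical)

    // One coverpoint per acceptance criterion of the story.
    covergroup story_wr_backpressure_cg @(posedge clk);
      cp_b2b_writes : coverpoint gap_cycles { bins none = {0}; }         // AC1: back-to-back writes observed
      cp_buf_full   : coverpoint buf_level  { bins full = {BUF_DEPTH}; } // AC2: write buffer completely full
      ac_together   : cross cp_b2b_writes, cp_buf_full;                  // AC3: both conditions at once
    endgroup

    story_wr_backpressure_cg cg = new();
  endmodule

When every bin is hit (and the checkers pass), you have an objective, tool-reported answer to “is this story done?”.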

As I work through these concepts in my head, I’ll attempt to describe them in my next few blog posts. I’ll talk about how I see these Product Stories being created, developed and tracked in an AgileSoC world.

I’d welcome any comments on this post, particularly on how you are doing your requirements management – especially when requirements are very fluid (which, I’m guessing, is probably most of the time for most of us :-)).


8 thoughts on “Agile Requirements: Are We There Yet? Part 1 of [Not Sure Yet]”

  1. A couple of quick notes:

    1) User Stories were originally just called “Stories” in Extreme Programming, as in “Tell me a story about what this system is supposed to do.”

    2) The “As a …” format actually originated with a project at a now defunct company called Connextra in the UK in 2001, and was popularized by Mike. It isn’t, however, the only way you can write stories!

    James’ suggestion is a very good one, and holds to the intent of stories that they should focus on the business purpose rather than the technical one.

    Another view is that you can substitute Actors for the Role if you have come from the Use Case world. 🙂

    1. Dave:

      Thanks… I hadn’t realized “stories” were ever generic. But I do see the value in adding some context to the plain old ‘stories’, i.e., ‘user’ stories. A simple semantic change can add lots of value (at least to some).
      Re: the user story format, thanks again. You have a wealth of history! Another reason for us hardware folks to take a page (or ten) from the software world.

      Thanks again for the comments.

        1. Unfortunately, that particular page gets torn out of the book sometimes — due to time to market pressures, resource constraints, or [insert laundry list of reasons here]. A second spin of the hardware is almost inevitable these days.

  2. Interesting post. My team has been using Agile practices for FPGA development for 18 months. We too had many of the same problems trying to relate the FPGA capability into user stories. The functionality of the FPGA was so deep in the bowels of the product that it was very hard to relate it to users. The wording of the stories became very contrived, often quite silly, when put into the user story template. As Scrum Master, my credibility was suffering when I tried to get the team and Product Owner to use the “user story” template. While we did not call it as such, we too ended up writing “product stories” rather than user stories. The users of our FPGA functionality were other pieces of h/w and s/w, not human users per se.

    One further point is that I think requirements can go even more granular than the story level. We found that our smallest stories became a collection of requirements, even if it was a small collection. Rarely was a story a single requirement. Here is what we did to manage the extra granularity.

    The team created a Definition of Done (DoD). We like to think of the DoD as something that is common to all stories (with a few exceptional cases). The DoD is a list of the process steps needed to declare a story to be “done-done”. One of the items in the DoD is that we can demonstrate the story Acceptance Criteria (AC) both in simulation and on target h/w. So each story has a small list of Acceptance Criteria which represents the requirement granularity that is finer than the story level.

    While working on a story, product design work and verification “design” (conscious word choice) work are done in parallel. The verification design work includes creating stimulus, updating the behavioural model, and correctness checking. We currently use SystemVerilog and OVM for the verification environment. We use the constrained random capability of SV for the stimulus. For the purpose of demonstrating the story AC in simulation, we take the constrained random code, but massively constrain it until it essentially becomes a set of close-to-directed test cases for the AC.
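
    As a rough sketch of what that looks like (all names here are invented for illustration, not our actual code), a sequence item can carry both the broad legal constraints and a tight “AC demo” constraint that collapses the random space down to the directed-like case:

      `include "ovm_macros.svh"
      import ovm_pkg::*;

      class pkt_item extends ovm_sequence_item;
        rand bit [7:0] length;
        rand bit [3:0] channel;
        rand bit       error_inject;

        // Broad constraints used for general constrained-random stimulus.
        constraint c_legal { length inside {[1:255]}; }

        // Tight "AC demo" constraint: squeezes the stimulus down to
        // essentially a directed test for the story's acceptance criteria.
        constraint c_ac_demo { length == 255; channel == 0; error_inject == 0; }

        `ovm_object_utils(pkt_item)
        function new(string name = "pkt_item");
          super.new(name);
        endfunction
      endclass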

    Taking it one step further, we use the functional coverage coding constructs in SV (cover points/groups) and write coverage measurement code that corresponds to the AC (aka requirements) for the story. We are thus measuring the ability of the constrained random stimulus (even though severely constrained) to create the conditions necessary to demonstrate the AC. In a sense the stimulus and checking in the verification environment are used to pass judgement on the product design, and the functional coverage is used to pass judgement on the quality of the stimulus. Two orthogonal views of the verification.
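
    Continuing the hypothetical pkt_item sketch above, the coverage that passes judgement on the stimulus can sample the generated items directly and confirm the AC conditions were actually produced:

      class pkt_coverage;
        // Sampled once per generated item: did the stimulus actually create
        // the conditions the acceptance criteria call for?
        covergroup pkt_cg with function sample(pkt_item it);
          cp_len : coverpoint it.length  { bins max_len = {255}; } // AC: max-length packet generated
          cp_ch  : coverpoint it.channel { bins ch0     = {0};   } // AC: on channel 0
          ac_hit : cross cp_len, cp_ch;                            // AC: both in the same item
        endgroup

        function new();
          pkt_cg = new();
        endfunction
      endclass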

    Once the story is demonstrated and declared to be “done-done”, both the product code and verification code are submitted into the release branch of our code configuration management tool. We further modify the constraints on the stimulus to make them looser than what was used for the purpose of verifying the story. We then have automated simulation regressions that run every night with this looser set of constraints so that we can get deeper corner case coverage over time for the story just submitted. The automated regressions use a different seed value every night so that the stimulus is not identical night to night, but has similar properties. We thus accumulate coverage over time.
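
    The loosening itself can be as simple as disabling the tight constraint (again using the hypothetical pkt_item from the sketch above), leaving the broad legal constraints in charge while the regression script supplies that night’s seed to the simulator:

      module nightly_regression_tweak;
        initial begin
          pkt_item item = pkt_item::type_id::create("item");
          item.c_ac_demo.constraint_mode(0); // drop the tight AC-demo constraint
          assert (item.randomize());         // now explores the full legal space
        end
      endmodule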

    We also have a “stop and fix” mentality for the regressions. If an overnight regression fails, it is the top priority for the team to fix what has been broken. The seed value for the next regression stays the same until the regression tests once again pass, then the seed values start advancing again.

    In summary, we believe that a hierarchy of stories and Acceptance Criteria is a good method to capture requirements, and we have mapped these concepts into our overall development environment and practices for Agile.

    Cheers … Alan

    1. Alan:

      Thanks for the very detailed response. You guys are definitely ahead of most of the industry in adopting these “new” ideas. I especially liked the DoD: I’ve heard about it in other papers, but never seen it presented in such a concrete way. My thinking on this one (as will be expanded in subsequent posts) was to take that checklist and have it become part of a Kanban board on a whiteboard, i.e., the columns of the Kanban board are the steps in the checklist. Admittedly, as our team has been building the list, the number of steps (each representing a best practice) continues to grow, probably beyond the point where a normal-sized whiteboard could hold that number of columns. So either I use multiple whiteboards, or reduce the granularity of the columns. The first is acceptable, the second not so much.

      So your post has got me thinking that since the AC will differ from task to task, perhaps putting it all into one generic Kanban board is not the way to go. I’ve got some thinkin’ to do.

      Thanks again for your thoughtful response.

      1. Bryan, that is an interesting thought about using Kanban for managing the Definition of Done (DoD). I am only mildly knowledgeable about Kanban so I would have to do some reading and thinking to see if this is a valuable way of managing things.

        While this should be a simple concept, it took our team a while to “get it”. The DoD and Acceptance Criteria (AC) are not the same thing. We were using the terms interchangeably and it did add to some early confusion in the team. Once we did get it, we realized that the DoD is, in general, identical for each story. I say in general because there can be some exceptional cases, but those should be minimized. The AC, however, is unique for each story. Demonstrating, or testing, or verifying, or whatever you call it, confirming that the AC has been satisfied is one item on the DoD.

        For now we manage the DoD as a simple checklist. I won’t give our exact DoD but you can envision it is something like this:
        – Design code submitted into release branch
        – Verification code submitted into release branch
        – All code passes lint checking and defects have been corrected
        – All code has been inspected and defects have been corrected
         – All tests for the story pass dynamic simulation and are confirmed with Functional Coverage (note that the tests and Functional Coverage align with the AC).
        – The submitted code has passed at least one overnight regression suite, including extensions made for this story (new stimulus, widening of constraints, plus all previous stimulus).
        – The AC has been demonstrated on target h/w (easier to do this with FPGAs than ASICs).

        So if it is a simple checklist, it may not be necessary to manage this with Kanban. I would have to give this further thought, however.

         The AC is unique per story. Other than asking if the AC has been satisfied, our DoD list does not track AC specifics. Rather, we track this in our verification plan for simulation, and can confirm that the AC has been satisfied with the simulation scoreboard and transcript, plus the Functional Coverage report. All the simulation artifacts are automatically saved and are accessible via a Wiki page. For tracking that the AC is satisfied on target h/w, we are currently using a manual spreadsheet, although we are looking into a Wiki format for this as well.

        Cheers … Alan

  3. By the way, I did get to meet James Grenning when he was in Ottawa several weeks back. His views on Agile and testing, particularly TDD, were both informative and inspiring.

    Alan
