So this is one of those posts where, after a short conversation with a colleague, something jumps into my head and I end up asking myself, "I wonder if this makes sense?" The idea has to do with formal verification, which is not my area of expertise, so I figured the best thing for me to do is just get it out there. From there, real experts can discuss whether or not it makes sense (or maybe it's something experts already do, in which case I'm late to the party and would appreciate somebody straightening me out :)).
You’re in a training course. It’s noon on Friday and more than four days have just flown by. You’ve covered several different topics; some you like and some you don’t. A few you want to start using; others not so much. You’ve learned a lot of new things and it’s been a great week. On Monday though you’re going to get thrown into the deep end, all by yourself, where you’ll need to apply your new knowledge. What do you do?
This is what we did.
In early December I delivered a week of hardware TDD training. It was a good week with lots of questions and discussion (not to mention a lot of TDD'ing). At the conclusion of the week, we needed a way to help the team take what we practiced and carry it over to the following Monday and beyond. We had a few choices:
Fair to say that what we've posted on AgileSoC.com to date is decidedly pro-agile. Bryan, myself and the guest bloggers we've had thus far believe in agile hardware development, so we haven't spent much time talking about why agile hardware wouldn't work. No surprise there. But when you're getting a steady diet of opinions from one side of an argument, it can be easy to forget that there can be some very practical arguments on the other side. Today, after a little cajoling from Bryan over the past year, Mike Thompson from Huawei in Ottawa brings a little balance to AgileSoC.com by examining the flip side of the coin.
Up until now, we've been discussing the justification for using TDD in an ASIC development flow. Hopefully, we've convinced you to try it. In this post we'll introduce a TDD framework that has been developed for SystemVerilog to help you use this design technique.
A couple of weeks ago, just after we got started with TDD month, Neil added the link to the posts on several industry forums, and got this comment from Alex Gnusin on the Verification Guild:
“Is it a Designer responsibility to test each line of code? In this case, there is a need to provide designers with working methodology to verify their code…”
To which Neil responded:
Alex: I’m not sure if you guessed we’d be covering the topic of a working methodology – aka: unit test framework – but if you did, I’d like to thank you for that nice bit of foreshadowing!
Alex is right: a proper framework is pretty important for anyone doing TDD, primarily because it gives you the opportunity to get up and running quickly.
In the software world, there are a number of wildly popular unit test frameworks. JUnit might be the best known (for those doing Java development) but there are about a million others (as you can see with a quick trip over to Wikipedia). A unit test framework is critical for TDD, which is why Rob Saxe and I (both formerly of XtremeEDA) put one together a couple of years ago for people wanting to do TDD with SystemVerilog.
First presented at SNUG San Jose in 2009, SVUnit is a unit test framework that provides:
- structures for creating and running unit tests
- utilities for reporting test status and results
- a test "runner" that executes a set of test suites and reports on their overall success/failure
- scripts to automate the mechanical aspects of creating new unit test code
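To give a feel for what that looks like in practice, here's a minimal sketch of an SVUnit test module. The macro names (`SVTEST`, `FAIL_UNLESS`, etc.) come from SVUnit's documented interface, but the exact details may differ from the version presented at SNUG, and the `counter` unit under test is hypothetical:

```systemverilog
`include "svunit_defines.svh"

// unit tests for a hypothetical up-counter with clk/rst/count ports
module counter_unit_test;
  import svunit_pkg::svunit_testcase;

  string name = "counter_ut";
  svunit_testcase svunit_ut;

  logic clk = 0, rst;
  logic [7:0] count;
  counter uut (.clk(clk), .rst(rst), .count(count));  // the unit under test

  always #5 clk = ~clk;

  function void build();
    svunit_ut = new(name);
  endfunction

  task setup();
    svunit_ut.setup();
    rst = 1; @(posedge clk); rst = 0;  // put the counter in a known state
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN
    `SVTEST(reset_clears_count)
      `FAIL_UNLESS(count == 0)
    `SVTEST_END

    `SVTEST(count_increments_each_clock)
      @(posedge clk);
      `FAIL_UNLESS(count == 1)
    `SVTEST_END
  `SVUNIT_TESTS_END
endmodule
```

The framework's runner and scripts take care of building, executing and reporting on modules like this one.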
Whenever I describe the agile practice of pair programming I usually get the same general reaction, which is something like "I don't think I'd like to do that". My usual attempt to convince someone that it could be an interesting tool to try is to suggest that a verification team could do it for the complicated sections of code, or that a designer and verification engineer could pair program to develop the drivers. While good reasons, these were never enough. This post will be another attempt on my part to show the value of pair programming, to provide some suggestions on where pair programming could be effective for you, and to give some reasons why pair programming may not be effective.
A handy bit of guidance that I've gleaned from books on lean product development comes via the recognition that unfinished work sitting in someone's queue, which in lean manufacturing lingo is called inventory, is waste in your development process. Lean software practitioners take things a step further to describe untested and/or unreleased code sitting on a file server as inventory. I reckon doing the same can benefit us in hardware development.
It makes sense to me that built up inventory can be responsible for poor productivity or quality. Why? Think about it… do you do a better job when the amount of work in your queue is manageable or overwhelming? Do you work better when someone shows up at your cube with 10 feature requests or 1? Do you work better under the weight of 25 outstanding bug reports or no bug reports?
I’ll take manageable over overwhelming any day so I can see minimizing the amount of untested and/or unreleased code… I mean inventory… in your development process over time is a good thing. Minimize inventory and you’re minimizing waste. Minimize waste and you’re maximizing productivity and quality.
How about an example that applies to hardware development?
This post is a little different than what I've been doing here so far because it's directed at only one person. This person is new to agile. We've talked about it a few different times, and the last time we talked she mentioned:
“most of what you have on AgileSoC.com seems to assume you already know about agile… I wish you had some stuff for beginners.”
Point taken. Here’s a post just for you… and any other beginners that want to stick around.
This post is slightly off-topic from our usual AgileSoC theme; instead it's aimed at you Emacs folks who use org-mode and have been asking yourselves "How can I combine Kanban and org-mode?" Here's how I did it. It's not rocket science, but it is very handy.
OK, I admit that I'm an Emacs geek. I've used Emacs for a very long time, and have always been amazed at how many powerful, useful features are created for it. The one feature that I've used for years is org-mode. Org-mode is an organizer, note taker, outliner, planner, etc. Check out the web-site (orgmode.org). It has so many features, I'm sure that I don't know them all.
I use org-mode for three things:
- tracking my tasks;
- tracking the time associated with each task; and
- keeping a short journal of what I do every day (including any research).
All this information is located in one spot and org-mode allows me to keep it organized and easily searchable (it’s a text file).
Being an Agile-Emacs geek, I've been trying to use web-based and desktop systems to keep track of my personal to-do lists using Kanban. For the stuff I do, I found Kanban provides me with the best way to track and report what I'm doing. But despite being mildly impressed with many of these tools, I never got into using them because they required me to get out of Emacs and into a browser. Using a GUI after being used to straight text entry was also a little more trouble than I wanted. Lastly, the data is usually stored in a proprietary format, which means my data is now stuck in a proprietary format.
Being an Agile-Kanban-Emacs geek, my next step was to Google “emacs org-mode kanban”. No luck. What follows is my solution to creating a Kanban board using org-mode.
Kanban Board meet Org-mode Tables
org-mode provides an awesome table editing facility. Once you've defined what the table looks like, it automatically resizes the table to accommodate any new text you add. My first step in creating my Personal Kanban board was to create an org-mode table.
My workflow (i.e., the columns in the header row) is pretty generic and straightforward:
- Backlog (product stories not started yet);
- Analysis (think, think, think);
- Implementing (do, do, do);
- Debugging/Testing (ensuring that it’s right); and finally
- Done (proven correct and accepted).
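The empty board is just an org-mode table with those five stages as the header row. The exact layout below is my approximation of what the screenshot shows:

```
| Backlog | Analysis | Implementing | Debugging/Testing | Done |
|---------+----------+--------------+-------------------+------|
|         |          |              |                   |      |
```

Typing TAB inside the table moves between cells and triggers org-mode's automatic re-alignment.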
Product Story meet Org-Mode Internal Links
Now I need to add my Product Stories to my backlog. For this I use the org-mode markup language to add internal hyperlinks. Links are created in org-mode using the format `[[link]]`; the target is simply `<<target>>`. Here's a picture of my first Product Story added to the table.
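Since the screenshot doesn't reproduce here, this is roughly what the markup looks like: a link in the Backlog column pointing to a matching target on a headline elsewhere in the same file (the layout is my reconstruction):

```
| Backlog             | Analysis | Implementing | Debugging/Testing | Done |
|---------------------+----------+--------------+-------------------+------|
| [[Product Story 1]] |          |              |                   |      |

* [#A] <<Product Story 1>>                                  :BACKLOG:STORY:
```

Clicking the link in the table jumps the cursor to the `<<Product Story 1>>` target line.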
As you can see, org-mode has done several things:
- readjusted the table to accommodate the new “Product Story 1” link in the Backlog column;
- translated my input of the link `[[Product Story 1]]` to the `Product Story 1` hyperlink. NOTE: if you click on this link with your mouse, the cursor moves to the target line indicated by `<<Product Story 1>>`.
I use the org-mode tag `:STORY:` so that I can filter on it for reporting, and I use the priority `[#A]` to indicate the importance of this story.
The next thing I do is add the sub-tasks that make up the Product Story, and a History entry to capture what the heck I did for each sub-task.
I continue to add new Product Stories and sub-tasks as above until I've got a decent-sized Backlog.
Now, I start working. I’ll move Product Stories across my work flow, update history and complete actions. Here’s a snapshot where I’ve done some work.
OK, that picture is getting a little busy now, but the interesting elements are:
- the Kanban board automatically re-sized as I moved the Product Story links from column to column;
- I can use the org-mode folding capability (notice that `<<Product Story 1>>` ends with an ellipsis, i.e., ":BACKLOG:STORY:…"; the ellipsis indicates that some text has been folded, or is currently hidden from view. Use the tab key when the cursor is on the line to toggle the fold/unfold);
- I'm using the org-mode time-tracking feature to capture how much time I spend on each Product Story (captured in the lines indicated with `CLOCK:`). org-mode has a way to generate a report summarizing the time spent on each Product Story.
- There’s even an iPhone app (http://mobileorg.ncogni.to/) so that I can update my Personal Kanban board on my iPhone. You can’t get sexier than an iPhone App; unless you’re not an Agile-Kanban-Emacs-iPhone geek. 😉
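The `CLOCK:` lines mentioned above are written by org-mode when you clock in and out of a task (`C-c C-x C-i` and `C-c C-x C-o`), and a clocktable dynamic block summarizes them. It looks something like this (the timestamps are made up for illustration):

```
* [#A] <<Product Story 1>>                                  :BACKLOG:STORY:
  CLOCK: [2011-06-06 Mon 09:12]--[2011-06-06 Mon 10:47] =>  1:35

#+BEGIN: clocktable :maxlevel 2 :scope file
#+END:
```

Pressing `C-c C-c` on the `#+BEGIN:` line tells org-mode to fill in the summary table between the `#+BEGIN:` and `#+END:` markers.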
I’ve found it extremely easy to use Kanban and org-mode to capture and track my work. If you’re a power org-mode user and can suggest other features of org-mode that would be useful in this context — I’d be all ears.
Pomodoro meet Emacs
Just to be clear: I've been in management for over 15 years, and I pride myself on never having micro-managed anyone I've supervised (for those of you who I have worked with, here's your opportunity to contradict me). However, I love to micro-manage myself. I'll break down any significant work until I have a set of tasks around 0.5 to 1.0 days in duration. I need that for two reasons:
- I’m too methodical (aka ‘anal-retentive’) to not capture a plan;
- I have the attention span of a gnat. If there's a "bright shiny object"… I'll be attracted to it. Creating a set of short sub-tasks means that I can progress toward the ultimate goal, then as a reward take a break and go look at that bright shiny object for a bit, and then go on to the next task.
For the second issue, focus, I've been using the pomodoro technique for several months. Go check out the web-site for more details on how to use this technique. Again, I tried browser-based, desktop and iPhone applications to track my pomodoros for me, but they were even worse than the Kanban applications I alluded to above for keeping me on track. As most of my day is spent looking at an Emacs screen, that's where I need to be notified when a pomodoro is over. Once again, some kind Emacs elisp coder created pomodoro.el. This allows me to start a pomodoro timer within Emacs; it automatically starts the short and long breaks and the next pomodoro, allows me to stop and rewind the pomodoro (for when I'm interrupted), and keeps a crude count of how many pomodoros I've completed. Check out the elisp code for more details.
You may find using pomodoro will drive you nuts… or you may love it. But if you’re an Agile-Kanban-Emacs-iPhone-Pomodoro geek, you’ll probably love it.
In a previous post, When Done Actually Means DONE, I shared a slide that I'll present at Agile2011. I use it to illustrate the differences between waterfall and agile development models in the context of hardware development. After posting it, the first response I got, from AgileSoC guest blogger @rolfostergaard, was that a few examples could maybe make it even clearer.
Thanks Rolf. Good idea.
In case you haven't read When Done Actually Means DONE, I've included the slide again in this post to get things started. I use it to show that there are different ways to describe how done you are depending on the development model you're using. If you're basing progress on tasks you've completed, you're using done to measure progress. If you're basing it on features, you're using DONE.
What’s the difference? Being done means you’ve hit a milestone that won’t hold water mainly because there’s no way to objectively measure its quality. You may think you’re DONE, but without tests or some other reasonable way to measure quality, you’ll very likely need to come back to it. For that reason, done is misleading and it gets people into trouble.
DONE means you’ve hit a milestone that you can unambiguously quantify (or at least quantify with far less ambiguity). Here, you’re confident that what you’ve just finished will require very little or no follow-up because you can see and measure results.
In short, done isn’t done at all… but DONE is. Confused? Here’s where a few examples might help.
My RTL Is Done
Classic. Your design team is under pressure to meet a scheduled RTL-complete project milestone. As always, it's a highly visible milestone to the development team, management and possibly even the customer, because it comes with the connotation that the product is nearly finished… save for the minor details that it hasn't been verified or pushed through the back-end. The RTL is done though, so that's great. Cross it off the list!
My Test Environment Is Done
This is a close second to my RTL is done. Your verification team has finished its test environment, and supposedly all that's left is writing and running tests. Of course, there's been very little done to confirm that the test environment does what it's supposed to do. That becomes immediately obvious when running the first test: the configurations are invalid, the stimulus transactions are poorly formed, the BFMs don't obey protocols and the model is outdated; all unfortunate because now people are anxiously expecting results that the environment can't quite deliver yet! Sure the test environment is done… except for everything that doesn't work.
Feature <blah> is DONE
Now we're getting somewhere. No, your RTL isn't done. No, your verification environment isn't done. But who cares! You have something better: a small portion of both is DONE and that's enough to run tests and collect coverage results that verify feature <blah> is ready to go. No ambiguity there. The feature works and you have the passing tests to prove it. You're delivering something that's DONE.
(Ideally, you would have passed the design to the physical team as well. But given that you've made a big step forward in terms of credibility and confidence relative to the first two examples, we'll forget about the physical design for now.)
The Software API Is Done
A hardware team normally implements an API according to a spec received from the software team. After the hardware team is done, it's assumed that sometime later the software team will build drivers and applications on top of the API and release the finished product. The problem is, the initial API was a best guess from the software team, and early in their development the team finds that the API that's been sitting done for a couple of months doesn't give them the access to the hardware that they need. Sure the API was done, but until it's updated, system performance is seriously restricted and some functionality is completely absent.
The Software Demo Is DONE
An SoC, by definition, is a part hardware, part software solution. So why settle for an API that's done when the software is required for delivery? As the hardware team completes API functionality, give it to the software team so they can actually use it. Deliver it as a C model, an emulation platform or some other form that makes sense. Use this demo version of the entire solution (hardware + software) to judge whether or not you're DONE.
I’m Still 90% Done
I’ll end with a personal favorite that kind of fits into this discussion. Like everyone else, I’ve played this card many times. What does 90% done mean? It means you think you’re almost DONE but you really have no idea because there’s no way of knowing for sure. Before you say it again, do yourself a favor:
- admit that you don't know how DONE you are
- find a way to measure what you think is DONE
- isolate what isn't DONE
I’m going to try and follow my own advice on that one :).
Done is a false milestone. It's ambiguous. It's one-dimensional: you may have written some code, but that's it. There's no reliable way to measure done, and teams that measure progress in terms of done eventually find they're not as DONE as they thought.
DONE, on the other hand, comes with results. It's multi-dimensional: you've written some chunk of code and it's been tested, so you know it works. DONE is measured in passing simulations, software demos and any other means that objectively confirm the code you've written is high quality. Teams that measure progress in terms of DONE know how far they've come and how far they have to go.
Done is a feeling. DONE is progress.
Q. What examples of done do you see in SoC development?