By Example: Done vs. DONE

In a previous post, When Done Actually Means DONE, I shared a slide that I’ll present at Agile2011. I use it to illustrate the differences between waterfall and agile development models in the context of hardware development. After posting that, the first response I got, from AgileSoC guest blogger @rolfostergaard, was that a few examples could make it even clearer.

Thanks Rolf. Good idea.

In case you haven’t read When Done Actually Means DONE, I’ve included the slide again in this post to get things started. I use it to show that there are different ways to describe how done you are depending on the development model you’re using. If you’re basing progress on tasks you’ve completed, you’re using done to measure progress. If you’re basing it on features, you’re using DONE.

What’s the difference? Being done means you’ve hit a milestone that won’t hold water mainly because there’s no way to objectively measure its quality. You may think you’re DONE, but without tests or some other reasonable way to measure quality, you’ll very likely need to come back to it. For that reason, done is misleading and it gets people into trouble.

DONE means you’ve hit a milestone that you can unambiguously quantify (or at least quantify with far less ambiguity). Here, you’re confident that what you’ve just finished will require very little or no follow-up because you can see and measure results.

In short, done isn’t done at all… but DONE is. Confused? Here’s where a few examples might help.

My RTL Is Done

Classic. Your design team is under pressure to meet a scheduled RTL complete project milestone. As always, it’s a highly visible milestone (to the development team, management and possibly even the customer) because it comes with the connotation that the product is nearly finished… save for the minor details that it hasn’t been verified or pushed through the back-end. The RTL is done though, so that’s great. Cross it off the list!

My Test Environment Is Done

This is a close second to my RTL is done. Your verification team has finished its test environment and supposedly all that’s left is writing and running tests. Of course, very little has been done to confirm that the test environment does what it’s supposed to do. That becomes immediately obvious when running the first test: the configurations are invalid, the stimulus transactions are poorly formed, the BFMs don’t obey protocols and the model is outdated; all unfortunate because now people are anxiously expecting results that the environment can’t quite deliver yet! Sure, the test environment is done… except for everything that doesn’t work.
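
The fix doesn’t have to wait for a finished environment, either. Objective evidence can start accumulating from the very first simulation. As a minimal sketch (the module, interface and signal names here are all invented for illustration), even a single protocol assertion on a BFM interface starts measuring quality instead of assuming it:

    // Hypothetical protocol check: objective, always-on evidence that the
    // BFM obeys one of the handshake rules it's supposed to enforce.
    module bfm_protocol_check (
      input logic clk,
      input logic req,   // request driven by the BFM
      input logic ack    // acknowledge returned by the design
    );
      // common handshake rule: once asserted, req must hold until ack arrives
      property req_holds_until_ack;
        @(posedge clk) (req && !ack) |=> req;
      endproperty

      a_req_holds : assert property (req_holds_until_ack)
        else $error("BFM dropped req before ack was seen");
    endmodule

A check like that doesn’t prove the environment is DONE, but it does replace “supposedly” with something a simulator can confirm.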

Feature <blah> is DONE

Now we’re getting somewhere. No, your RTL isn’t done. No, your verification environment isn’t done. But who cares! You have something better: a small portion of both are DONE and that’s enough to run tests and collect coverage results that verify feature <blah> is ready to go. No ambiguity there. The feature works and you have the passing tests to prove it. You’re delivering something that’s DONE.

(Ideally, you would have passed the design to the physical team as well. But given that you’ve made a big step forward in credibility and confidence relative to the first two examples, we’ll forget about the physical design for now.)
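
What does an objective DONE criterion look like in practice? Here’s a minimal SystemVerilog sketch; the feature, its modes and the sampled signals are all hypothetical. The point is that DONE is something a simulator can answer (passing tests plus closed coverage) rather than a judgment call:

    // Hypothetical coverage model for feature <blah>: DONE means every
    // legal configuration has been exercised by a passing test.
    class blah_coverage;
      covergroup blah_cg with function sample(bit [1:0] mode, bit error_seen);
        mode_cp      : coverpoint mode;        // all four modes exercised
        error_cp     : coverpoint error_seen;  // error and error-free paths seen
        mode_x_error : cross mode_cp, error_cp;
      endgroup

      function new();
        blah_cg = new();
      endfunction

      // the objective DONE check: tests pass AND coverage is closed
      function bit is_done(bit all_tests_passed);
        return all_tests_passed && (blah_cg.get_coverage() == 100.0);
      endfunction
    endclass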

The Software API Is Done

A hardware team normally implements an API according to a spec received from the software team. After the hardware team is done, it’s assumed that sometime later the software team will build drivers and applications on top of the API and release the finished product. Problem is, the initial API was a best guess from the software team, and early in their development, the team finds that the API that’s been sitting done for a couple of months doesn’t give them the access to the hardware that they need. Sure, the API was done, but until it’s updated, system performance is seriously restricted and some functionality is completely absent.

The Software Demo Is DONE

An SoC, by definition, is a part hardware, part software solution. So why settle for an API that’s done when the software is required for delivery? As the hardware team completes API functionality, give it to the software team so they can actually use it. Deliver it as a C model, an emulation platform or some other form that makes sense. Use this demo version of the entire solution (hardware + software) to judge whether or not you’re DONE.
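
As one sketch of how that handoff might work, the same C model the software team codes against can be imported into the testbench through DPI, so hardware and software exercise a single definition of the API. Everything below (function names, registers, addresses) is invented for illustration:

    // Hypothetical DPI hookup: the testbench calls the same C model the
    // software team is building their drivers against.
    module software_demo_check;
      import "DPI-C" function int blah_reg_write(int unsigned addr, int unsigned data);
      import "DPI-C" function int blah_reg_read(int unsigned addr, output int unsigned data);

      initial begin
        int unsigned status;
        // drive the model exactly the way a driver would
        void'(blah_reg_write(32'h0000_0010, 32'h1));  // hypothetical enable register
        void'(blah_reg_read(32'h0000_0014, status));  // hypothetical status register
        if (status[0] !== 1'b1)
          $error("demo: feature did not come up as expected");
      end
    endmodule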

I’m Still 90% Done

I’ll end with a personal favorite that kind of fits into this discussion. Like everyone else, I’ve played this card many times. What does 90% done mean? It means you think you’re almost DONE but you really have no idea because there’s no way of knowing for sure. Before you say it again, do yourself a favor:

  1. admit that you don’t know how DONE you are
  2. find a way to measure what you think is DONE
  3. isolate what isn’t DONE

I’m going to try and follow my own advice on that one :).

Done is a false milestone. It’s ambiguous. It’s one-dimensional: you may have written some code, but that’s it. There’s no reliable way to measure done, and teams that measure progress in terms of done eventually find they’re not as DONE as they thought.

DONE, on the other hand, comes with results. It’s multi-dimensional: you’ve written some chunk of code and it’s been tested, so you know it works. DONE is measured in passing simulations, software demos and any other means that objectively confirm the code you’ve written is high quality. Teams that measure progress in terms of DONE know how far they’ve come and how far they have to go.

Done is a feeling. DONE is progress.

-neil

Q. What examples of done do you see in SoC development?

Applying Agile To IC Development… Coming To A Webex Near You!

“Applying Agile To IC Development… We’re Not That Different After All” is part of the program for Agile2011, Aug 8-12 in Salt Lake City, UT. I do plan to make video available to www.AgileSoC.com visitors… but video never seems to be as good as the real thing!

If you’re part of a group that’s interested in hearing the talk, feel free to get a hold of me at neil.johnson@agilesoc.com!

-neil

If Developing Hardware Was Like Going On A Hiking Trip…

…where do you think you’d end up?

Here’s a goofy story I put together as part of an AgileSoC article that likens some of the daily trials and tribulations of hardware development to a day out in the wilderness. Seems some of the odd things we end up doing never look odd until you see them done somewhere else!


Imagine yourself on a day-long hiking trip with your team. It is early morning and everyone has gathered at the office, each with a backpack holding a day’s worth of food and water. A driver in a 15-passenger van pulls up. The team piles into the van and heads out to point A, a secluded little place just south of the middle of nowhere. The goal for the team is to lay claim to point B on behalf of the company, then call in for a ride home.

From point A, point B is about 102,000 paces at a bearing of 43.7 degrees. Since no one on the team has ever been there, your executive team (which coincidentally has never been there either) has held information sessions to describe point B. Only half the team has attended these sessions due to conflicting responsibilities.

After arriving at point A, the first thing the team does is split into four small groups. Based on their understanding from the information sessions, each group devises a plan for how it intends to get to point B: one group draws a map, two others write detailed, step-by-step directions and the last group talks about how to get there but does not end up with an exact plan. There is some chatter and discussion between groups, but in the end, each decides independently how best to get to point B. Plans in hand, everyone agrees to meet at point B at 5:00pm and the groups strike off.

Just as they get started, a company executive speeds out to point A (in his Ferrari) and calls one group back. He needs them to stay behind for an hour to clean out the van so it can be returned to the rental agency. As they clean the van, the executive makes a comment about point B being 5,200 paces north of where it was originally suggested, though he also admits he cannot be certain. He then jumps back in his Ferrari, offers his best wishes to the groups and speeds off.

And so it follows that each group makes its way toward point B, one starting an hour later than the others and each following its own plan. The groups do not see each other all day.

Only one group shows up at point B by 5:00pm. At 6:00pm, a second group shows up. Shortly thereafter and out of breath after having sprinted the final mile, the third group arrives. The last group, however, never arrives.

At 8:30pm, the three groups at point B scramble around nervously until eventually they stumble into the final group, which coincidentally was waiting roughly 2,500 paces away. It is now 10:00pm and darkness has set in. The newly rejoined team resigns itself to the fact that it will not be able to find its way back to point B, nor lay company claim to it, so they call the executive team with their new coordinates for a ride home.

The next morning, a different driver in a different van finally shows up to take the team back to the office (changing companies saved 20% on the cost of the rental!). Your team, now ragged, tired and hungry, staggers into the van and heads back. The team arrives at the office to the deflating news that point B was not point B at all (it was actually point C). Your executive team has decided that you and your team will be leaving the next morning to lay claim to what is now certainly and undeniably point B; no question about it.


neil

Q. Have any experiences that you’d add to a day like this?

Help Us Embedded Software Developers… You’re Our Only Hope

Agile2011 will be my first time talking to a crowd of embedded software developers. I already know tossing my hat in the ring has been a good idea because the whole experience has me thinking from a different direction. I’m starting to see the necessity of better collaboration between hardware and embedded software experts on SoC development teams, but it didn’t really gel until I started thinking in terms of system functionality and usability.

I’ve used this slide with a generalized design flow once before with a mishmash group of software developers to illustrate how our respective disciplines tie together. It’s a 5,000 ft view of SoC development meant to show how we work on two different ends of the development spectrum.

We hardware developers live down in the weeds. We see things like gates, counters, filters, packet transformation, the details of an interrupt mechanism and pin-level communications protocols. Our deliverables are normally captured in a hardware specification and our job is to build the hardware such that it’s functionally correct before it’s taped out. We are not users, nor do we have much direct contact with users, so functional correctness is probably the best we can do.

Software developers are at the far end of the spectrum. Gates and pin-level protocol mean nothing to them. Their job is to take the hardware we’ve given them and make it usable for our customer.

A potential problem with this situation, in case you haven’t seen it yet, is that hardware that’s functionally correct doesn’t necessarily translate to hardware that’s usable. We hardware developers can stew over clock-by-clock details that have zero impact on the system while brushing off details that cause headaches for software developers or even cripple the system entirely. Corner cases for us can be primary functionality for them.

So how can we close the gap between functionality and usability? I think there’s work to do on both sides of the fence.

Embedded Software Developers

  1. Get involved early with the design of the hardware and stay involved
  2. Help your hardware teammates think in terms of system-level usability
  3. Include hardware developers in customer correspondence and see that issues of usability are broadcast to the entire team
  4. Develop software as the hardware is being developed so that your feedback can be used to tune the hardware

Hardware Developers

  1. We aren’t users so it’s difficult for us to think like users. Realize that in terms of usability, we generally don’t even know what we don’t know
  2. Help your software teammates understand the possibilities and limitations of what you can do in hardware
  3. Work together with software developers during design and implementation of APIs instead of guessing what software developers might need or waiting for them to tell you
  4. Supply software developers with early prototypes or models to enable early software development
  5. Don’t optimize hardware without considering the impact on the software

Management/Leaders

  1. Treat hardware and software developers working on the same SoC as part of the same team
  2. Ensure hardware and software teams are co-located (or at least partially co-located) so communication is productive
  3. Use a common reporting structure for hardware and software developers to avoid personnel conflicts
  4. Enable and strongly encourage co-development of hardware and software

All these are things I thought of while putting together my Agile2011 talk. I like the functionality and usability labels because they emphasize that the focus of hardware and software experts is fundamentally different. I’m hoping that bringing functionality and usability together underscores the critical importance of early collaboration with software developers. They understand the system and interact with users in ways that we don’t.

Considering everything they know and everything we don’t, if we plan on having happy customers, they’re our only hope!

neil

Guest Blog: Demo Driven Technology Projects

By: Rolf V. Ostergaard

Far too often, technology projects lead to a lot of fun work with very low productivity and less useful results. A demo-driven, scrum-like approach is a good way to fix that.

A technology project is…

Let me start by describing what I term a “technology project”. Most development is directly focused on building a product, often with a lot of focus on cost, schedule, manufacturability, etc. Risk is avoided, and solutions that fit the cost and schedule with low risk are preferred.

A technology project is intended to take more risk, do something new, and do this in a setting where the development of new technology is the goal. This is done outside of product development projects to make the risk acceptable. Done right, this actually reduces the risk of one or more subsequent projects, where the technology can be used in new products.

Most engineers love technology projects. For the same reasons, managers often use them to create variation in the work for the engineers. It’s like a treat for engineers. Sounds a bit nerdy, I know. But it works.

Show, don’t tell

The problem with many technology projects is that they can feel less important than “real” product development. But the contrary is true: they are more important. Without technology projects, innovation suffers and revolutionary new products become too risky to do.

Our solution to this is a scrum-like approach with demos. Each two-to-four-week iteration should demonstrate something, preferably physical and preferably progress. The mantra here is “show, don’t tell”.

Sometimes it requires a lot of creativity to “show” something that can’t be built as a physical object yet. But eventually it gets there and the demos become more and more interesting.

At other times the progress is negative: we are in a dead end and need to dig ourselves out again. That’s good too, because the cost of doing this in a technology project is much lower than in a tight product development project. That is risk reduction at work.

We would prefer to always hold the demo, regardless of the results. “Negative progress” requires more understanding from all stakeholders, and a demo is really a good way of getting everybody to understand.

If you read about Scrum and demos, you will hear a lot about how important it is to demo the real working product in a realistic user setting. I think this is beside the point here. For projects involving hardware, software, FPGA/ASIC, mechanical and other aspects, this may be completely unrealistic. Making a demo, any demo, is however always possible. And this is much better than not trying. More about making a demo, any demo, will come in a subsequent entry.

Get feedback

Part of the purpose of a demo is collecting feedback: from other engineers on ideas for improvement, potential risks, things to try and test conditions to consider, and from the other stakeholders on where the value is, what would improve the solution, etc.

Most technology projects tend to have one or two applications in future products as their primary targets. Holding regular demos is also a really good way to get other application ideas identified. When a broader group gets to see what can be done with this technology, feedback in the form of “Could this be used for…?” is inevitable. This is a good catalyst for further innovation.

Is this Agile?

This method of making progress very visible and forcing easy-to-understand demos throughout a project helps in many ways. Does it make the project more agile? I believe it does, in many ways, but that may not be the primary reason for doing it. The more important reasons are to keep focus on the project’s actual results and to make the project more visible.

A very nice side benefit comes from the way motivation works. If you are doing something that matters to others (and hell yes, we want to see a demo, anytime!), you are immediately much more motivated. Motivation improves productivity dramatically.

All in all this demo-driven approach to technology projects achieves three good things:

  • Productivity is boosted through the motivation to do good demos
  • Project visibility is increased, which also helps spread the knowledge
  • Innovation is regularly inspired by the technology demos

Hopefully this served as a bit of inspiration on how to improve outcomes, productivity and motivation in these traditionally difficult technology projects.

Q: How do you make technology projects visible and productive in your organisation?

Rolf V. Ostergaard is an M.Sc.EE from Denmark who got his entrepreneurial inspiration working in Silicon Valley back in the dot-com days. He co-founded the consulting business Axcon in 2004 and grew it to 20+ people focused on improving development of embedded systems: hardware, software, FPGA, the guts of smart devices. Rolf is specialised in signal integrity and enjoys doing training and consulting to fix SI problems before they occur. Find him blogging at www.axcon.dk/blog and as @rolfostergaard on Twitter.

When Done Actually Means DONE

In presentations I’ve given on agile hardware development, there’s one slide that seems to get the point across better than any other as far as how agile development differs from the waterfall-like process a lot of hardware teams follow. I’ve used it a few times before and I find myself counting on it again for my talk at Agile2011 in Salt Lake City.

In prior talks, I’ve built up to this slide with a verbal contrast between waterfall and agile. I talk about waterfall development as a sequential process that culminates with one big-bang deliverable. Progress is tracked against a master schedule and based on what pre-defined tasks have been completed. The lessons learned at the end of a project are applied to the next project (i.e. you have a chance to improve once per project). I don’t claim that waterfall is exactly what hardware teams do, but it’s close-ish, and task-based planning certainly does widely apply.

The agile model, on the other hand, is an iterative process where each iteration culminates in production-ready code equivalent to some subset of the end product, meaning you can quit and release whenever you and your customer want. Progress is measured based on which features are complete at a point in time, and lessons from each iteration are applied to the next (i.e. you have a chance to improve several times per project).

These are obvious differences, but the underlying message is never quite apparent until we get to the visual. That’s where people can see that the critical difference is in the definition of DONE.

Let’s say a team following a waterfall model is half done building an XYZ, which to the team means their design documentation is done, the design has been coded and they’ve built some of the verification infrastructure. Lots of good stuff there, but because the team has based its progress on predefined tasks they’ve completed, as opposed to what they know actually works, half done can be pretty misleading. The documentation they have could be severely out-of-date, the design is probably somewhere between buggy and dead in the water and the verification environment might not even compile yet. Needless to say, the second half of the project is going to take much longer than the first half did!

Contrast that with the definition of done in the agile model. Here, progress is based on features. When a team says “we’re half done”, they mean it. With little warning, they could release a product with half of the intended functionality. They know it’s done because it’s been tested, it’s been through the back-end, the software has been written and it’s ready to go.

Two different ways to measure progress; two very different meanings for the word DONE. To me, it’s this visual contrast with the waterfall and a redefinition of what it means to be DONE that helps the value of agile development really stand out.

neil

Q. How do you measure ‘done’? Code that’s written or code that’s production ready?

Agile Requirements: Are We There Yet? Part 1 of [Not Sure Yet]

I’m at the start of a new project. We’re currently determining what features are needed in the end product. This has led me to think a lot lately about how to capture, prioritize and track the completion of these requirements as we progress through the project. Of course, I want to do this in an ‘agile’ way.

The typical agile method is to capture requirements in a set of high-level “user stories”. A user story provides a concise way of describing what the system will do from the user’s perspective. Several user story templates exist; Mike Cohn’s is the one I find clearest and most concise:

“As a <type of user>, I want <some goal> so that <some reason>.”

It captures the who, the what and the why of a feature, providing some essential context to assist in the implementation, validation and acceptance of a feature.

User stories are then typically split into progressively smaller use cases or tasks until you establish a firm understanding of what the feature is supposed to do and how your team is going to complete the functionality the user expects.

I’ve found Cohn’s template particularly effective for tools or scripts I needed to create because it is much easier to comprehend than the typical requirement statement of “The system shall…”.

But… if there’s one aspect of agile that I really think is difficult to translate into an ASIC/FPGA world, it’s the concept of the user story. Perhaps it’s because I simply don’t have the experience in writing user stories. But I think it has more to do with my trouble writing an ASIC story from a user’s perspective. In many cases, a feature doesn’t even provide any meaningful output that is visible to the user, e.g., some esoteric standards requirement about how an interface must be defined.

Then serendipity came through for me when my former colleague and AgileSoC collaborator Neil Johnson was in town on vacation and wanted to grab a beer. He knew James Grenning, who happened to be in Ottawa at the same time to deliver some training, and invited him along too. I’ll cut it short by saying I had a very interesting conversation with James and Neil. You see, James is an expert in test-driven development for embedded systems. In fact, he’s written a book on the topic: Test Driven Development for Embedded C. It was very interesting to discover the similarities that exist between embedded system design and SoC development, including the perceived and real barriers to adopting an agile flow in an embedded design.

To cut to the chase, we got to talking about user stories and how they don’t really fit into ASIC development, when James mentioned that he calls user stories “product stories” instead. While it’s a simple semantic change of “user” to “product”, for me it was a lightbulb moment. It helped me identify the issue I’ve been having with ‘user stories’: I’m not implementing something for an end user; I’m developing a product that will fit into a larger system that will ultimately be seen by a user.

Like user stories, product stories are intended to be very high-level descriptions of the features that you want to develop along with the acceptance testing that proves the feature is what you expect. The key is that these are short descriptions from the system’s perspective. Instead of the who, what and why of a user story, the product story for ASIC development should include the where, what and why. Specifically, a good requirement contains the following elements:

Behaviour (the what): a short descriptive story explaining the behaviour at a high level.
Justification (the why): some idea of why this is important to the system (unless it’s absolutely clear, e.g., some standard protocol).
Context (the where): where in the system this feature is needed.
Acceptance: a simple set of acceptance tests to prove the behaviour is correct.

I’ve been developing the following template, similar to the user story template above, that I’m considering using for my product stories:

“The <context> needs to <do some behaviour> so that <some reason>.”

A product story provides some clear guidelines on the functionality expected, as well as a clear indication of what coverage points must be captured before you declare the product story complete. In fact, I think it might be a good idea to state these acceptance criteria as functional coverage points.
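
To make that concrete, here’s a hypothetical product story with its acceptance criteria sketched as functional coverage points. The story, names and bins are all invented; the idea is that the story isn’t DONE until every bin has been hit by a passing test:

    // Hypothetical product story:
    //   "The DMA engine needs to abort an in-flight transfer cleanly
    //    so that drivers can cancel requests without a reset."
    // Acceptance criteria expressed as functional coverage points:
    package dma_abort_cov_pkg;
      covergroup dma_abort_cg with function sample(bit [1:0] phase, bit clean_resume);
        // an abort must be observed in every transfer phase
        phase_cp : coverpoint phase {
          bins idle     = {2'd0};
          bins transfer = {2'd1};
          bins drain    = {2'd2};
        }
        // the engine must come back healthy after every abort
        resume_cp : coverpoint clean_resume { bins recovered = {1'b1}; }
        phase_x_resume : cross phase_cp, resume_cp;
      endgroup
    endpackage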

As I work through these concepts in my head, I’ll attempt to describe them in my next few blog posts. I’ll talk about how I see these product stories being created, developed and tracked in an AgileSoC world.

I’d welcome any comments on this post, especially on how you’re doing your requirements management when requirements are very fluid (which, I’m guessing, is probably most of the time for most of us :-)).


The New ‘A’ in EDA

I’m sure the suspense is killing you so I’ll just come out with it. EDA now stands for ‘Electronic Design Agility’. ‘Electronic Design Automation’ is gone forever.

That’s official. There’s no going back. Here’s why…

The huge number of variables, unknowns and risks that have become part of modern SoC development create multi-(multi-)dimensional problems so complex that forming a viable solution through planning and analysis alone is simply not possible. The right solution can only come out of an agile, iterative design process that relies on feedback cycles connecting developers to each other, the development team to the fab, the hardware experts to the software experts and most importantly, the development team to its customer.

It was heartening to read support for the idea that agile is part of the future of EDA in a post by Richard Goering from #48DAC in San Diego. In a post covering a keynote talk by Gadi Singer [Note: I originally had ‘interview with Gadi Singer’ here but Richard pointed out that it was in fact coverage of his DAC keynote. Richard: Thanks for setting me straight!], vice president and general manager of Intel’s SoC Enabling Group, Singer mentions agile methods as being part of an ‘Imminent EDA Transformation’:

“This is about having lots of feedback loops starting very early,” Singer said. “For those of you familiar with software methodology, it is about agile practices, not waterfall practices. With agile, you assume you have to make iterations, and every time you learn more you make changes. You have learning and validation cycles all through the design.”

Obviously I wouldn’t be here if I didn’t agree with that opinion, but just because one person says it’s so doesn’t make it so. To get to where I am now, I had to challenge myself with a few questions to convince myself of the possibilities…

  • When was the last time we finished a project on time and on budget when all the planning, estimating and scheduling were done up front?

OK, for a lot of people I could just stop there. I have no stats to back it up, but I’ve posed that question a few times asking for a quick show of hands. I never see that many go up so I’m convinced it’s rare. That’s the red flag that tells me something is just plain wrong with how we go about getting things done in hardware development.

In case you need a few more red flags…

  • When design documentation so rarely represents a completed design, what sense does it make to finish it before I start writing code?
  • If I’ve never accurately predicted the 3-day task I’ll be doing 4 months from Thursday, why would I continue to build detailed schedules that go out 6 months or longer?
  • How can I say code is done before I’ve seen it work properly?
  • Why is feature creep soooo bad? Do we really think that the features we start with will be what the market needs 18 or 24 months down the road?
  • If design specifications are so long and difficult to comprehend (assuming people actually read them in the first place), why do we keep writing them the way we do now?
  • If the day actually does come when I am hit by a bus, is my test plan really enough to tell someone exactly what I’m doing? Or would it be better to just show people what I’m doing as we go along?
  • Why wouldn’t we test, integrate and release code in small batches instead of doing everything at once?
  • What good comes from sheltering the development team from its customer? Wouldn’t it be better to hear about their problems first hand?

A lot of what we do from a process point-of-view just doesn’t make sense anymore, so if you find yourself nodding your head as you read any of these, you’re ready to become part of the imminent transformation Gadi Singer predicts.

For a long time, electronic design automation has been used to describe an entire world of tools that we use daily in semiconductor development. But the paradigm that revolves around automation as the solution to modern design problems is severely outdated. Being agile and having the ability to react to situations we can’t possibly predict has, without a doubt, become more important than automating processes we can predict. It’s time to update the paradigm accordingly.

So there it is… ‘Electronic Design Agility’ and the re-invention of EDA.

Like I said, that’s official and there’s no going back. It’s time to be agile!

neil

Q. What outdated practices have we been clinging to in hardware development even after they’ve repeatedly failed us?

UVM Still Isn’t A Methodology

A few months ago, I posted an article on AgileSoC.com titled UVM Is Not A Methodology. The point of that article was to encourage people to break away from the idea that verification frameworks like UVM truly deserve the label ‘methodology’.

In the article, I argue that being called a methodology requires a number of other considerations that go beyond the standardized framework and take into account the people using it:

  • A Sustained Training Strategy
  • Mentoring Of New Teammates
  • Regular Review Cycles
  • Early Design Integration
  • Early Model Integration
  • Incremental Development, Testing and Coverage Collection
  • Organization Specific Refinement

That article generated a lot of interest and a few comments. I got some compliments from some people and some minor disagreement from a few others. I’ve summarized a few below.

Neil, you’ve outdone yourself. Excellent article IMO.

We’ll start with a good one :). I got that comment from a friend, a real expert in functional verification and a person I learned a lot from when I decided to specialize in functional verification. I really appreciated that.

Neil, UVM _is_ a methodology, but not in the same sense as you have described in your blog post. It is a methodology that provides a framework for designing and building testbenches… You are correct that it is not a complete verification methodology covering all the things you list. All of those things certainly have to be addressed by each verification team, but can they be standardized? Training, mentoring, and reviews are essential to a good verification methodology, but it would be difficult to provide meaningful standards beyond what is already in the software engineering literature. (read the unedited comment here)

That isn’t total agreement, but I took the comment as an acknowledgement nonetheless that in order to be complete, a methodology has to go beyond the framework to include some of the things I identified. The fact that they can’t be standardized is a good reminder that teams using frameworks like UVM need to worry about the extra stuff themselves. No one can do that for them.

The article makes some valid points but falls into the same trap that I think a lot of people do; i.e. getting confused between the UVM code library and UVM the methodology … My personal view … is that the customer teams who insist on self-learning the eRM/URM/OVM/UVM library or the (e, SC, or SV) language and won’t ask for training or guidance are the ones who end up getting the least from the methodology…These same users achieve far less reuse and randomisation than they should be getting, and the effectiveness of their testing is much lower than it should be. Where customers have invited me in to help architect their flow … we’ve created some really nice modular environments that are reusable, maintainable and adaptable. These customer teams have become more productive and have then propagated that methodology experience to other teams in their company, much as the article describes. This is the real UVM, where code meets experience … I’m saying this not to boast about…my own abilities, but to stress the point that getting all fired up about a code library isn’t going to make you more productive or effective, there’s a whole lot of hard work and care goes into really building a good UVM testbench. (read the unedited comment here).

Again, that comment isn’t total agreement but it does highlight the fact that UVM is larger than a code library and that mentoring plays a big part in how effective teams can be when using it.

The last paragraph of that article states it properly. It is a framework you can build a methodology around, but it is not a methodology in and of itself. The fact that it took the author 1600 words to state that simple fact indicates the article is mainly marketing fluff from a consulting company trying to scare managers. Much like calling UVM a methodology is EDA marketing fluff to convince managers it is more than just a framework. (read the unedited comment here)

That one was my favorite! Complete agreement followed up with the accusation of fear mongering!

Always nice to see people taking the time to comment 🙂

neil

Q. What do you call UVM? Methodology? Framework? Something else?