Time to Blow Up UVM

Just as it seems the verification community has converged on a methodology – a universal methodology as it were – that finally meets the needs of EDA vendors, IP providers and users, along comes somebody to suggest we should blow things up entirely and start again.

‘Somebody’ is me.

Here goes.

This post was set in motion by DVCon coverage via Twitter last week and a snapshot posted from a UVM talk. I’m pointing out this snapshot because I think it provides an opportunity for a very constructive conversation around the future of UVM; a conversation that I think needs to happen. (To be clear, I have none of the context around the photo. I know nothing of the talk it came from – topic, presenter or otherwise – and I’m definitely not pointing it out to slight the presenter. For the record, I think it’s an effective slide for the point I’m assuming he/she is trying to make.)

[snapshot of the slide from the UVM talk]

What does this slide say to you? To me, it represents the peer pressure verification teams are currently facing to conform to an over-engineered UVM standard. Over the last few years, UVM has been positioned as the one natural, heavily promoted, near-unanimously supported option for industry standard testbench development. Discussion within functional verification circles revolves around UVM for the most part. If a team hasn’t yet adopted UVM – or done the hard work as JFK might suggest – they’re planning on it. And if they aren’t planning on it… well… maybe they’re just not that interested in reusable, interoperable, industry standard testbench IP!

That’s what that slide means to me and my initial reaction was a quick Twitter rant. With the rant out of my system (almost), it’s time to get constructive here. I want to propose a new direction – a radical new direction, as a friend pointed out – to encourage people to start talking about alternatives to the UVM monolith.

[screenshots of the Twitter rant]

I call UVM a monolith because that’s exactly what it is. It’s one big package that includes everything. I believe it’ll continue to include everything so as to eventually – if it hasn’t already – transform itself into a giant frankenmess of interwoven compromises and good intentions. Case in point: the UVM register package. Great idea and a perfect opportunity for modularity. Alas, it was dumped into the same uvm_pkg to become part of the monolith.

Now, you might be saying “just because it’s part of the uvm_pkg doesn’t mean the register package isn’t modular”. I don’t completely agree but that’s correct’ish. It does stand well on its own and from what I can tell the dependencies on the register package are limited to macros. If you don’t want to use it, the only impact is extra code through the compiler. That also goes for the TLM1 and TLM2, most of the monitor and scoreboards, most of the miscellaneous odds and ends (side note… who’s using the uvm_heartbeat? anyone??) and the deprecated features that have yet to be dropped altogether. The same is not true, however, for the sequencer or the config_db or the reporting or the transaction recorder or the factory. Those are there whether you’re using them or not – like it or not. If you think the sequencer is hard to understand, the config_db dangerously opaque, the reporting over-engineered and the recording less than useful, too bad: they’re an integrated part of UVM and you can’t really get rid of them.
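
To make the config_db complaint concrete, here’s a minimal sketch using the real uvm_config_db API; the path, key and field names are invented for illustration:

    // In the test's build_phase(): publish a value against a string
    // path and a string key.
    uvm_config_db#(int)::set(this, "env.agent", "num_items", 42);

    // In the agent's build_phase(), where num_items is an int field of
    // the agent: a typo in either string, or a mismatch in the #(int)
    // type parameter, is not a compile error; get() just returns 0 and
    // the agent silently runs with its default.
    if (!uvm_config_db#(int)::get(this, "", "num_items", num_items))
      `uvm_warning("CFG", "num_items not set, using default")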

Which brings me to an alternative. I’d like to propose the Open Verification Platform (OVP).

The OVP is specifically not universal nor is it fully featured. It is the minimalist’s starting point for modular testbench development. The value of the OVP is that it’s a platform – there is nothing methodological about it (see: UVM Still Isn’t A Methodology) – on top of which users integrate or build the components to meet their development objectives. I think of UVM as a solution that people apply to their problems. The OVP is a platform on top of which people create their solutions.

To give you an idea of what the OVP could look like relative to UVM, one option would be to start with the entire UVM library, then remove everything except the uvm_component and uvm_transaction. That leaves you with a base component, a base transaction and a defined run-flow; the three critical pieces of a testbench. No more. There’s your platform.
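
To make that concrete, here’s a hypothetical sketch of what the entire platform could be. Every name below is invented – there is no ovp_pkg today – and this is one possible shape, not a spec:

    package ovp_pkg;

      // Base transaction: users extend it with their own payload fields.
      class ovp_transaction;
      endclass

      // Base component with a fixed run-flow: build -> connect -> run.
      virtual class ovp_component;
        protected string name;
        protected ovp_component children[$];

        function new(string name, ovp_component parent = null);
          this.name = name;
          if (parent != null) parent.children.push_back(this);
        endfunction

        // Users override these three; the platform defines nothing else.
        virtual function void build();   endfunction
        virtual function void connect(); endfunction
        virtual task run(); endtask

        function void build_all();
          build();
          foreach (children[i]) children[i].build_all();
        endfunction

        function void connect_all();
          connect();
          foreach (children[i]) children[i].connect_all();
        endfunction

        task run_all();
          foreach (children[i]) begin
            automatic ovp_component c = children[i];
            fork c.run_all(); join_none
          end
          run();
          wait fork;
        endtask

        // The defined run-flow, applied once from the top of the tree.
        task elaborate_and_run();
          build_all();
          connect_all();
          run_all();
        endtask
      endclass

    endpackage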

Is this a joke? What do we do with everything else? Surely you’re not suggesting we throw years of effort away?

Of course not. What’s left would be divided into a list of plugins that are compatible with the platform. You’d have the sequencer plugin, the config_db plugin, the register plugin, TLM1/TLM2 plugins, scoreboard plugin(s), etc.
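
Hypothetically again (none of these package or class names exist anywhere), a plugin could be nothing more than a package that depends only on the platform:

    package ovp_sequencer_pkg;
      import ovp_pkg::*;

      // A sequencer reduced to a typed mailbox between stimulus and driver.
      class ovp_sequencer #(type T = ovp_transaction) extends ovp_component;
        local mailbox #(T) items = new();

        function new(string name, ovp_component parent = null);
          super.new(name, parent);
        endfunction

        task put_item(T t);
          items.put(t);
        endtask

        // Drivers block here for their next item.
        task get_next_item(output T t);
          items.get(t);
        endtask
      endclass
    endpackage

    // A testbench opts in explicitly:
    //   import ovp_pkg::*;
    //   import ovp_sequencer_pkg::*;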

As for the open part of the OVP, open means it’s user driven so anyone can publish plugins no matter how complicated, simple, excellent or completely lame they might be. Point is, the OVP is where users drive the experience with their ideas. I would expect creativity and competition in plugin development. While standardization is quite obviously not the objective, eventually some level of conformance could arise as great ideas emerge and plugins find a user base.

Finally, believers in the UVM can take comfort in the fact that the UVM could be entirely rebuilt on top of the OVP. Just take all the OVP plugins and… uh… plug them back together. So if you love UVM, carry on. If you don’t, you get the chance to take a step back and decide for yourself how to go forward and what, exactly, you’re taking with you.

If you’ve made it this far, congratulations. I know I’ve been a bit rant’y in the past when it comes to UVM so my intention in this post was to at least temper the rant with something constructive. Also, as I’ve said in the past, I’ll repeat that it’s not my intention to berate the effort that went into creating UVM. I do, though, think that if UVM is to be a long term solution, it’s going to need to take a different form as we refocus on and re-emphasize a user-centric solution to testbench development.

To sum things up, I’d like people to view the UVM and the OVP as opposites; one is a kitchen sink methodology that stresses standardization and interoperability, the other is a radical, opt-in style free-for-all where users decide.

With that, I’ll leave it to you to tell me what you think. Am I on the mark with an open verification platform? Is it the right way forward or is UVM fine as it is? Most importantly, is there anyone out there with an equally radical idea that needs to be heard? I’m sure I already look like some crazy idiot verification heretic so if you’re shy about sharing ideas, now might be a good time to speak up :).

-neil

47 thoughts on “Time to Blow Up UVM”

  1. Well said. I wholeheartedly agree!

    Actually, if you look at the software world there is a similar monolithic framework vs. library of plugins debate that is continually going on. One example is with web frameworks (e.g., Ruby on Rails and Django). There are big monolithic web frameworks that include everything: HTML templates, database abstractions, URL routing, etc. etc., and then there are little libraries for each of those individual things that the monoliths provide. Judging from that, I think a modular library of verification stuff like OVP will probably need to exist side-by-side with monolithic UVM. That’s fine, as long as there is enough interest and contributions to both to keep them moving forward.

    I’m really interested in the part where you say that OVP will be user driven. How do we get corporations, consultants, and individual verification engineers to contribute open source code?

    At DVCon, Stan Krolikoski from Accellera made an impassioned plea for more people to join Accellera and contribute. Someone asked what it takes to join and the Accellera people that replied couldn’t really give a straight answer. I looked it up online and it seems only expensive corporate memberships are available, along with talk of committees and voting and other bureaucratic stuff. Too burdensome. Free resources for open source code are plentiful: Github, Bitbucket, Google Code, Sourceforge, you name it. No dues or committees necessary!

  2. Hey Neil…

    I’m actually conflicted on this one. While I agree with the overall thesis of the rant, I’m not sure that there isn’t room for both solutions — similar to some of Bryan Murdock’s comments above.

    Here’s what I’m thinking: Way back when I was but a software lad using C/C++, there came along the STL (Standard Template Library) for C++. Admittedly, this was pretty modular so it’s not the best example, but it grew year over year into a substantial library of modules, most of which you’d never need. But when you wanted a particular functionality it was likely there. The C++ Boost library fulfills a similar role today (and I imagine other libraries I’ve never heard of).

    My point is that there is value in having a ‘go-to’ standard of handy modules that perhaps contain more than you need – simply because you don’t have to re-invent the wheel, and you’re OK with any inherent restrictions and ‘bloat’ because they get you going fast.

    The other Bryan’s web frameworks comparison is a good example here: I’m OK with using Django, even though it’s quite a bit more complicated to learn, and does put a ‘box’ around what I’m allowed to do… but I gain by having a fully functional web site up in hours and not days.

    Similarly, there are definitely opportunities for minimalist libraries that allow you to build what you need from a minimal framework. Using a KISS principle is always a good thing.

    So I think something like your OVP would appeal to many teams where most of the hard-work that needs to be done is *way* above the UVM layer, and they don’t need all the UVM-baggage. At the same time, I also think there’s room and need (and demand) for a monolithic UVM library. Different teams with different needs.

    Now for my rant-on-a-rant: my chief whinge about UVM is that it assumes you’re developing in SystemVerilog (yes, there’s UVM-e for Specman, but they are different in some respects).

    OVP does not need to be SystemVerilog specific. Ideally, in my opinion, it should be language agnostic. The OVP should specify the required functionality and a high-level API (with some basic assumptions about the properties of the implementation language, e.g., being object-oriented or functional programming).

    The value of this is:
    * creating a true lingua franca (common language) for developing verification platforms – many of the concepts from UVM could be extracted and generalized.
    * ability to work in whatever programming language is the best choice for the job at hand, e.g., we could use a Python OVP to capitalize on some funky Python library that will save us a bunch of time.

    Thanks for a thought-provoking discussion.

    1. Just to refine this point:

      UVM […] assumes you’re developing in SystemVerilog

      Not only that; it assumes that you’re developing in SystemVerilog 2009.

      While we’re all tremendously proud that our production language has caught up – sort of – to Java circa 1995, it puts us in the curious position that our production library/framework/kitchen-sink is written in a subset that dictated certain idioms that avoid new language features. It would be written differently, and support a different API, if it were implemented today.

      I have a sneaking suspicion that this will not be the last three-letter methodology.

  3. I personally suspect that however “it” is built – UVM or OVP – people will wind up using the same complete set of features, because people will buy IP, and each different IP vendor will manufacture IP differently – some will use uvm_config, some won’t, etc.
    I’m thinking of how the Python install winds up at larger companies – all kinds of packages get installed because somebody needs to use some software in them (or that uses them), and nobody ever cleans it up because, frankly, it doesn’t hurt having all those packages there, and there are more important problems to solve.

    Configuration-management is always a big problem when lots of independently-written packages are supposed to all play well together – you have packages A and B, each used by different IP vendors, and they each require incompatible versions of package C. UVM avoids that by doing monolithic releases.

    I would like to see the process improved, before lots of effort is put into creating another methodology with equivalent functionality. For example, while it does cost money to join Accellera, one can always create a separate project (sourceforge?) where parts of UVM are replaced/prototyped, and then Accellera can pick it up and plug it in. For example, replace uvm_config_db with something better, and people will start to use it, whether Accellera approves or not. And eventually Accellera will get pushed into adopting something that’s faster, because the members will be tired of overloading their standard UVM installs.
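
    As one hedged sketch of a “something better” (every my_* name here is invented): hand each component a plain typed configuration object, so typos and type mismatches become compile-time errors instead of silent runtime defaults:

        class my_agent_cfg;
          int unsigned num_items = 10;
          bit          is_active = 1;
        endclass

        class my_agent extends uvm_agent;
          `uvm_component_utils(my_agent)

          my_agent_cfg cfg;

          function new(string name, uvm_component parent);
            super.new(name, parent);
          endfunction

          // The env hands over a typed handle; no strings involved.
          function void set_cfg(my_agent_cfg cfg);
            this.cfg = cfg;
          endfunction
        endclass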

    I would also think that education will help push Accellera along: if there were a set of UVM models, testbenches, etc. for some of the OpenCores blocks, plus how to use them, then people would go download them, see how “things are done” and copy those best-practices. About 80% of people learn from examples, not manuals.

    In summary – I’m all for whatever works. I’ve been through C to C++ to C#, and it takes time for people to learn what works and what doesn’t – usually the people defining the standards are at a much higher level of abstraction/reuse, and they often neglect practical issues like runtimes, how well their approach works in really big designs, or how hard it will be to simply write compilers that generate good code.

    I would be interested – I wonder how much faster UVM might run if a large chunk of it were ported to C, simply because C compilers are a lot more optimized than SystemVerilog compilers. Has anybody looked at that?

    Keep up the good work! You’ve given me lots to think about!

  4. Good comments guys! This is exactly the kind of open discussion I was hoping for. Interested to hear what others think about different forms for the UVM, different usage models and opinions on tight integration and standardization v. a modular opt-in arrangement.

    -neil

    1. Neil,
      To build anything large, complex (and stable) in a reasonable period of time takes a team of dedicated, talented resources. The only organizations that have them are large corporations, and they’re only going to invest into things like OVM, UVM, etc. if they see ROI. And they won’t use something that’s unstable, or seen as having unpredictable levels of support, unless it’s a throwaway free tool that works “out of the box”, with low barrier-to-entry like Perl or Python or whatever.

      So – whatever you come up with has to meet one of two criteria:
      a) if it’s large and complex and has to be stable, you have to have a good business-case for the big companies to invest into it. And they’ll take control of it and steer it where it makes most sense for them (not for you).
      b) if it’s not large, complex and is not mission-critical, then you can get a group of volunteers to work on it. But then it has to be interesting enough for those volunteers to want to work on it for a longer period of time. And of course those volunteers will only take it in the direction that interests them.

      Or – as the saying goes, “If you want it done right (or at least, the way *you* define ‘right’) you have to do it yourself” :)

      Erik

      1. Option (b) is definitely the only direction I’m interested in traveling. I’m assuming there are people out there that are ready for a deviation from the string of verification methodologies of the past 13-ish years. Each (in my opinion) has been a refinement/optimization of the last and each (also in my opinion) has fallen short of delivering the assumed benefits. It’s good stuff, just not as good as everyone has hoped for.

        Now if people are ready for an alternative and they’re ready to pitch in, OVP potentially becomes a great idea that benefits a lot of people. If people are ready but *don’t* have the time to pitch in, they’re going to be left waiting a while! Of course, if I’m completely wrong and it’s only me, I still know at least one person will benefit from the effort I put into it so I see it as a low risk path to progress :).

        -neil

        1. Neil,
          OK – so how about creating something at sourceforge for OVP? You’ll need to then define some bite-sized, well-defined tasks so anybody who’s going to help you has an idea of what you’re looking for, and also so you can maybe knock out 1 or 2, to give people a concrete example.

          Erik

          1. Agreed! Code talks more than blog entries and comments.

            I would suggest something a little more modern than sourceforge (though I hear they have been making some changes). Github is all the rage these days, but Bitbucket and Google Code are nice alternatives too.

  5. You totally missed the point. UVM is just the symptom of a much bigger problem: SystemVerilog is a screwed up verification language. We should blow up SV and start with something better.

    1. What do you propose?
      It will need to be a language that is already well-defined, with lots of years and users that have proven it out, plus lots of libraries. People are not going to go to the work of defining a new language unless there is a significant benefit, and given how the standards process works, you might get out something just as good (or bad, depending on how you look at it).

      Then there are performance issues, portability across OSes, and availability of development chains for vendors and customers.

      My personal suggestion (just to put something out there): C# – it’s available in open-source (libraries and compilers and IDEs), .NET is huge and tested, there are millions of users who either know it or can learn it (from C++), and it’s fast enough and reasonably well structured for building really huge projects (like chips).

      Of course, I’m sure there are other languages out there, including SystemC. My mental model is: use Verilog (only) for RTL simulation, and C# (with a UVM-type class library) doing all the class-based stuff.

      This means a lot of C++/C# developers (and existing libraries) can be used for verification. There will of course be a performance hit due to the SW interface, but buying a LOT of Verilog licenses and running in parallel will solve the performance issue for just about everyone.

      Just my 2-cents, and I’m sure there’s lots of better alternatives out there. I’m just looking for something that’s the best I could hope for, given reality of politics and economics.

      1. hevangel, well said!

        Erik, very good job pointing out some important things that should be considered when choosing a better verification language. C# on Windows with Microsoft tools is great, not so much on any other platform with other implementations. Since most ASIC shops develop on Linux boxes, and in general have (well founded) distrust of single vendor solutions, I don’t think C# is the best choice.

        SystemC is not a language, it’s a library for C++. I have very limited exposure to it, but it seemed at least as awful as SystemVerilog when I looked into it :) You would, however, need some sort of verification library to go with whatever language is chosen and maybe we could learn some things (not to do?) from the C++/SystemC model.

        My instinct is for a very high-level, programmer friendly language like Python. It has been around a long time, it has lots of libraries (even constraint solvers), and there are a lot of users and it is well proven.

        Speed would probably be an issue as you say, but I think you are on to something. If the synthesizable RTL stays Verilog and the testbench is written in something like Python, then you could probably even get by with one of the open-source Verilog simulators and not have any license cost at all. The price of a single full-blown SystemVerilog simulator license can buy two or three really nice multi-core linux boxes. :)

        1. Bryan,
          Hi – you can do C# development under Linux using open-source with an Eclipse IDE. Take a look at Mono: http://www.mono-project.com/Main_Page

          As far as SystemC – yes, it’s a library on top of C++. Where I was going was that UVM is a library on top of SystemVerilog, SystemC is a library on top of C++. But in the end, both are just (to the user) an API that’s used to build chips (or verify them). The user doesn’t really know/care what’s just under the hood. But, there are some huge differences in what’s possible between SystemC and UVM, because of their starting frameworks.

          I think we’re working along the same lines: SW is expensive, HW is cheap, so if you can find open-source SW, you can run lots of slow jobs in parallel and get the same result as a few fast jobs. Certainly if people can develop an open-source C# system (like Mono) then one could take Icarus Verilog (or equivalent) and build a Verilog-only simulator (or just pick a subset or strict implementation, to make it easier – if you want free, you have to clean up your code!). If done right, then it could even be developed in C#, and at that point, it’s just another C# library that gets compiled in. And the synthesis-compliant subset of Verilog is pretty small.

          Writing a C compiler is about a 2-quarter class for a single student in college – I did one. Also – one could even take the SystemVerilogEditor (SVEditor) plugin for Eclipse and use it as the parser/symbol-table/error-detector, and generate C# from whatever it says is legal code. Now you’ve saved all that input verification work, and gotten a free IDE thrown in! :)

          Given that IP-XACT is the coming way of storing IP, and IP reuse is the name of the game, I would define my verification library in terms of IP-XACT – UVM and IP-XACT differ in subtle (but painful) ways. I’ve been working on an IP-XACT -> UVM generator, and there are translators between IP-XACT and Verilog/VHDL. So a lot of the infrastructure is already there: it mainly needs to be stitched together.

          Erik

        2. Speed would probably be an issue as you say, but I think you are on to something.

          “Speed” has often been raised as an issue when discussing Python as a verification language. I think it’s worth delving into further.

          The objection that execution speed would be too slow is generally based on assumptions about an interpreted language rather than experience. I need to do some benchmarks but having used Python in place of UVM for a new development, the execution speed is not noticeably slower and in some cases considerably faster. I put this down to potentially inefficient UVM coding but also the cumulative number of hours that have been spent optimising Python vs the (relative) immaturity of the various SystemVerilog implementations.

          Secondly, as you point out, slow execution speed can be remedied with faster hardware (or investing effort optimising code or writing C extension libraries).

          All of this has to be considered against the improved productivity benefits though. I’ve seen UVM packet generation libraries that grew into multi-thousand line monsters and took months to write replaced by the Python scapy library and a few tens of lines of code. This doesn’t just save time but massively simplifies the environment, making it easier to debug, maintain, add engineers to, etc.

          1. Chiggs,
            Hi – yes, there’s a HUGE value in having something working now, using code that’s been tested and is known-good.

            As far as C# goes, I would use Mono, since that’s open-source and runs on all platforms. Which is why I like it (and Python – they’re OS-agnostic and free). My reason for C# is that there’s a very large library of software out there as well, and one can always invoke C# from Python (and vice-versa). For really big software projects where quality is really required (such as ASIC verification), I prefer strongly-typed languages over interpreters – you want to catch all problems at compile-time, not when the customer on another continent has provided some data to your interpreter that uncovers an extremely subtle bug. It’s more work up-front for the code to compile clean, but it’s got a huge savings in the long term.

      2. May I suggest Specman e?

        If you take a look at UVM-e and compare it to UVM-SV, you will see the beauty of the e language as a verification language.

        The only problem is we have to get Synopsys and Mentor to get on board and support a Cadence language. Not to mention the revenue, market share, politics in the standards body, etc.

        1. Or just do an end-run around the vendors and standards bodies with an open source implementation of e. Vendors could still sell support, similar to what Red Hat does with open source Linux.

          1. Bryan – what about copyright issues? I’ve also heard a lot of people didn’t like ‘e’ – I’ve never used it, so I can’t comment. If you want a lot of people (heck, almost everybody) to adopt something, adoption issues have to be addressed.

          2. Erik,

            There’s no reply link under your latest comment, so I’ll just reply here. What copyright issues? If Cadence decided to open source their implementation of e, they own the copyrights (I assume) and they can do that if they want. If some other motivated individual or group wanted to create an open source implementation, e is an IEEE standard and they shouldn’t have any copyright issues to worry about either.

            But that’s neither here nor there if e is not the right language. I too have very little exposure to e, but people close to me that have used it have not liked it much. You could be right about adoption issues. I personally don’t believe C# would get much adoption among verification engineers either. EDA and verification operate in a Linux world and using a Microsoft controlled product there just doesn’t make any sense. About 4 years ago I spent 7 months writing software at a Microsoft shop with Visual Studio and .NET. I know Mono exists for C# on Linux. I’ve tried it. It was pretty sad compared to Microsoft’s .NET tools. Maybe it’s gotten better since then, but seriously, is anyone really using Mono?

            Python, on the other hand, is fully open source and runs beautifully on Windows or Linux. It has been around longer than C# and is more popular according to a few different measures:

            http://www.langpop.com/
            http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
            http://redmonk.com/sogrady/2013/02/28/language-rankings-1-13/

          3. Bryan,
            I don’t know why the ‘reply’ didn’t come through.
            Runtime, stability and speed-of-debug are significant issues in Verification, and Verification is 80%+ of the work (I’ve read one wants 3-5 verification engineers per designer).

            For large projects, this pushes you towards a strongly-typed language, because of the stability issues (and detecting issues at compile time, rather than runtime). And of course the language HAS to be compiled to meet runtime issues.

            Additionally, because the cost of debug grows 10X at each higher level of integration, one is pushed (shoved?) into unit-testing as the only way to meet schedule.

            This adds requirements for solid tools for code-coverage, functional coverage, etc.

            Python is an outstanding scripting language, and I have used it for many things, but it does not meet the strongly-typed and high-performance requirements of large-chip design.

            I agree with the adoption issues with C#, but I suspect designers who are frustrated with SV+UVM (and companies dealing with the frustration of IP-XACT and UVM having “impedance mismatches”) would be willing to redo some learning if their lives become significantly simpler, or their development costs/schedules drop.

          4. Careful there. Python is strongly typed, but it is also dynamically typed (not statically typed). You can’t do 1 + “2” in Python like you can in other scripting languages, you get a type error.

            SystemVerilog is a strongly typed (sort of) and statically typed language. A large portion of the cruft in the UVM is there solely to work around the static types of SystemVerilog (untyped, unchecked strings in the configuration database, for example) and give verification engineers the runtime configurability and flexibility that they really want. C# is also a strongly and statically typed language and will likely require similar design patterns and frameworks to provide runtime configurability. A dynamically typed language has the runtime flexibility that we all seem to be striving for built right into the language.

            As far as tools go, Python has debuggers, linters, code coverage tools, unit testing frameworks, refactoring tools, you name it. It does generally run slower than most statically typed languages, but as you said, debug is one of the biggest bottlenecks in verification, not simulation runtime. Higher-level, dynamic languages such as Python allow you to achieve the same functionality and runtime configurability with far fewer lines of code. That means fewer lines of code to debug. We also agree that without costly software licenses limiting us we can run a lot more simulations in parallel to make up for slower runtime speed.

            Of course, this is all very academic at this point. My prediction is that the first open-source-with-commercial-support-available (like Linux and the UVM) language/simulator will win, no matter what languages it supports.

            My comments re: development tools weren’t pointed at Python – they were a general observation (for whatever language is picked). I know people who tried to use Python for chip verification – it didn’t work, for reasons that I listed. And if it worked well, I think we would read about people using it. As I said, it’s a great language, but its intended purpose doesn’t align well with chip verification.

            The uvm_config_db has a long list of issues (including runtime), so I suspect most people are trying to move off of it whenever possible. Something equivalent will be done in any future language.

        2. hevangel,
          You brought a smile to my face – I can just imagine the looks on the faces of people in Synopsys and Mentor marketing…

          That’s why I was trying to use established languages as a base:
          For real hardware: Verilog (just what’s synthesizable, plus Interfaces for connecting to verification environment).
          For verification language: C#
          For verification library: compliant with IP-XACT (first), UVM (second). But I’ve tried to split things so there’s as little conflict as possible.

          And of course one gets all of the above using open-source Linux tools for students or companies without money, which will put price-pressure on the big EDA vendors.

          Erik

            1. Actually, SV is not beyond hope with some major overhaul. For a start, here are some ideas:

            1. fix inconsistencies in the syntax, e.g. curly brackets vs. begin/end keywords, missing semicolons, etc.;
            2. add AOP (aspect oriented programming) support;
            3. encapsulate those $ functions in a C++-like library fashion.

        3. I have used both SystemVerilog (actually I’m using it in my current TB) and Specman e. My personal opinion is that e is a more natural language. Writing code in e is fluent. The language does not stand in your way when you construct things. You get to focus more on what you want to do and less on how you do it.
          SystemVerilog has a steeper learning curve, but it can also get the job done.
          About UVM, I think you also have to take into account the quality of the code. If everyone uses their own methodology it is harder to be sure that the verification engineer has actually verified the functionality.

          1. Unless you’re trying to manipulate multi-dimensional arrays. That’s when e gets really annoying, but other than that you can get more stuff done with less code (I too have been bitten by the Specmaniac bug).

  6. Interesting discussion.

    I think in the thread above, there’s a lack of differentiation between good technical solutions, and good practical solutions. The two are often at odds.

    On the one hand, the industry wants an ecosystem where verification engineers are commoditized – interchangeable cogs that can be replaced as needed. That market force pushes towards technologies that make run-of-the-mill engineers as productive as possible. As chips get bigger, so do the teams that build them, and staffing large teams means that you have to hire a lot more engineers in the middle 80%. So going forward, the pressure towards well-understood technologies will continue to increase (although other pressures may start to arise with increased complexity, too).

    In the software world, that language is called Java (which sometimes pretends to be something completely different, and goes under the pseudonym C#), and it has been a massive success, even if most high-end software engineers don’t think all that much of it. The market says it works.

    C# is clearly a better general-purpose language than SystemVerilog, but it is not nearly enough so to give it enough advantage to justify a switch – even seen through the lens of just a general-purpose language. Coupled with the specific verification-centric features of SV, as inflexible and poorly-spec’d as they are, there’s nothing to gain here. How are you going to deal with something as pedestrian as 4-state logic in C#? Hint: look at what SystemC does. You’re unlikely to do much better.

    I suspect that Specman is destined to remain a niche language for the market reasons I mentioned. I agree with hevangel that it’s a better verification language than SystemVerilog, but I’ve never really felt that it was worth my while to become as proficient in it as I would like to be. Becoming a good Specman engineer, I suspect, is a skill of limited value outside its niche. I’ve joked to my colleagues that Specman is the COBOL of the verification world – there will always be demand for old hands to come out of the wood-work and save the day at a good daily rate.

    The other side of the dichotomy is much more interesting, and there’s much more scope for wilder solutions here. I think there are much better ways of doing things – radically different ideas that challenge our ideas of what the entire design flow looks like – and I think they could present an interesting opportunity for a small dedicated team to capitalize on. But I think it’s unlikely that they’ll have much of an influence on the industry at large.

    Sad truth is – I think SystemVerilog and UVM are here to stay for a long, long time.

    1. At least for me, what I see is a separation occurring between RTL design (thinking in terms of clocks, power, area) and event-driven verification (thinking in terms of queues, semaphores, concurrency).

      In the event-driven world, there is no such thing as four-state logic – there’s data, and anything other than 0/1 is detected at the Interface and converted (or an error is generated). There are a lot of people in the world trained in this sort of design and verification.

      There are few people in the world trained to design in terms of clocks, power and area. There are even fewer who can do that PLUS do the event-driven world. But there are LOTS of people trained in just the event-driven world.

      My proposal is to leverage that – don’t try to require the 4 out of 5 ASIC people (who are doing verification) to be conversant in both. The company that is able to access the much larger pool of people trained in SW verification will have a great leg up on companies requiring their verification folk to speak both domains.

      But that company will need a methodology, an industry-standard language, and a set of tools to enable that leverage.

      1. You seem firmly entrenched in the “how do we make the body of interchangeable resources as large and as productive as possible?” camp. Nothing wrong with that; it’s what most of the market wants.

        I just don’t find it very appealing to be amongst the 80%, but with most of the ideas discussed above, having above-average skills (in my humble opinion, of course :)) doesn’t always show in my overall productivity, and I believe that’s because of the tools and flow we use. SystemVerilog isn’t the only problem – I’d kick Verilog to the curb, too – at least in its present form. That opens up some interesting possibilities.

        My answer to “how do we maximize the productivity of the 80%?” is “encourage them to go and work for your competitors.”

        Instead, focus on the 20%. Reduce your team down to a small, highly-competent core, and use a host language and ecosystem that affords them a decent baseline of tools, and the ability to develop their own that integrate seamlessly into their workflow. Python may be one, but there are others, and some may be better.

        (BTW, I have never heard anyone advocate 5 verification engineers per designer. I think I’ve heard as many as 3:1, but in my experience, few shops manage even 2:1)

        1. There’s a very good matrix for hiring, with two axes:

                        ineffective          effective
           good         good/ineffective     good/effective
           bad          bad/ineffective      bad/effective

          Small companies can only hire the good/effective (or they go out of business).
          Big companies can’t get lots of the good/effectives, first because there aren’t enough of them, and second because they can’t provide the compensation and creative work environment that good/effectives want. Big companies then are stuck with avoiding hiring the bad/effectives, and thus hire the good/ineffectives and bad/ineffectives.

          So – if you want large companies to adopt something, it has to be usable by the ineffectives. In other words, they need to have small groups of good/effectives build stuff that’s reusable by the huge armies of ineffectives. And that’s why IP-XACT exists. Because it’s needed. Ditto for OOP.

          Companies that implement reuse (and I don’t mean a ‘bone yard’) will have huge profit advantages over those that don’t. And all the super-huge companies are well aware of it. So if you propose something, and you want the super-huge companies to put resources into making it happen, it has to be something that’ll help their profitability (through reuse) – either enable better reuse with same resources, or allow more resources to work on reuse. And if they see it *will* help, then they’ll put a LOT of money behind it.

          1. “So if you propose something, and you want the super-huge companies to put resources into making it happen…”

            That is, most assuredly, not my interest. :)

          2. Gareth,
            My observation is: a large percentage of chip designers work at large corporations.

            There would seem to be some cognitive dissonance in wanting to define a new industry standard, yet also designing it so most people in that industry will not want to use it.

          3. “There would seem to be some cognitive dissonance in wanting to define a new industry standard, yet also designing it so most people in that industry will not want to use it.”

            I suppose there would be if I wanted “to define a new industry standard”, but I don’t, and never said I did. That was the entire point of my original message on this branch – I was drawing attention to the fact that participants in this thread needed to differentiate between “improvements for mass adoption,” versus “ideas that could significantly improve the productivity of a small team, but would be problematic to mass adoption”.

            Perhaps I was not clear, despite alluding to it repeatedly.

          4. Like I said, “Blowing up UVM” to me sounded like developing something for wider use.
            But at least I’m clear on that now. So what do you have as a list of the issues, their priority, and what your thoughts are on better solutions?

          5. I like this comment. Too much thinking of standardization in hw dev, not enough options/creativity. Gareth’s comment pretty much sums up my interest in agile… giving people an alternative. It may work, it may not, and that’ll depend entirely on your team’s particular goals/team/history/successes/failures/etc. Helping all the people all the time isn’t practical so I’m settling for some of the people some of the time.

            -neil

          6. Neil,
            Innovating is OUTSTANDING. But the original topic is “blowing up UVM”. Since UVM is the de-facto standard, my assumption was that the goal really was to come up with something to replace it.

            That (to me) implied developing an infrastructure (language, methodology) that the entire industry (big companies, small companies, and individuals) could use.

            Is your actual goal more just to innovate and try out new things, with adoption by anyone else as a non-goal? I’m totally great with that – innovations from 20 years ago get incorporated into the standards of today, and I’m an “early adopter”/innovator. I’m just trying to understand what you really want to do.

          7. that’s right… back to the original argument. the goal was to refactor it. keep the functionality the same but the structure different (i.e. allow for the radical opt-in strategy while preserving the original intent). the people that like it can carry on with the refactored code base. the people that don’t can take what they like and leave the rest. encourage alternatives for people looking for an alternative.

            Re: innovation… you could probably look at my goal as being “undo-ing” innovation to allow others to innovate in new directions. standardization is definitely not my goal.

          8. clarification: industry-wide standardization is not my goal. we’ve got enough of that already.

  7. Resurrecting an old discussion…

    I was thinking that now that SV 2012 supports interface inheritance, it would be more realistic to implement your ideas of an opt-in strategy. Since you’re not fixed on just one base class, but get to implement as many interfaces as you want, it’s now possible to mix libraries however you want.

    This could mean no more Swiss army knife UVM library, but a UVM base library (components and stuff), a UVM config library, a UVM sequencing library, etc. (or OVP equivalents).
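
    As a rough sketch of that (interface classes are a real SV 2012 feature; the class names below are invented, and ovp_component is the hypothetical base from the post):

        interface class configurable;
          pure virtual function void apply_config(string key, string value);
        endclass

        interface class reportable;
          pure virtual function void report(string msg);
        endclass

        // No single monolithic base class required; mix in what you need.
        class my_driver extends ovp_component implements configurable, reportable;
          function new(string name, ovp_component parent = null);
            super.new(name, parent);
          endfunction

          virtual function void apply_config(string key, string value);
            // interpret key/value pairs for this driver
          endfunction

          virtual function void report(string msg);
            $display("[my_driver] %s", msg);
          endfunction
        endclass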

  8. Just went through the article…

    Couldn’t agree more. UVM in my opinion has become such a convoluted piece of junk framework that even the original architects probably don’t fully understand how UVM works. They have built layers upon layers of complexity that are unwarranted. In the process they have provided hooks to bypass some of the fundamental data abstractions of OOP, e.g. config_db, one of the worst possible mechanisms.

    Factory implementation should be very simple and a programmer can generally create his/her own factory patterns and other builder patterns. Why provide a push-button flow for it via macros? It makes debug extremely hard.
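
    To illustrate, here’s roughly how small a hand-rolled factory can be; this is just a sketch and every name in it is invented:

        typedef class base_item;

        // One wrapper class per registered type, instead of macro magic.
        virtual class obj_wrapper;
          pure virtual function base_item create_obj();
        endclass

        class base_item;
          static obj_wrapper registry[string];

          static function base_item create(string type_name);
            if (!registry.exists(type_name)) return null;
            return registry[type_name].create_obj();
          endfunction
        endclass

        class item_wrapper #(type T = base_item) extends obj_wrapper;
          virtual function base_item create_obj();
            T t = new();
            return t;
          endfunction
        endclass

        class my_packet extends base_item;
        endclass

        // Explicit, debuggable registration in place of `uvm_object_utils:
        function void register_my_types();
          item_wrapper #(my_packet) w = new();
          base_item::registry["my_packet"] = w;
        endfunction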

    UVM is nothing but monkey see, monkey do in our industry. The best frameworks that I liked were TestBuilder and AVM: simple and yet very effective. Provide enough for people to build on rather than spoon-feeding them to the point of exhaustion.

    I believe much of the garbage in UVM has come from Specman e. I don’t know ‘e’ myself, but my colleagues have mentioned that this is how e used to operate.

    On the next project I lead, I will throw all the garbage of macros, config_db, etc. out the door, simply rely on the implicit phase mechanism, rely on TLM interfaces for people to stitch their own env, and forget the UVC garbage. A plain old vanilla DV environment that is easy to maintain and manage.
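
    To sketch what I mean (uvm_monitor, uvm_scoreboard and the TLM1 analysis classes are real UVM; the my_* names are invented placeholders):

        class my_item extends uvm_sequence_item;
          function new(string name = "my_item");
            super.new(name);
          endfunction
        endclass

        class my_monitor extends uvm_monitor;
          uvm_analysis_port #(my_item) ap;
          function new(string name, uvm_component parent);
            super.new(name, parent);
            ap = new("ap", this);
          endfunction
        endclass

        class my_scoreboard extends uvm_scoreboard;
          uvm_analysis_imp #(my_item, my_scoreboard) imp;
          function new(string name, uvm_component parent);
            super.new(name, parent);
            imp = new("imp", this);
          endfunction
          virtual function void write(my_item t);
            // check t against expectations here
          endfunction
        endclass

        // In the env's connect_phase(), stitch them together directly:
        //   mon.ap.connect(sb.imp);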

    1. I’d like to just use something like C# for verification language.
      My reasons:
      a) it has all the constructs you could possibly want
      b) open-source libraries (Mono is the open-source version of .NET)
      c) open-source IDE and compilers
      d) hundreds of thousands of programmers already know it
      e) vast numbers of books, apps, etc. to learn how to use it, and to aid in using it effectively.
      f) it’s fast.

      Why try to take a language that’s only supported by a few vendors, that costs a lot, has expensive training, and make it have all the features that existing popular free languages already have?

      1. Yep – or maybe Java. Heck, I was comfortable with C++/TestBuilder. TestBuilder was a very thin and powerful library completely written in C++.

        Just like C#, Java has a vast number of libraries, a large talent pool, etc.

        Take a look at JOVE – someone started that project.

        1. Yes, Java would work as well!
          But it has a bad rep as a very slow tool, and “image is reality” in marketing.

          Erik

  9. SVTB is an abomination. Incredibly thankful that I don’t have to wallow in its filth on a daily basis. Sadly, today was not a normal day. Looking forward to the day when ‘e’ is finally donated, and as readily available as Python or C++. All you “UVM-SV coaches” will pretty much be homeless. The fact that that is even a thing speaks volumes about how bad it is.

    1. Based on the SW industry’s transition from C to C++ (and other OOP languages), a lot of HW designers won’t make the jump. So there’ll still be a market for people who “get” OOP. And there will ALWAYS be a market for people who get the bigger picture of testability, reuse, the dramatically increasing cost of bugs as they are caught at higher levels of integration, etc.

      From a practical standpoint: people should do HW/SW cosim with the production SW as soon as possible – simply to flush out misunderstandings and spec issues between HW, SW and system architects. That also greatly reduces the loading on the pure-verification engineers for coding the stimulus sequences and debugging them.
