How many times have you sat through a conference talk or tutorial where an AE from one of the big three EDA companies announces a new version of their simulator, and the first follow-up question from the audience is how much faster is this version? 2x… 10x… 100x… how much faster?
Of course the response always promises a speedup, but it’s also always hard to quantify. It comes with caveats (e.g. “this is what we’ve seen… you may see different”) and no one is ever quite sure what kind of code was used in the benchmarking. I’ve never had a simulation runtime improve from 10 seconds to 1 second with a new simulator version – maybe other people have seen that, but I haven’t – so I’ve never paid much attention to the speedup numbers. Call me skeptical.
In contrast, here’s something I have seen… in UVM-UTest we have 541 unit tests that take 14.119 seconds to blow through. Yes, 541 tests in 14 seconds. These are unit tests that run sequentially within the SVUnit framework against UVM, using one simulator license on one machine, and they’ve been used to find real defects. If you want to run them yourself, you can. Just follow the instructions on the UVM-UTest Getting Started page.
Another example that I have seen… in verifying some UVM-based testbench IP with a client last year, I ended up with 116 unit tests running within SVUnit that took roughly 12 seconds to complete. Not 12 hours or even minutes, 12 seconds. Again, one license and one machine.
As for what others have seen, I asked users of SVUnit whether or not they’re getting similar results. Here’s what one had to say…
OK, so one of my suites has 83 tests, and ran in 35 seconds.
BUT: a lot of those tests are testing a configuration of a packet (think Ethernet type of thing). Where a single test is for one configuration, and it has many nested FOR loops, walking 1s through all possible fields in that particular configuration, and for each bit:
- run a requirements check on the packet to verify the generator doesn’t create illegal packets
- do a pack/unpack/compare to verify the pack and unpack methods
So if I was to expand those out into individual tests, you’d be at hundreds.
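As a rough illustration of the walking-1s pattern the user describes, a single SVUnit test covering one configuration might look something like the sketch below. The `SVTEST`/`SVTEST_END` and `FAIL_UNLESS` macros are real SVUnit constructs, but the packet class and its methods (`num_fields`, `field_width`, `set_field_bit`, `is_legal`, `pack`, `unpack`, `compare`) are hypothetical names invented for this sketch, not the user’s actual code.

```systemverilog
// Hypothetical sketch only: one SVUnit test that walks a single 1 bit
// through every field of one packet configuration. The my_packet class
// and all of its methods are illustrative assumptions.
`SVTEST(walk_ones_through_fields)
  my_packet tx, rx;
  tx = new();
  rx = new();
  for (int f = 0; f < tx.num_fields(); f++) begin
    for (int b = 0; b < tx.field_width(f); b++) begin
      tx.clear();
      tx.set_field_bit(f, b);       // walk a 1 through this field
      `FAIL_UNLESS(tx.is_legal());  // requirements check on the generator
      rx.unpack(tx.pack());         // round-trip through pack/unpack
      `FAIL_UNLESS(rx.compare(tx)); // verify nothing was lost in transit
    end
  end
`SVTEST_END
```

Written this way, one `SVTEST` stands in for what could be hundreds of individual tests, which is why the suite’s test count understates its coverage.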
That’s 83 tests with a lot of repeated functionality in 35 seconds. Quite good, no? I’ve got emails out to a couple other people, so hopefully I can follow up with more examples.
Update from an SVUnit user: Sure. Currently I have 29 tests of UVM components that run in 13 seconds. Using VCS 2011.12, it takes 11 seconds to compile, and 2 seconds to run.
Update from another SVUnit user: Let’s see what I’ve got… A suite of unit tests for a pretty beefy uvm_monitor: 35 tests, 25 seconds (fresh compile, 7 seconds on a re-compile).
I also have a simple, single unit test for a hardware module (no UVM involved) that runs in 1.8 seconds on a fresh compile, 0.6 seconds on a re-compile. That one blew my teammates’ minds. They thought it took flexlm at least 0.6 seconds just to check out a license 🙂
I’m guessing that for most people reading this, going from sims that take minutes to sims that take seconds would be quite an upgrade. Assuming that’s the case, the next time you’re waiting several minutes (or an hour… or more) for a simulation to run, think about how you’ve written your tests instead of waiting for EDA vendors to deliver a faster simulator. Fast sims don’t come from fast simulators; they come from fast tests. If you want fast tests, take a look at what’s possible with SVUnit. The results can be stunning but there’s really nothing magical about it. Fast tests run fast, that’s really all there is to it.
PS: thanks to the lads that brought up sim times as a major benefit of using SVUnit. We had a good chat yesterday about how we could help people get started with unit testing and the benefit of fast sim times has been something I’ve overlooked. I guess I’ve gotten used to it :).
2 thoughts on “Fast Sims Don’t Come From Fast Simulators”
Great post Neil!
It is often missed that with SVUnit tests you don’t have DUT & VE simulator reloads between tests. Instead, everything is loaded once and each test goes through the “setup – execute – verify – teardown” unit test phases.
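The load-once, setup-per-test flow can be sketched as follows. The `svunit_ut.setup()`/`svunit_ut.teardown()` calls come from SVUnit’s generated test templates, but the `my_monitor` unit under test is a hypothetical name used only for illustration.

```systemverilog
// Sketch of SVUnit's per-test structure: the simulator loads the testbench
// once, then setup()/teardown() run around each individual test, so state
// is rebuilt in SystemVerilog rather than by reloading the simulator.
// my_monitor is an illustrative placeholder for the unit under test.
my_monitor uut;

function void setup();
  svunit_ut.setup();
  uut = new();   // fresh object state for every test, no simulator reload
endfunction

function void teardown();
  svunit_ut.teardown();
  uut = null;    // discard per-test state before the next test's setup
endfunction
```

Because construction of a class object costs microseconds while a simulator reload costs seconds, this structure is where most of the speedup comes from.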
Having tests that execute quickly is a huge productivity gain. A semi-comprehensive post-commit regression that executes hundreds of tests in a couple of minutes is the norm in software development but is revolutionary in hardware development.
Another statistic – our cocotb functional regression compiles the sample Verilog and runs 25 tests in under a second!