Here’s the scene: you’re a hardware engineer at a conference sitting in on a talk about functional coverage. You’re there because you think functional coverage is important. You think you do a good job of building functional coverage groups but the title of the talk suggests otherwise. The speaker takes the stage…
Hi everyone. It’s good to see you all here.
Before I get started I just want to take a quick poll. By show of hands, how many of you are using constrained random verification? Right… quite a few people… that’s what I figured.
Ok… keep your hand up if you use functional coverage to measure whether or not you’ve finished your verification effort.
Alright… I see a few dropping but most of you have still got your hands up… that’s a good sign since we all know results from constrained random tests aren’t overly useful without functional coverage.
Ok… next… does anyone here test their coverage groups? I mean, does anyone take the time to verify that their coverage groups, the actual code, are correct? Anyone? I see a lot of hands dropping… not a good sign.
Last… for the people who just dropped their hands, I’ve got a question: should you trust your functional coverage model, probably the most important code you write, if you haven’t gone to the trouble of making sure it’s correct?
Hopefully not.
A fully populated functional coverage model has become a pretty important component of determining whether you’ve sufficiently traveled the design state space. It tells you that you’ve done all that needs to be done by observing all that needs to be observed. The coverage model is the benchmark by which DONE is measured, which is the way it should be… unless your coverage model is wrong (i.e. it’s a defect-ridden pile of unverified code).
Luckily, writing unit tests with SVUnit is a great way to verify your coverage model is correct. Here’s how.
To start, testing your coverage model depends on an “obscure” built-in method for coverpoints and covergroups (I say “obscure” not because it isn’t important, but because I doubt many people actually use it). The get_inst_coverage() method is what we’re after; it returns a coverage percentage for a particular covergroup or coverpoint. For example, if you have a coverpoint with 2 bins and you sample one of them, get_inst_coverage() returns 50. Sample both and you get 100. Sample neither and you get 0.
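To make this concrete, here’s the kind of covergroup the tests below will poke at. It’s a sketch: the names (address_cg, addr_min_cp) and the pass-by-argument sample() are mine, not from any particular testbench.

covergroup address_cg with function sample(bit [31:0] addr);
  option.per_instance = 1; // needed for get_inst_coverage() (more on this in the updates below)
  addr_min_cp : coverpoint addr {
    bins addr_0 = {'h0};
    bins addr_4 = {'h4};
  }
endgroup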
Here are three unit tests that illustrate what I’m talking about. We have a coverpoint called addr_min_cp and it’s observing two addresses: ‘0’ and ‘4’. The first test confirms that when we sample addr_min_cp with ‘0’ we’ve hit 50%. The second test confirms that ‘4’ hits the other 50%. The third test, where we combine ‘0’ and ‘4’, verifies that addr_min_cp reaches 100%.
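In SVUnit form, the tests could look like this (a sketch: they’d sit between the SVUNIT_TESTS_BEGIN/SVUNIT_TESTS_END macros of a testcase, with a fresh address_cg instance called cg new()’ed in setup() so every test starts from 0% coverage):

`SVTEST(addr_min_cp_covers_addr_0)
  cg.sample('h0);
  `FAIL_UNLESS(cg.addr_min_cp.get_inst_coverage() == 50);
`SVTEST_END

`SVTEST(addr_min_cp_covers_addr_4)
  cg.sample('h4);
  `FAIL_UNLESS(cg.addr_min_cp.get_inst_coverage() == 50);
`SVTEST_END

`SVTEST(addr_min_cp_covers_addr_0_and_addr_4)
  cg.sample('h0);
  cg.sample('h4);
  `FAIL_UNLESS(cg.addr_min_cp.get_inst_coverage() == 100);
`SVTEST_END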
Easy peasy.
With addr_min_cp observing interaction with the low end of the address space, we can add an addr_max_cp to observe interaction with the upper end. Here are three more tests we can use to verify that our addr_max_cp is behaving as we expect.
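Assume for this sketch that addr_max_cp mirrors addr_min_cp at the top of the address range, with bins at `ADDR_MAX-4 and `ADDR_MAX (we’ll come back to that macro). Note that the raw addresses are hard-coded in the tests rather than computed from the macro; keeping the expectation independent of the code under test is what pays off shortly.

// assumed coverpoint, with `ADDR_MAX defined elsewhere as 'hfff:
//   addr_max_cp : coverpoint addr {
//     bins addr_max_minus_4 = {`ADDR_MAX-4};
//     bins addr_max         = {`ADDR_MAX};
//   }

`SVTEST(addr_max_cp_covers_addr_max_minus_4)
  cg.sample('hffb);
  `FAIL_UNLESS(cg.addr_max_cp.get_inst_coverage() == 50);
`SVTEST_END

`SVTEST(addr_max_cp_covers_addr_max)
  cg.sample('hfff);
  `FAIL_UNLESS(cg.addr_max_cp.get_inst_coverage() == 50);
`SVTEST_END

`SVTEST(addr_max_cp_covers_both)
  cg.sample('hffb);
  cg.sample('hfff);
  `FAIL_UNLESS(cg.addr_max_cp.get_inst_coverage() == 100);
`SVTEST_END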
Now how about a few equally spaced bins between min and max, the pattern verification engineers often use to confirm we’ve plucked an acceptable number of data points from throughout the address space. Here’s how we’d loop through 16 data points, verifying the coverage score along the way.
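Again a sketch, assuming a coverpoint with 16 equal bins across the full range (names are illustrative):

// assumed coverpoint:
//   addr_cp : coverpoint addr {
//     bins addr_bins [16] = {[0:`ADDR_MAX]};
//   }

`SVTEST(addr_cp_covers_16_equally_spaced_points)
  for (int i = 0; i < 16; i++) begin
    cg.sample(i * ((`ADDR_MAX + 1) / 16)); // one address from each bin
    `FAIL_UNLESS(cg.addr_cp.get_inst_coverage() == (i + 1) * 100.0 / 16);
  end
`SVTEST_END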
Important to point out from that snippet is that get_inst_coverage() returns a real equal to ‘N * 100.0/16’ once N bins have been hit, which means we need to compare against an expected value computed as a real or we find ourselves with a failing unit test.
Now the coverpoints in those examples are pretty basic, right? Who hasn’t implemented the pattern of MIN, MAX and a few in between? Anybody? And if we look at the code, there aren’t too many ways to screw that up…
…and yet there are ways to screw up just about anything, aren’t there, even basic coverpoints. Let’s say you define `ADDR_MAX in some other file as follows (the value here is illustrative):
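`define ADDR_MAX 'hfff // illustrative value: a 4K address space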
Nothing wrong with that, until somebody tweaks the definition for a test they’re writing and mistakenly changes that line to this, without understanding the implications:
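`define ADDR_MAX 'hff // shrunk to suit one test… and every `ADDR_MAX bin in the coverage model just moved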
Now, as I suggest in the title, your coverage model is wrong.
What happens from here? Well, if you have unit tests, changing that line wouldn’t be a big deal because failing unit tests would flag the problem and the person that made the change could go back and fix it. Without unit tests, however, the chances that this defect gets shipped to a customer – via a product with an inadequately exercised address space – are quite high. And what happens when your customer comes to you and says they can’t reach the upper end of the address space?
Oh… sorry. It looks like we had a bug in our coverage model so we didn’t see that. But don’t worry, it was an easy one. It’s fixed now.
Sure it’s easy to fix, though how many times can you get away with “oh… sorry”? That depends on your customer and your track record. A couple times might not be bad; several times will be embarrassing; one too many can be catastrophic.
For as simple as it is to test your coverage groups with SVUnit, and for as critical as they are to determining progress and design coverage, I’d recommend erring on the side of caution.
-neil
NOTE: As of writing this, the built-in get_inst_coverage() method looks like it’s only properly supported in Questa. I’m using version 10.1c_1. As of VCS version G-2012.09 and Incisive version 12.10-s008, get_inst_coverage() was not supported by Synopsys or Cadence. If you’re using newer versions, you’ll want to see for yourself whether anything has changed. If you see a version of either that works, please let me know and I’ll post an update.
UPDATE: You’ll see in the comments that Cadence also supports the get_inst_coverage() method but you need to use the ‘-coverage <string>’ option on the command line (which I didn’t originally have). That’s confirmed for versions 12.10-s007 and 12.10-s008.
UPDATE: Count Synopsys in as well now, as of at least VCS version G-2012.09. I had an AE help me out (he easily saw what I was forgetting). With the “option.per_instance = 1” setting in your covergroup, get_inst_coverage() returns a valid result. Without it, you’ll see that it returns 0 in all cases. All of Synopsys, Mentor and Cadence support unit testing covergroups!
Interesting thought. I usually try to verify my covergroups with some directed code or even a micro test case, but a formal methodology of verifying them may be the real way to go.
Looks like most of the LRM is supported in at least ncsim 12.10-s007. Here’s a trivial test case to show it.
ncsim> run
aaa_cp 20.000000
aaa_X_bbb 0.000000

module top;

class doit;
  rand int aaa;
  rand int bbb;

  constraint aaa_c1 {
    aaa > 0;
    aaa < 5;
  }

  constraint bbb_c1 {
    bbb > 0;
    bbb < 5;
  }

  covergroup ab_cg;
    option.per_instance = 1;
    aaa_cp: coverpoint aaa {
      bins aaa_bins [5] = {[1:5]};
    }
    bbb_cp: coverpoint bbb {
      bins bbb_bins [5] = {[1:5]};
    }
    aaa_X_bbb : cross aaa_cp, bbb_cp;
  endgroup

  function new();
    ab_cg = new();
  endfunction
endclass

doit idoit = new();

initial begin
  //idoit.randomize() with {aaa == 3;}; // Let's just go directed
  idoit.aaa = 1;
  idoit.ab_cg.sample();
  $display("aaa_cp %f", idoit.ab_cg.aaa_cp.get_inst_coverage());
  $display("aaa_X_bbb %f", idoit.ab_cg.aaa_X_bbb.get_inst_coverage());
  idoit.bbb = 1;
  idoit.ab_cg.sample();
  $display("bbb_cp %f", idoit.ab_cg.bbb_cp.get_inst_coverage());
  idoit.bbb = 2;
  idoit.ab_cg.sample();
  $display("bbb_cp %f", idoit.ab_cg.bbb_cp.get_inst_coverage());
  $display("aaa_X_bbb %f", idoit.ab_cg.aaa_X_bbb.get_inst_coverage());
end

endmodule
Todd, thanks for the example! After a little poking around, I found that I need the ‘-coverage’ option on the command line. Without that, no stats are stored. With it, I now see get_inst_coverage() working as I’d hoped. My mistake there I guess. Nice to add Incisive to the list of build-your-coverage-model-with-tdd enabling simulators!
-neil
In my haste, I typo’ed this. The bbb constraint should read:

constraint bbb_c1 {
  bbb > 0;
  bbb <= 5;
}