A few times over the last while, people have suggested I add a mocking framework to SVUnit. It took me a while, but I finally got around to it. SVMock version 0.1 is now available on GitHub. I’m actively working on it, but the base functionality is there and ready for use.
If you’re new to mocking, in the context of unit testing it’s a technique for replacing dependencies of some unit-under-test to make it easier to test. Mocks increase isolation so you can focus on development without the hassle of including large chunks of code, infrastructure or IP you don’t necessarily care about. Mocks inject new sample points and assertions that make it easier to capture interactions with surrounding code. They also offer increased control by substituting potentially complex dependency usage models with direct control over interactions. In short, mocking helps you focus on what you care about and ignore the stuff you don’t.
Now to SVMock…
SVMock is a lightweight mocking framework for use with SVUnit. It’s modelled after GoogleMock, a framework software developers use to develop C/C++ code in GoogleTest. I’ve used GoogleMock with GoogleTest in the past and found it quite useful, which is why I’m using it as a reference. Considering SystemVerilog borrows a lot of language structures from C++, it seems like a more natural fit than the other frameworks I’m familiar with.
This initial release of SVMock supports the following features:
- function and task mocking within a SystemVerilog class
- user specified expectations for the number of times a class method is called (or not called)
- user specified expectations for argument values when a method is called
- overriding function return values
- remapping class methods to other user defined methods
- method arguments of any type
To illustrate these features, I think it’s best to jump into a code example that shows everything in detail. For reference, this is an example that’s included in the release package so feel free to download it and follow along (I’ll point you to it at the end of this post).
In the example, we’re testing class bedrock. You can see that class bedrock depends on an instance of flintstones, specifically the methods dino, bam_bam and pebbles. Since our immediate focus is bedrock, we’re going to use a mock to test the interactions with flintstones without understanding the behaviour of flintstones.
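The original post showed the bedrock class at this point. A minimal sketch consistent with the description might look like the following; note the handle name `ff`, the method `yabba_dabba_do`, its `int` argument and the argument arithmetic are all illustrative assumptions, not taken from the release:

```systemverilog
class bedrock;
  // the dependency we'll later replace with a mock
  flintstones ff;

  // illustrative only: calls dino once, derives pebbles' arguments
  // from its own input, then passes pebbles' return value to bam_bam
  function void yabba_dabba_do(int val);
    int p;
    ff.dino();
    p = ff.pebbles(val + 1, val + 2);
    ff.bam_bam(p);
  endfunction
endclass
```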
This is what flintstones looks like. For now you’ll have to pretend the comments represent large swaths of complicated code that’s yet to be written!
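A sketch of what that flintstones class could look like (the argument types and counts are assumptions chosen to match the expectations shown later in the post). One real SystemVerilog requirement worth noting: the methods need to be declared `virtual` so the mock’s overrides take effect through a base-class handle.

```systemverilog
class flintstones;
  virtual function void dino();
    // pretend: a large swath of complicated code, yet to be written
  endfunction

  virtual function int pebbles(int a, int b);
    // pretend: more complicated code, yet to be written
    return 0;
  endfunction

  virtual function void bam_bam(int a);
    // pretend: still more complicated code, yet to be written
  endfunction
endclass
```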
To build a mock of flintstones, which we’ll call mock_flintstones, we’re interested in taking direct control of dino, pebbles and bam_bam so we can capture how those methods are being used in bedrock and override functionality if necessary. To do that, we’ll define new methods for each in a new class derived from flintstones.
The SVMOCK/SVMOCK_END macros encapsulate the secret sauce required under the hood to tie things together within the mock class. Then there’s a macro for each method we want to replace: SVMOCK_VFUNC&lt;N&gt; is used for void functions with &lt;N&gt; input args and SVMOCK_FUNC&lt;N&gt; for functions with a non-void return and &lt;N&gt; input args (not shown here is SVMOCK_TASK&lt;N&gt; for tasks with &lt;N&gt; input args). For the methods we’re replacing, you’ll see the inputs to the macro line up with the name and type of each input argument.
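Putting that together, a sketch of the mock class could look like this. The macro names come from the post itself, but the exact argument order inside each macro (return type first, then type/name pairs) is my assumption; the README is the authoritative reference for the real signatures.

```systemverilog
class mock_flintstones extends flintstones;
  `SVMOCK(mock_flintstones, flintstones)

  // dino: void function, no input args
  `SVMOCK_VFUNC0(dino)

  // pebbles: int return, two int input args
  `SVMOCK_FUNC2(pebbles, int, int, a, int, b)

  // bam_bam: void function, one int input arg
  `SVMOCK_VFUNC1(bam_bam, int, a)

  `SVMOCK_END
endclass
```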
Assuming you’re familiar with SVUnit, the next step is to create a new unit test template for bedrock so we can begin testing. That happens as you normally do using create_unit_test.pl. A difference is that in the template we need to plumb in the mock in place of the real flintstones.
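The plumbing in the template amounts to substituting the mock for the real dependency in build(). A trimmed sketch, following the usual SVUnit template shape (the handle names `my_bedrock`, `mock` and the member `ff` are assumptions; setup/teardown and the test section are omitted for brevity):

```systemverilog
module bedrock_unit_test;
  import svunit_pkg::svunit_testcase;
  `include "svunit_defines.svh"

  string name = "bedrock_ut";
  svunit_testcase svunit_ut;

  bedrock my_bedrock;
  mock_flintstones mock;

  function void build();
    svunit_ut = new(name);
    my_bedrock = new();
    mock = new();
    // plumb the mock in place of the real flintstones
    my_bedrock.ff = mock;
  endfunction
endmodule
```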
That’s it for creating and connecting a mock. To give you a feel for the checking you get with the mock_flintstones, we can use EXPECT_CALL to set expectations for how many times a method is called. For example, we can make sure dino() is called exactly once when yabba_dabba_do is called.
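A sketch of what that test might look like. EXPECT_CALL is named in the post, but the chained call-count syntax below is a hypothetical rendering (the test and handle names are assumptions too); consult the README for the real form.

```systemverilog
`SVTEST(dino_called_once)
  // hypothetical syntax: expect exactly one call to dino
  `EXPECT_CALL(mock, dino).exactly(1);

  my_bedrock.yabba_dabba_do(5);
`SVTEST_END
```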
We can also use an EXPECT_CALL to verify a method is called with the right arguments. So if it’s important that the input arguments to pebbles are derived from the input to yabba_dabba_do, as they are in bedrock, we can use with_args to verify that’s the case.
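Sketching that argument check: with_args is named in the post, but the exact syntax and the derived values (6 and 7 from an input of 5) are assumptions that match the illustrative bedrock sketch above.

```systemverilog
`SVTEST(pebbles_gets_derived_args)
  // hypothetical syntax: pebbles should see arguments derived
  // from yabba_dabba_do's input of 5
  `EXPECT_CALL(mock, pebbles).with_args(6, 7);

  my_bedrock.yabba_dabba_do(5);
`SVTEST_END
```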
The EXPECT_CALL is an example of how we can use the mock for checking without changing any behaviour. Another option is to use ON_CALL to override the behaviour of flintstones. In the simplest case, we could use an ON_CALL returns to set the return value of pebbles to a known value (i.e. 99). We can then use that known value in an EXPECT_CALL with_args for bam_bam.
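That combination might be sketched as follows; ON_CALL, returns and with_args are the post’s names, while the chained syntax is again a hypothetical rendering.

```systemverilog
`SVTEST(pebbles_return_feeds_bam_bam)
  // hypothetical syntax: force pebbles to return the known value 99...
  `ON_CALL(mock, pebbles).returns(99);
  // ...then verify bam_bam is called with that same known value
  `EXPECT_CALL(mock, bam_bam).with_args(99);

  my_bedrock.yabba_dabba_do(5);
`SVTEST_END
```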
That’s the basics for setting expectations and injecting behaviour using the mock. Now for my favourite, favourite feature of SVMock: overriding the behaviour of a method so that it does something totally different from the original. We can do that by creating a method mapping in the mock, then choosing the new method as the default behaviour.
For example, let’s add a new method to our mock_flintstones called mr_slate and map it to dino.
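A sketch of what that addition to the mock class could look like. The mapping macro name `SVMOCK_MAP` is hypothetical (only the idea of a method mapping appears in the post); mr_slate’s body is whatever test behaviour we want.

```systemverilog
class mock_flintstones extends flintstones;
  `SVMOCK(mock_flintstones, flintstones)
  `SVMOCK_VFUNC0(dino)

  // hypothetical mapping macro: register mr_slate as a stand-in for dino
  `SVMOCK_MAP(dino, mr_slate)

  function void mr_slate();
    // anything we want: test-specific behaviour in place of dino
    $display("mr_slate stepping in for dino");
  endfunction

  `SVMOCK_END
endclass
```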
Then in a test where mr_slate gives us the extra functionality not available in dino, we can use ON_CALL will_by_default to select mr_slate in place of dino.
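In a test, that selection might be sketched like this; will_by_default is the post’s name, and how the replacement method is referenced is an assumption.

```systemverilog
`SVTEST(mr_slate_replaces_dino)
  // hypothetical syntax: make mr_slate the default behaviour for dino
  `ON_CALL(mock, dino).will_by_default(mr_slate);

  my_bedrock.yabba_dabba_do(5);  // dino calls now execute mr_slate
`SVTEST_END
```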
I can put anything I want in mr_slate. So in effect I can add test behaviour in place of dino that does whatever I want. Like I said, this is my favourite feature of SVMock. I think the ability to easily inject test behaviour is really powerful.
So in a nutshell, we can define an SVMock to add checking with EXPECT_CALL or inject behaviour using ON_CALL. There are more options for both, but this should be enough to give you a feel for what you can do with the framework. I did skip a few steps here, but all the gory details are available for anyone who wants to get started with SVMock.
For a more thorough explanation of how to use SVMock, check out the user guide on GitHub (i.e. the README.md); it’s a detailed, step-by-step set of instructions. The bedrock example is packaged with SVMock in examples/class/bedrock. Download SVMock, make sure you have SVUnit installed, then go to bedrock and run it, tinker a bit, get a feel for how it works, then give it a shot with some of your own code.
As always, it’s great to hear opinions. You can get me at neil.johnson@agilesoc.com. Any feedback is good feedback so whatever your opinion is I’ll be happy to hear it. I’ll be writing more about it in the coming weeks and adding features as I go (the first request has been to expand the logging, that’s currently top of my todo list).
Happy mocking!
-neil
This is… simply amazing…
So much functionality, in such a compact form.
What do you think of linking in UCDB, so that the `EXPECT_CALL` could indicate that a particular requirement (or list of requirements) had been verified (or failed)? Per-instance checking might be required, because the test-case knows the setup conditions, and a covergroup or assertion might not have all available information.
thx erik. “simply amazing” is a bit strong but I appreciate the endorsement nonetheless ;). I think there is quite a lot crammed into those macros and I think I managed to keep the usage model relatively clean.
what you’re suggesting could probably be done. if you wanted to file a github issue I’d take a look at that at some point. after we get a few users give this basic version a nod, I expect it’ll be natural to move on to more advanced features like this.
You may want to take a look at the CoverageLens open source tool for querying a coverage database: https://www.amiq.com/consulting/blog/?tag=CoverageLens
Hey Neil,
I like this, and will use it.