TDD: Verification with SVUnit

Introduction

In my last post, I gave a brief overview of the SVUnit framework and its usage model. This post looks at how to use the framework in your workflow: how it can be integrated into your existing verification methodology to help you develop new verification components and extend existing environments.

At this point, I’ll state that the key determinant of a successful introduction is attitude. The developers must see merit in using this approach, or at least be willing to try it with an open mind; otherwise, it will very likely not succeed.

Adding TDD to your New Project / Verification Component

Neil has discussed the value of the TDD methodology in previous posts, and has shown that adopting it in a new project is straightforward. The scripts in this framework make adopting TDD even easier by automatically generating code that is immediately usable. The following are some recommendations on a workflow that we’ve found useful:

  • Create the class header files with only the class definition (i.e., no implementation). Don’t worry about getting the class perfect; the intent here is to use the scripts to generate the scaffolding code required for the unit test framework.
  • Use the provided scripts to generate your unit test case files and corresponding test suites.
  • For each function and task in the system, create a unit test case (derived from the svunit_testcase class). The focus while writing these unit tests is to define the usage model for your new verification component, i.e., how each task/function is to be used in your environment. Do this iteratively, one task/function at a time, building from a logical start (e.g., following the UVM phase flow, or starting from the public tasks/functions and following their call flow); a sketch of a generated test file appears after this list.
  • Add just enough code to each task/function to get its test case to pass, debugging each task/function individually to ensure it is correct, as per the usual TDD “way”.
  • Aggregate the test cases into meaningful test suites, again using the scripts provided.
  • Similarly, aggregate the test suites into the test runner, derived from the svunit_testrunner class (again using the scripts provided).
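
To make the flow concrete, here is a minimal sketch of what a generated unit test file looks like once a first test has been filled in. The class under test (a hypothetical packet_builder with a build_packet() function) is invented for illustration, and the exact scaffolding may differ slightly depending on your SVUnit version:

    import svunit_pkg::*;
    `include "svunit_defines.svh"

    module packet_builder_unit_test;
      string name = "packet_builder_ut";
      svunit_testcase svunit_ut;

      packet_builder uut;   // the unit under test (hypothetical class)

      // Construct the testcase and the unit under test
      function void build();
        svunit_ut = new(name);
        uut = new();
      endfunction

      // setup()/teardown() run before/after every test
      task setup();
        svunit_ut.setup();
      endtask

      task teardown();
        svunit_ut.teardown();
      endtask

      `SVUNIT_TESTS_BEGIN

        // One test per task/function: capture the intended usage model,
        // then write just enough implementation to make it pass
        `SVTEST(build_packet_returns_nonzero_length)
          `FAIL_UNLESS(uut.build_packet() > 0)
        `SVTEST_END

      `SVUNIT_TESTS_END
    endmodule

Each `SVTEST block is a single, focused check; the `FAIL_UNLESS/`FAIL_IF macros report failures through the framework rather than aborting the simulation.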

At this point, you have several very handy things:

  • a working, debugged initial version of your class;
  • an example usage model for new users of that class, captured in the unit tests themselves for your colleagues to read;
  • test code that ensures, when you add or change functionality in the class (using TDD), that the code continues to behave as your users expect; and
  • test code that can be run periodically to confirm your code still works when classes it depends on are updated, i.e., you’ll see when someone else’s change affects your code.

From the comments I’ve received, some of you are already doing TDD with your own frameworks. Please feel free to add your best practices to this conversation.

Using SVUnit for Existing Classes

When using the SVUnit framework on an existing library of verification components, you can adopt either an incremental or a full approach. The incremental approach uses the scripts to generate the initial unit tests (all of which would be blank) and then adds the unit tests individually as required (e.g., when the corresponding task/function is being updated), as sketched below. The full approach generates and defines the unit tests for all tasks/functions immediately.
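
As a sketch of the incremental approach, the generated file contains empty test placeholders, and you fill one in only when you touch the corresponding task/function. The scoreboard class and its methods below are hypothetical:

    // Between `SVUNIT_TESTS_BEGIN and `SVUNIT_TESTS_END in the
    // script-generated scoreboard_unit_test.sv (names are hypothetical)

    // Placeholder generated by the script; left empty until needed
    `SVTEST(add_expected_test)
    `SVTEST_END

    // Filled in while updating compare(): pin down the expected behaviour
    `SVTEST(compare_detects_mismatch)
      uut.add_expected(8'h5a);
      `FAIL_IF(uut.compare(8'ha5))   // a mismatch must not report a match
    `SVTEST_END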

Which Classes to Pick

Ideally, TDD should be used for every class in your verification component. You will likely come to this conclusion after you’ve used TDD for a while, but adopting TDD strategically at first is still of great value. That is, cherry-pick the most complicated classes, or base classes that are used across multiple environments or projects; these are clearly the classes to start with.

Some classes do present a minor complication. For example, drivers/BFMs need a SystemVerilog interface or RTL to drive and respond to. When the RTL your driver talks to is not available, using TDD on a driver is still doable: add some small tasks to your interface, or create a small piece of RTL to replicate a source/sink for your driver, as sketched below. However, if the RTL is ready to be used, creating a driver using the TDD methodology gives you a very simple mechanism to drive stimulus into your RTL. This is a very low-overhead test harness that you can give to your RTL designer to use while you build out the rest of your environment.
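
As a rough sketch of that idea, suppose a hypothetical driver transmits bytes over a simple valid/ready handshake. A small task added to the interface can stand in for the missing RTL sink; all names here are invented for illustration:

    // Handshake interface with a helper task that lets a unit test
    // act as the sink in place of the missing RTL (names are illustrative)
    interface simple_if (input logic clk);
      logic       valid;
      logic       ready;
      logic [7:0] data;

      // Sink stub: raise ready, wait for a valid beat, capture the data
      task automatic accept_one (output logic [7:0] captured);
        ready <= 1'b1;
        do @(posedge clk); while (!valid);
        captured = data;
        ready <= 1'b0;
      endtask
    endinterface

In the driver’s unit test, the stub then pairs with the task under test, e.g. forking uut.send_byte(8'hA5) against vif.accept_one(got) and checking got with `FAIL_UNLESS.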

When to Run Unit Tests

The last key element is to identify when in your process it is best to run the unit tests. Ideally, every developer should run the tests on a regular basis to determine whether they have introduced any issues. The places where the test runner script can be added to your process are as follows (in order of preference):

  • The best approach is to run the entire test suite prior to committing code to source code management; any issues must be dealt with prior to check-in. The TDD tests for verification components are usually very quick (typically less than a couple of minutes, with most of that time taken up by compilation/elaboration, since the tests do not use any RTL code).
  • Another good approach is to add it to the commit sequence of your source code control system. That is, the check-in of any code is allowed to proceed ONLY if the test runner script passes. This assumes that you have a wrapper script around your source code “commit” or “check-in” command.
  • If you use a “sanitizer” script for your source code, i.e., one where a code snapshot is qualified as being “good” (in agile parlance, a continuous integration flow), the test runner can be added there. Such a script usually runs a sanity regression; if the regression passes, all the code revisions that make up the snapshot are labeled as “sane”. Adding the test runner to this script, prior to running the sanity regression, is another possibility.
  • Alternatively, the test runner script can be run manually prior to starting a regression. The decision to treat a failure from the test runner script as either a “warning” (allowing the regression to proceed while the cause of the failure is investigated) or a “fatal” (where the regression is not allowed to proceed until the test failure is fixed) is up to the project’s regression management team. (I’d vote for the fatal, but that’s just me. :-)
  • Lastly, the test runner can be run as a cron job on a regular schedule, taking the latest “valid” snapshot of the source code and reporting any failures. The key downside of this approach is that someone then has to isolate whose update caused the failure. However, this is not usually difficult, since the granularity of the tests is so small.

My next post will look at two things: how TDD can be used by RTL designers, and some future directions for the TDD framework. From the comments received to date, some folks out there are already doing TDD and are already discovering its value. We, as a group of professionals, need to evolve TDD to work effectively in a design and verification flow. Give it a try and join the discussion!
