Verification That Flows

Writing about portable stimulus has created some neat opportunities for me to swap ideas with others regarding how it fits into our verification paradigm. I like that. From a theoretical standpoint, I think I’ve heard enough to confirm my thought that its sweet spot is verifying feature sets with complex scenarios.

Trickier is moving beyond the theory toward recommendations for how it all fits together. By ‘all’ I mean more than just portable stimulus; I mean how all our verification techniques fit together in a complementary way as part of a start-to-finish verification flow. This is a first attempt at qualifying what ‘all’ looks like. I’ll go through each verification technique, comment on its purpose and how teams transition to and from it, and discuss how it feeds subsequent verification activity.

First, a couple of notes on complexity. I’m using complexity to describe the overall state space size and its likelihood to change, on a scale of low, moderate and high.

  • Low complexity – state space is easily defined and well contained. Number of states measured in the 10’s or 100’s.
  • Moderate complexity – state space is large but still relatively easy to define; likely to evolve over time but not substantially. Number of states measured in the 1000’s to 10,000’s.
  • High complexity – very large state space; likely to evolve substantially over time. Number of states measured in the millions or more.

In my mind, low complexity and very high complexity designs are fairly easy to characterize from the start. The boundary between moderate and high complexity is where there’s the most ambiguity. Designs in that range require ongoing scrutiny to ensure an optimal approach.

Relative to complexity, I suggest whether a technique is suitable for covering state space exhaustively or within reason. ‘Exhaustively’ means everything, no surprise there. ‘Within a reasonable range of test stimulus’ means a technique will get you a good start but may not be appropriate for going the distance because its effectiveness tapers off.

Unit Testing

To an overwhelming extent, the line-by-line quality of a design and its test infrastructure determines how quickly – or slowly – a team works through the state space on its way to release. Unit testing is best for verifying the lowest level details because it gives the finest level of control. Individual modules or components are the target for unit testing, with the focus of each test being – as a general guideline – a few lines of code at most. It is a test-heavy technique requiring minimal infrastructure. Low-level code quality is the primary focus so that individual units can support more complex activity. Porting infrastructure to other environments may be done opportunistically, but reuse of unit test infrastructure is not a specific goal. Unit testing is not recommended beyond sanitizing basic interactions.


Purpose

  • Isolate and verify low-level design or testbench intent

Typical Bugs Eliminated

  • Obvious typos and omissions
  • Invalid/incomplete expressions
  • Bad variable assignments
  • Wrong/missing state transitions
  • Invalid arithmetic calculations
  • Type sizing/mismatches

Infrastructure Carried Forward

  • None
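In this flow a unit test would typically live in SystemVerilog (SVUnit being a common choice), but the idea is language-agnostic. Here is a minimal sketch in Python, with an invented saturating-counter model standing in for the "few lines of code" a single unit test should target:

```python
# Hypothetical example: unit tests pinning down one low-level detail each.
# The SatCounter design and its behaviour are invented for illustration.
class SatCounter:
    """A small saturating counter -- the kind of unit a unit test targets."""
    def __init__(self, max_value=3):
        self.max_value = max_value
        self.value = 0

    def increment(self):
        # Saturate instead of wrapping -- an easy spot for an off-by-one bug.
        if self.value < self.max_value:
            self.value += 1

    def decrement(self):
        # Floor at zero rather than going negative.
        if self.value > 0:
            self.value -= 1


def test_counter_saturates_at_max():
    c = SatCounter(max_value=3)
    for _ in range(10):
        c.increment()
    assert c.value == 3  # would catch a wrap-around typo


def test_counter_floors_at_zero():
    c = SatCounter(max_value=3)
    c.decrement()
    assert c.value == 0  # would catch a bad comparison or sign error
```

Note how each test exercises one branch of one method; that's the fine-grained control that makes unit testing effective against typos, bad assignments and wrong transitions.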

Directed Testing

Directed testing is widely applicable, but as a primary test technique it is best applied to combinations of transitions and simple transaction-level interactions. The primary scope of directed tests is modules to subsystems, covering complex functions and simple features. Directed testing likely extends to integration and interoperability scenarios as a complement to higher level test techniques. Directed testing is also a test-heavy technique, though infrastructure should be developed with portability in mind. Part of the directed testing deliverable should be a transaction-based API that can be leveraged with constrained random and portable stimulus. Directed testing should also involve the development of transaction-level checking and monitors.


Purpose

  • Exhaustively verify low complexity features
  • Verify moderate complexity features within a reasonable range of test stimulus

Typical Bugs Eliminated

  • Configuration issues
  • Data and control path discontinuities
  • Basic protocol violations
  • Implementation specific lockups, overflows and underflows

Infrastructure Carried Forward

  • Configuration utilities
  • Transaction
  • Transaction-level driver and monitor
  • DUT model and scoreboard
  • Protocol checkers
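The transaction-based API that directed testing delivers might look like the following sketch (Python for brevity; the Transaction, LoopbackDut and Driver names are invented for illustration, not from any particular methodology):

```python
# Hypothetical sketch of a transaction-based stimulus API built during
# directed testing. The DUT here is a trivial loopback memory model.
from dataclasses import dataclass
from typing import List


@dataclass
class Transaction:
    """Transaction-level abstraction: what gets reused by later techniques."""
    addr: int
    data: int


class LoopbackDut:
    """Stand-in DUT model: stores writes, returns them on read."""
    def __init__(self):
        self.mem = {}

    def write(self, addr, data):
        self.mem[addr] = data

    def read(self, addr):
        return self.mem.get(addr, 0)


class Driver:
    """Transaction-level driver: a reusable piece carried forward."""
    def __init__(self, dut):
        self.dut = dut
        self.sent: List[Transaction] = []  # monitor-style log for checking

    def send(self, txn: Transaction):
        self.dut.write(txn.addr, txn.data)
        self.sent.append(txn)


def directed_write_read_test(driver, dut):
    """A directed test: fixed stimulus, explicit expected values."""
    for txn in [Transaction(0x0, 0xA5), Transaction(0x4, 0x5A)]:
        driver.send(txn)
    assert dut.read(0x0) == 0xA5
    assert dut.read(0x4) == 0x5A
```

The directed test itself is throwaway-cheap; the Transaction and Driver abstractions are the deliverable that constrained random and portable stimulus build on.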

Constrained Random

Application of constrained random tests represents the tipping point between verifying individual features and verifying entire feature sets. Design and testbench quality must be high enough to support complete areas of the state space and meaningful coverage runs (i.e. a team has high confidence that unit tests and directed tests have adequately verified all expected behaviour, such that exhaustive coverage of feature sets is a practical goal). The focus of constrained random is primarily subsystems. Constrained random tests depend on the transaction-based stimulus API and self-checking developed as part of the directed test infrastructure. Constrained random tests may be used for sanitizing integrated subsystems, but should be limited to low variation scenarios due to inefficiencies in randomized stimulus. Productivity of constrained random tests absolutely depends on low-level code quality, thus it is not recommended for low-level design details.


Purpose

  • Exhaustively verify moderate complexity features and integrated feature sets
  • Verify high complexity features and integrated feature sets within a reasonable range of test stimulus

Typical Bugs Eliminated

  • Incomplete handling of randomized configuration
  • Incomplete handling of randomized data and control path
  • Architecture related lockups, overflows and underflows
  • Unexpected interdependencies between concurrent paths/threads

Infrastructure Carried Forward

  • Transaction library
  • Configuration library
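Constrained random layers randomization on top of that same transaction API, with a scoreboard supplying the self-checking. A hypothetical sketch (names and constraints invented; a real testbench would express these as SystemVerilog constraints):

```python
# Hypothetical constrained random sketch: seeded randomization within legal
# constraints, plus a predict-then-compare scoreboard.
import random
from dataclasses import dataclass


@dataclass
class Transaction:
    addr: int
    data: int


def randomize_txn(rng, addr_max=0xFF, data_width=8):
    """Constrained randomization: word-aligned addresses, bus-sized data."""
    addr = rng.randrange(0, addr_max + 1, 4)   # constraint: 4-byte aligned
    data = rng.getrandbits(data_width)         # constraint: fits the bus
    return Transaction(addr, data)


class Scoreboard:
    """Self-checking carried over from directed testing: predict, then compare."""
    def __init__(self):
        self.expected = {}

    def predict(self, txn):
        self.expected[txn.addr] = txn.data

    def check(self, addr, actual):
        assert self.expected[addr] == actual, f"mismatch at {addr:#x}"


def constrained_random_test(seed=1, n=100):
    rng = random.Random(seed)   # seeded so failures are reproducible
    mem, sb = {}, Scoreboard()
    for _ in range(n):
        txn = randomize_txn(rng)
        sb.predict(txn)
        mem[txn.addr] = txn.data   # stand-in for driving the real DUT
    for addr, data in mem.items():
        sb.check(addr, data)
```

Seeding the generator is the design choice worth noting: the point of constrained random is breadth across the state space, but any failure must be replayable from its seed.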

Portable Stimulus

Portable stimulus in simulation is best suited to covering high complexity features and integrated feature sets in a way that is more efficient than directed or constrained random testing. The focus of portable stimulus in simulation is large/complex subsystems or SoC level. As is the case with constrained random, portable stimulus depends on the transaction-based stimulus API and self-checking developed as part of the directed test infrastructure. It may or may not depend on simple scenarios developed in constrained random testing.

Ideally, portable stimulus in simulation continues until performance restrictions are apparent at which time tests may be retargeted to an emulator or some other high performance platform. Retargeting obviously requires a reimplementation of the stimulus API and self-checking.


Purpose

  • Exhaustively verify complex end-to-end subsystem features
  • Exhaustively verify entire feature sets and interaction between features

Typical Bugs Eliminated

  • Same as constrained random

Infrastructure Carried Forward

  • Scenario library
  • Retargeted transaction API
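Portable stimulus tools model tests as graphs of actions with scheduling constraints, so that one scenario description yields many legal concrete tests. A toy illustration of that idea (the action names and graph are invented, and real tools solve far richer constraints than this):

```python
# Hypothetical, much-simplified sketch of the portable stimulus idea:
# declare actions and their dependencies once, then generate many legal
# concrete orderings from the one scenario description.
import random

# Each action maps to the actions that must complete before it can run.
SCENARIO = {
    "configure_dma":  [],
    "load_buffer":    ["configure_dma"],
    "start_transfer": ["configure_dma", "load_buffer"],
    "check_status":   ["start_transfer"],
}


def legal_schedule(scenario, rng):
    """Randomly pick any ordering that respects the dependencies."""
    done, order = set(), []
    pending = set(scenario)
    while pending:
        # An action is ready once all of its dependencies have completed.
        ready = [a for a in pending if all(d in done for d in scenario[a])]
        choice = rng.choice(sorted(ready))
        order.append(choice)
        done.add(choice)
        pending.remove(choice)
    return order
```

Because the scenario is declarative, the same description can be retargeted: in simulation each action maps onto the transaction API, while on an emulator it maps onto a reimplemented API, which is exactly the retargeting step described above.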

Integrated HW/SW Testing

Integrated HW/SW testing focuses on verifying that an SoC supports user-facing use cases under normal operation by running real-life scenarios. The focus of a use case covers both hardware and software. Integrated HW/SW testing may happen in simulation, in emulation, on an FPGA platform or on some other appropriate platform. There should be 100% confidence in associated hardware functionality before integrated testing starts. Similarly, software and device drivers developed off target should also be high quality.


Purpose

  • Exhaustively verify SoC-level use cases

Typical Bugs Eliminated

  • Hardware/software incompatibilities
  • Performance limitations
  • Usage model defects

Infrastructure Carried Forward

  • None

Tie It Together

That’s a lot to remember, so I figured a table summarizing each technique would be a handy reference. I’d be interested to hear suggestions for how we can make this table more useful, especially the typical bugs. Personally, I think typical bugs are a great indicator of whether we’re missing or misusing a particular technique.

As always, I’d be happy to hear your thoughts for improvement!


One thought on “Verification That Flows”

  1. Hi Neil,
    You provide a great overview of verification. At some point in the future, it would be great to have a similar article on the role of test throughout the life cycle and how these relate to each other: verification, validation, evaluation. The latter is interesting because of the recent IEEE document Ethically Aligned Design Version II, which recommends ongoing evaluation of A/IS, particularly for any ethical transgressions after release into the market. To do this well clearly implies that all test data and artifacts be retained for transparency reasons. The people skills to modify the tests will also have to be retained somehow in order to do so efficiently and competently. With use cases multiplying rapidly, the test problem is simply getting bigger and more complex. We will not only need better tools, but better methodologies as well to adjust to all levels of abstraction. Just a thought!
