As we enter the age of standardized portable stimulus through tool support for PSS 1.0, verification teams will undoubtedly feel the pressure to move into this latest, greatest verification technology. I can certainly feel it. Being new to the tooling, and with the publicity building to a crescendo, I've grown increasingly curious for more information. I assume I'm not the only one.
As such, it feels like the right time to consider a few obvious entry points and ponder the cost and value of jumping in. Given the possibilities, there's no doubt that how teams invest in portable stimulus, and what they get in return, will vary substantially.
Platform Independence
If there's a flagship motivation for adopting portable stimulus, it's to enable platform independence. Platform independence gives teams the option of developing abstract, target-agnostic models and test suites that can be applied to multiple platforms. Build a simulation platform, an emulation platform, an FPGA platform and/or any other miscellaneous platform, and each uses portable stimulus as the front end for running tests and measuring results.
The value of platform independence is tied directly to integrated hardware/software testing (emphasis on software) with obvious intended performance benefits. But as important as performance is for software testing, I hesitate to rate platform independence as high value because I've yet to understand the benefits of running on platforms interchangeably. It seems good in theory, but I can't see teams having multiple high-performance (i.e. non-simulation) platforms. Perhaps teams would target a primary high-performance platform while staying backward compatible with a debug platform (simulation)? Or…? It's also hard for me to imagine a test suite that's entirely portable between a high-performance platform and a simulation platform, given the difficulty of designing the two to be functionally compatible. But just because it doesn't measure up to the ideal doesn't mean there's no value in platform independence; it just means the value will depend heavily on context.
From a feasibility point of view, I see platform independence as a tall order. Organizationally, it will take a highly cohesive hardware/software effort, a tough ask of people who are typically well insulated from each other. The same goes for pre-silicon and post-silicon validation teams.
Technically, there's the critical requirement of a well-conceived API. The API must adequately represent each target platform while remaining loosely coupled to all of them. Loose coupling will require carefully designed trade-offs that prevent baggage from one platform being imposed on another. A tricky part of managing these trade-offs is that they're likely to extend to third-party IP and test equipment providers.
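To make the loose-coupling requirement concrete, here's a minimal sketch in C (every name, register and address is hypothetical): tests program against a narrow table of primitives, and each platform links its own implementation behind that table, so nothing platform-specific leaks into the tests themselves.

```c
/* platform_api.h - hypothetical, minimal platform abstraction.
 * The test layer sees only these primitives; simulation, emulation,
 * FPGA and silicon targets each link their own backend. */
#include <stdint.h>

typedef struct platform_ops {
    uint32_t (*reg_read)(uint64_t addr);
    void     (*reg_write)(uint64_t addr, uint32_t data);
    void     (*wait_us)(uint32_t us);        /* time advances differently per platform */
    int      (*poll_irq)(uint32_t irq_num);  /* 1 if asserted, 0 otherwise */
} platform_ops;

/* A test is written once against the ops table and never
 * touches a platform directly. */
static int dma_smoke_test(const platform_ops *p)
{
    p->reg_write(0x4000, 0x1);   /* hypothetical DMA start */
    p->wait_us(10);
    return (p->reg_read(0x4004) & 0x1) ? 0 : -1;   /* hypothetical done bit */
}
```

The trade-off shows up in how narrow that table has to stay: anything added to serve one platform (simulation-only backdoor access, say) is exactly the kind of baggage that gets imposed on the others.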
Setting aside the difficulties, for large SoCs portable stimulus could be a difficult-but-worth-it proposition. In other words, the value realized by closing the gap between simulation and a high-performance test platform may prove to be well worth the effort.
IP Portability (Next Technology)
IP portability is another prominent starting point for portable stimulus because it re-emphasizes hierarchical reuse between the subsystem and SoC levels. Portable stimulus offers teams a more abstract way to capture stimulus and intent that, ideally, is easier to map vertically into SoC functionality or horizontally into various subsystem configurations. It involves migrating chunks of stimulus, modeling, checking and coverage away from SystemVerilog to a more abstract form using PSS.
I subtitled IP portability as "next technology" because I see it as an evolutionary change for semiconductor teams, given that IP portability has been a steady theme for more than a decade. Teams will adopt portable stimulus similar to how it happened with UVM: apply it opportunistically in greenfield development, and from there it proliferates through a combination of peer pressure and propaganda (if you're thinking "that's how all next technology is adopted", compare the adoption rate of UVM with that of formal verification; usage of formal has grown much more modestly). I expect curiosity in portable stimulus to build steadily within verification circles and the IP portability/next technology adoption model to resonate well, both ideologically and technically. As with UVM, this would be the path that leads to portable stimulus becoming an integral part of how we verify devices.
A more fundamental question than feasibility, however, is whether or not there's big value in a next technology adoption that takes us beyond SystemVerilog. Thus far, my guess is "probably not". I'm old enough to have seen investment in IP portability (i.e. reuse) easily overwhelmed by the overhead and bureaucracy imposed to enable it. Not always, but often. Good intentions over-engineered into VIP with little or no return on the investment. It's time to accept that reuse is not inherently valuable on its own; that it's not directly proportional to value; that it doesn't automatically guarantee efficiency, much less productivity. To be blunt, we continue to assume major benefits from reuse while its only guaranteed characteristic is diminishing returns. That needs to change.
Practically speaking, portable stimulus does give us an opportunity to overhaul our vision of reuse. I'm skeptical it'll come out looking any different, but there is a chance. On that note, for now I'd steer people away from looking at portable stimulus as next technology. Not because I think it's not ready or because I'm a technology laggard, but because its value as next technology is far from clear.
Coverage Acceleration
Accelerated coverage closure is an adoption model that complements current verification technology, namely UVM and constrained random. It focuses on replacing and optimizing complex stimulus generation while relying on current practices for modeling, checking and coverage infrastructure.
Coverage acceleration is a practical, happy-medium approach. I see it as higher value than IP portability because it's motivated by a more tangible outcome. It requires a much smaller investment than either platform independence or IP portability. Finally, it carries much less risk given the narrower scope. And unique to the coverage acceleration model, I don't see it requiring greenfield development, nor would it impose a heavy architectural burden when used to upgrade a legacy testbench.
To that last point, if I were to use portable stimulus right now, my gut tells me to use it for coverage acceleration with an existing VIP. Given that it essentially involves refactoring stimulus generation (simplifying and generalizing existing scenarios by porting the complexity to a more suitable representation), it feels like the right way to get your feet wet.
There it is… something to chew on for those considering portable stimulus just as we start to see tool support for PSS 1.0. From a value perspective, portable stimulus for coverage acceleration feels like the best bang for the buck when it comes to choosing an entry point. It also seems to offer the greatest flexibility in terms of scope, the type of device being targeted and where that device is in its life cycle.
-neil
Hi Neil
Great explanation and I think you are right on. Your points exactly match Breker's experience. It is true that IP reuse and platform independence are worthy goals with significant, long-term benefits. But what we see is that the ability to increase both SoC and UVM coverage is the reason teams initially adopt PSS, and it is this area where we have put a significant level of focus. We do have customers who value your first two points, but only after individual engineers have seen success in their specific project phases. Fortunately, PSS does have a lot to offer, although the tools must provide a complete flow to make its adoption easy. This is the next frontier. Thank you for this blog.
Dave
C/C++ is the most portable stimulus.
Here is a view of C/C++ portability…
Anything that will eventually be sequenced by FW should be sequenced by portable C/C++ at every level of simulation (i.e. co-simulation). That is, the FW APIs that sequence registers in the HW blocks should be verified using C/C++ for the sequences, ideally the same C/C++ "driver" for the block/subsystem that will run on the production chip. C/C++ is portable: it can run on x86 alongside simulators (via DPI), it can run on processor models (ARM, RISC-V, …) in simulators, FPGAs and emulators, it can run on x86 driving emulator transactors, and it can run on x86 to sequence PCIe hosts and other interfaces.
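As a minimal sketch of that idea (the block, registers and addresses below are hypothetical), a driver is written once against two externally supplied register-access functions; only the implementation behind those functions changes per platform.

```c
/* uart_driver.c - hypothetical block driver, written once and reused.
 * On the production chip, regio_write() is a volatile MMIO store; in
 * co-simulation it is a DPI call into the simulator; on an emulator it
 * drives a transactor. The driver itself never knows the difference. */
#include <stdint.h>

extern void     regio_write(uint64_t addr, uint32_t data);  /* per-platform backend */
extern uint32_t regio_read(uint64_t addr);

#define UART_BASE   0x10000000ULL          /* hypothetical register map */
#define UART_CTRL   (UART_BASE + 0x0)
#define UART_STATUS (UART_BASE + 0x4)
#define UART_TXDATA (UART_BASE + 0x8)

void uart_init(uint32_t baud_div)
{
    regio_write(UART_CTRL, (baud_div << 8) | 0x1);   /* divisor + enable bit */
}

int uart_send_byte(uint8_t b)
{
    if (regio_read(UART_STATUS) & 0x1)               /* TX busy */
        return -1;
    regio_write(UART_TXDATA, b);
    return 0;
}
```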
Rather than define RTL/FW transactions in terms of registers, the conversation should move to FW APIs, and the RTL and its C/C++ API should be co-verified. An open-source RISC-V core can be instantiated in every block-level environment to drive the FW sequencing, or some fancy DPI-to-UVM AXI driver magic can be used instead.
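Continuing the sketch above, a hypothetical co-simulation backend for those same two functions might forward to SystemVerilog tasks made callable from C via export "DPI-C" (the task names and the UVM AXI agent behind them are assumptions, not a particular tool's API), while the silicon/FPGA backend is the same two functions written as plain volatile pointer accesses.

```c
/* regio_dpi.c - hypothetical co-simulation backend for the driver above.
 * sv_axi_write()/sv_axi_read() are assumed to be SystemVerilog tasks,
 * made callable from C via 'export "DPI-C"', that drive a UVM AXI agent. */
#include <stdint.h>

extern void sv_axi_write(uint64_t addr, uint32_t data);   /* exported SV task */
extern void sv_axi_read(uint64_t addr, uint32_t *data);   /* exported SV task */

void regio_write(uint64_t addr, uint32_t data)
{
    sv_axi_write(addr, data);    /* blocks while the AXI agent completes the write */
}

uint32_t regio_read(uint64_t addr)
{
    uint32_t data;
    sv_axi_read(addr, &data);
    return data;
}

/* For silicon or FPGA, the backend collapses to direct MMIO, e.g.:
 *   void regio_write(uint64_t addr, uint32_t data) {
 *       *(volatile uint32_t *)(uintptr_t)addr = data;
 *   }
 */
```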
FW and verification take the most resources in developing an SoC. Getting them working correctly in concert, earlier, can have huge benefits for the overall project.