So this is one of those posts where, after a short conversation with a colleague, something jumps into my head and I end up asking myself: I wonder if this makes sense? The idea has to do with formal verification, which is not my area of expertise, so I figured the best thing for me to do is just get it out there. From there, real experts can discuss whether or not it makes sense (or maybe it’s something experts already do, in which case I’m late to the party and would appreciate somebody straightening me out :)).
I’ve dedicated much of my time to TDD. In a hardware context, TDD is a new approach to dynamic simulation. Start with one test, ensure it fails, write the corresponding implementation, ensure the test passes, and repeat until done. In diagram form…
TDD is catching on, slowly but surely, and I think it’s only a matter of time until it’s recognized as a viable alternative to the big bang write-all-the-rtl-then-test-it approach people generally use now. You all probably understand by now that I could go on about TDD for a while but don’t worry, that’s not why we’re here :). The question I’ll ask in this post is could something TDD-like be applied using formal verification?
I’ve never done any serious formal verification beyond a few tool demos and evaluations. I have used SVA though. I think I’m also aware of some of the issues behind the limitations of formal verification. Correct me if I’m wrong, but…
- practically speaking, formal tools are limited by the size of the solution space. If the solution space is huge, the tool will crunch and crunch and crunch on a design forever or give up without a result.
- assertion languages are horribly cryptic and very difficult to understand.
I’m making an assumption here, but because of its limitations, formal verification is not widely used (relative to dynamic simulation). In effect, it’s still waiting for its big break. To address the limitations above, formal verification tends to be promoted for proving block-level control logic, which limits the solution space. As well, companies that sell formal tools include/enable assertion library plugins to make writing assertions easier. They also include a few automated checks that require no user intervention of any kind to catch some of the low-hanging fruit.
That’s the end of what I think I know about formal. No… it’s not much. But…
My idea… which again, maybe someone has already thought of… is to suggest an approach around formal verification that is similar to how TDD works with dynamic simulation. Call it property-driven verification, if you will.
The development flow would be identical to what’s done in TDD except that the confirmation mechanism relies on properties and proofs instead of functional tests and dynamic simulation.
Instead of writing a test first, you write a property first. Instead of ensuring the test fails, you have your formal tool disprove the property (i.e. produce a counterexample). You still build your minimum supporting implementation, same as in TDD, then you feed it through the formal tool and the tool proves the property true. Repeat until done.
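To make one cycle of that loop concrete, here’s a minimal sketch in Python standing in for both the RTL and the formal tool. Everything here is hypothetical: the “design” is just a next-state function, the “property” is a safety check, and the “proof” is an exhaustive search of the reachable states (which is, loosely, what a formal tool does for small designs). A real flow would use SVA properties and a commercial formal engine instead.

```python
from collections import deque

def check(property_fn, next_states, init):
    """Exhaustively explore reachable states; return a counterexample state or None."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not property_fn(s):
            return s                      # property disproved: counterexample found
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None                           # property holds on all reachable states

# Step 1: write the property first -- a toy counter must never exceed 5.
prop = lambda count: count <= 5

# Step 2: a deliberately wrong stub (free-running wrap at 8) disproves it...
stub = lambda c: [(c + 1) % 8]
assert check(prop, stub, 0) is not None   # counterexample, as expected

# Step 3: the minimum supporting implementation (saturate at 5) makes the proof pass.
impl = lambda c: [min(c + 1, 5)]
assert check(prop, impl, 0) is None       # property proven; repeat with the next property
```

The point of the sketch is the rhythm, not the checker: property first, watch it get disproved, write just enough implementation to make the proof go through, repeat.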
If you’re looking for a picture, here it is:
Why bother suggesting this idea?
If solution space is an issue for formal tools, this is a way to limit the solution space to the smallest size possible and then have it grow by the smallest increment possible. Sure, it may eventually become unmanageable, but it won’t start that way, and I’m guessing it’ll stay manageable for a very long time. The same goes for the complexity of properties. You don’t start with complex properties; you start with extremely simple properties and add more simple properties in every cycle. Beyond that, I’d expect the same benefits I’ve found with TDD and dynamic simulation (i.e. limiting over-engineered code, immediate/continuous feedback and higher quality code, among others).
Like I said, I’m no formal expert so this is as far as I’ll take this.
If you’re reading this and you are a formal expert, why don’t you think about it a little more. Then try it and see what happens! I’d be interested to hear whether or not it makes formal verification any more accessible than it is now.
Disclaimer: property-driven development is already ‘a thing’ but not in the context of hardware verification and formal tools (if you google property-driven development you’ll get lots of references). I’m just using the term that seems to fit. If you take the time to try it out, you can call it whatever you want 😉
9 thoughts on “Property-Driven Development in Hardware”
I’m not a formal expert, although I have used formal tools.
I have been thinking of this for a while now and combined with a continuous integration / build server it could be a massive productivity increase. It is the hardware equivalent of having a unit test suite.
In formal there are three types of properties that can be described – something you want to happen, something you don’t want to happen, and what is called fairness. A property describing something you don’t want to happen would initially pass, and you hope it never fails. Also, a property does not necessarily relate to a single feature. Think of it this way: in a simulation you trace a path through the design and you want to make sure you get the right result. With a property you start from an arbitrary state and define how you move to other states, or which states you should not be able to reach. You then look for ways to prove or disprove that.
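The first two flavours can be sketched on a toy design. The arbiter below is a made-up Python model (not SVA, and fairness is omitted entirely); the exhaustive loop over all four input combinations plays the role of a proof for this trivially small state space.

```python
def step(req, busy):
    """One cycle of a hypothetical arbiter: grant only when requested and not busy."""
    gnt = req and not busy
    return gnt

# "Something you want to happen": a request while idle is granted this cycle.
assert step(req=True, busy=False) is True

# "Something you don't want to happen": never grant without a request.
# The input space is tiny, so checking every combination is effectively a proof.
for req in (False, True):
    for busy in (False, True):
        gnt = step(req, busy)
        assert not (gnt and not req)   # safety: gnt implies req
```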
Brian, thanks for the comment! It seems what I’m proposing is a proactive way of describing the “something you want to happen” concurrently with implementation of the design. I’d expect the (loose) correlation between property and feature would be similar to that between unit test and feature in TDD, though it’d take some practice to see if that’s actually true. Hopefully someone has the gumption to give it a shot.
Thanks for reading!
Hi, Neil. Let me preface this comment by stating that I represent a formal verification company, Jasper Design Automation.
This is a very interesting discussion. It captures a trend we are seeing among many of our customers in which verification is driven back “upstream” into the design process. The nature of formal makes it extremely effective at catching bugs early, and for unit level testing it has proven very effective. A single property can go a long way towards helping a designer flush out bugs – or better yet, avoid inserting them in the first place.
Mike Bartley of TVS wrote a great vendor-agnostic paper about the pros and cons of formal deployment. I don’t see it on their website, but it’s a very good insight into how leading-edge formal users such as ARM are deploying Jasper.
One of the most interesting aspects of the paper is defining a “taxonomy” of formal deployment. What they propose is:
Bug Avoidance (designers using formal to eliminate bugs and reduce spec ambiguity)
Bug Hunting (systematic use of properties to uncover bugs at system/subsystem level)
Bug Absence (full proofs of critical behaviour at system/subsystem level)
Bug Analysis (isolating and verifying fixes for late-phase corner-case bugs)
The bug avoidance deployment model is very close to what you describe, and has a very rapid RoI.
And, yes, we would assert (ahem) that Jasper’s technology ameliorates many of the historic drawbacks of formal methods.
I think this is the kind of approach that many with experience in formal property checking would advocate as a very good way to carry out unit level verification. Whether the implementation or the property exists first in the ‘PDD’ cycle probably doesn’t matter too much, as long as the cycles are tight.
As Brian says, properties do not necessarily relate to features. One approach is to have a single property describe a single transaction (e.g. the movement of data from input to output of the design – along with any transformations on the data, and adherence to protocols, of course). Since the development of the implementation is likely to move from simple transactions to more complex ones (adding various sideband outputs along the way), so would the development of the properties.
As well as a very robust unit, you would also end up with
– a set of assertions reflecting any assumptions made in the design (e.g. on input behaviour)
– and potentially a set of functional coverage points reflecting interesting interactions between the unit and surrounding units
to feed up into your top-level verification.
So I think it’s an idea with plenty of potential.
Neil, I think the breakthrough of PDD will come with the broad adoption of formal verification by designers. Once they start to replace directed smoke-testing with this unit testing approach, we’ll get there! I’ve been trying to change this mindset for 2 years with limited success, but I keep moving.
I’m also counting on BDD to tackle it from the requirements/feature side, but I guess this will need 2-3 more years to develop.
Neil, TDD (using simulation or Formal) is just unit-testing implemented different ways.
I use Jasper, and I’ve used it exactly as you describe: code assertions, then write RTL.
I found it brings the same values as writing your unit-tests before you write RTL: it forces you to think about what the Requirements are, before you worry about the implementation.
Something I’m working on right now is a way to capture setup/expected-result conditions as part of requirements docs, and then detect when the setup conditions are met and whether the expected results pass or fail. In very large test suites, as requirements are added or changed, an existing test often already creates all the setup conditions but just doesn’t verify an expected output. You can save a huge amount of effort by not writing new tests (or by finding one that meets 3 of 4 setup conditions and extending from it).
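A rough sketch of that cross-check, with entirely made-up requirement and test records: each requirement lists its setup conditions and the result it should verify, each existing test records which conditions it creates and which results it actually checks, and the scan reports tests that already set up a requirement but never check its expected result.

```python
# Hypothetical requirement records: setup conditions plus the expected-result check.
requirements = {
    "REQ-1": {"setup": {"reset_done", "fifo_full"}, "check": "overflow_flag"},
    "REQ-2": {"setup": {"reset_done", "fifo_empty"}, "check": "underflow_flag"},
}

# Hypothetical existing tests: conditions they create and results they verify.
tests = {
    "test_fill":  {"creates": {"reset_done", "fifo_full"},  "verifies": {"overflow_flag"}},
    "test_drain": {"creates": {"reset_done", "fifo_empty"}, "verifies": set()},
}

# Flag tests whose created conditions cover a requirement's setup
# but which never verify that requirement's expected result.
for rid, req in requirements.items():
    for tname, t in tests.items():
        if req["setup"] <= t["creates"] and req["check"] not in t["verifies"]:
            print(f"{tname} sets up {rid} but never checks {req['check']}")
```

In this toy dataset the scan flags `test_drain`: it already creates all of REQ-2’s setup conditions, so extending it to check `underflow_flag` is cheaper than writing a new test.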
The expense of formal verification tools prevents us from turning them over to designers to use in a TDD flow, but adding properties at design time that can be formally verified later is very valuable. We are currently using formal to verify the designers’ own assertions and finding several bugs in the process.
Here’s how this might look in a TDD flow:
INSIDE the loop:
1. Write a unit test AND a property for the new feature
2. Verify in simulation that both the unit test and the property fail
3. Implement the new code
4. Verify in simulation that both the unit test and the property pass
OUTSIDE the loop:
Run formal verification at the unit level once the tool is available
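The pairing in step 1 can be sketched as follows, using a Python function as a stand-in for the RTL (the feature and all names are invented for illustration): the unit test drives one directed stimulus, while the property is a universally quantified check, here discharged by brute force over the full input space, that a formal tool could later take on outside the loop.

```python
def saturating_add(a, b, limit=255):
    """Device under test: hypothetical 8-bit saturating adder."""
    return min(a + b, limit)

# Unit test (directed, one stimulus) -- what simulation runs inside the loop.
assert saturating_add(200, 100) == 255

# Property (for all inputs, the result never exceeds the limit) -- what formal
# proves outside the loop; here brute force over 8-bit inputs stands in for a proof.
for a in range(256):
    for b in range(256):
        assert saturating_add(a, b) <= 255
```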
Here is a link to a paper I wrote recently that is much related to this topic, in case you want to have a look.
Hope you enjoy it.