My RTL is Done

Kind of.

I regularly hear that part of the reason designers don’t have time for unit testing RTL is that they’re under extreme pressure to deliver RTL to PD. I have very little experience in this direction, but I think it’s so PD can get on with floorplanning and <whatever it is they do>. The thing about the early RTL drops is that they almost always happen before any meaningful verification is done. They tend to be very buggy, sometimes borderline non-functional, but they must be useful, otherwise the pressure wouldn’t exist.

While I don’t totally understand the reasoning, I do understand the pressure. When a design engineer says there’s no time for unit testing, I shrug my shoulders and sympathize as best I can.

Then I got thinking… what if we could relieve the pressure by giving designers a way to deliver buggy, non-functional RTL to PD faster than ever? With the added breathing room, they could add a few unit tests as they code.

This is where Poser comes in.

Poser has been on my mind for a while, but I haven’t had time to build it until recently. Given a Verilog module declaration and size/flop count estimates, designers can use Poser to generate Verilog modules for PD to use in place of real RTL. The idea is that the poser modules would be enough for early PD work. Then, as real RTL modules are built and tested, the poser modules would be removed and replaced by the real thing. The advantage for PD is that an entire poser design could be ready just minutes after the skeleton architecture, module IO and connectivity are complete. And as I said, the advantage for RTL designers is that the initial RTL drops are taken care of with much less effort.
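
To make that concrete, here’s a rough sketch of the transformation. This is illustrative only: the module, ports and filler logic below are invented, not necessarily what Poser actually emits. The stand-in keeps the designer’s module declaration but fills the body with a shift register sized toward the flop estimate, carrying the inputs through to the outputs so synthesis doesn’t optimize the whole thing away:

    // Declaration supplied by the designer; body is generated filler.
    // (Hypothetical example; not necessarily Poser's actual output.)
    module decode_stage (
      input         clk,
      input         rst_n,
      input  [31:0] in_data,
      output [31:0] out_data
    );
      // 4 stages x 32 bits = 128 flops; the depth would be chosen to
      // approximate the designer's flop estimate.
      reg [31:0] stage0, stage1, stage2, stage3;

      always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
          stage0 <= 32'h0;
          stage1 <= 32'h0;
          stage2 <= 32'h0;
          stage3 <= 32'h0;
        end else begin
          stage0 <= in_data;
          stage1 <= stage0;
          stage2 <= stage1;
          stage3 <= stage2;
        end
      end

      assign out_data = stage3;
    endmodule

Functional correctness is beside the point; the stand-in only needs roughly the right pin list, area and flop count so synthesis and floorplanning have something real to chew on.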

That’s the theory. But frankly, I don’t know whether this is a good idea or a completely ridiculous one. With my almost non-existent design/PD experience, I figure the only way to find out is to build a tool that does it, then wait and see what people who know what they’re doing decide for me ;).

Poser is real as of about an hour ago. You can find it on GitHub at https://github.com/nosnhojn/poser (the README is pasted in below). Though it’s basic and likely inadequate, it does work. Put a module in with size and flop estimates and you’ll get a poser module out. If you’ve got a few minutes, please try it and/or forward this post to anyone else you think may be interested.

More than anything I’ve done in the past, I will need some feedback to decide whether I carry on with this or can it. If you have an opinion – any opinion – please leave it in the comments. Thanks!

-neil


5 thoughts on “My RTL is Done”

  1. It actually is a very good idea, and people do it all the time. If you’re not sure of gate counts, just plop down a set of 32-bit adders, plus the memories you know you’ll need (so you can get some floorplanning congestion going), then some shift registers so that all inputs go to all outputs and your synthesis scripts don’t throw things away (see the sketch at the end of this comment).

    Now if good RTL partitioning is done (data-path separate from control-path), the data-path is the vast majority of the design, and you know what it’s going to be from day 1. It’s the control-path, which is small but highly complex, where all the verification is done.

    This plays well with unit testing, because one can unit test (or use formal) on the control-path FSMs while floorplanning and power folks have a netlist that’s about 95% the right size and has all the expected components.
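
    A minimal sketch of that recipe, with invented module and signal names (a real placeholder would size the adders, memories and register depth to the block’s estimates): a 32-bit adder for gate count, a memory for floorplanning congestion, and registers that fold every input toward every output so synthesis doesn’t prune anything.

        // Hypothetical placeholder block in the spirit of the comment
        // above; names and widths are invented for illustration.
        module dummy_block (
          input         clk,
          input  [31:0] a,
          input  [31:0] b,
          input  [9:0]  addr,
          output [31:0] sum,
          output [31:0] q
        );
          // A 32-bit adder stands in for datapath gates.
          reg [31:0] acc;

          // A 1K x 32 memory so the floorplan sees a macro, not just
          // standard cells.
          reg [31:0] mem [0:1023];
          reg [31:0] rdata;

          always @(posedge clk) begin
            acc       <= a + b;
            mem[addr] <= acc;               // keep the memory's write port live
            rdata     <= mem[addr] ^ a ^ b; // fold all inputs into the output
          end

          assign sum = acc;
          assign q   = rdata;
        endmodule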

  2. Random logic will be pretty useless for any meaningful PD results, even if the std cell area is the same.
    And any hierarchy/name changes for the memory instances will always cause problems in the PD flow.
    Same goes for other parts of the PD flow, e.g. clock trees, power mesh, DFT, timing constraints….

    The status quo of ramming something out that’s 90% there on area but borderline non-functional is better for PD progress.
    There’s a stack of PD work gated on having this available.
    And if I were running the entire project, I’d sacrifice some unit testing at this point in the project for 90%-there area + memory rtl.
    Once the PD flow is up, small incremental RTL changes (aka bug fixes) are relatively cheap.

    1. thx for the comment luke! you see any happy medium between incremental bug fixes being cheap to integrate in pd and the same bug fixes being expensive for functional verification? are there parts of a design that could benefit from early/random rtl? and/or other ideas that get pd off and running without needing the 90% rtl?

      1. Lack of repeatability is my biggest gripe in PD.
        And this exacerbates the “quick, get some rtl out” mentality.

        I’m often astounded at how much trouble PD teams have implementing blocks with only incremental design changes from previous generations.
        And I think 90%+ of teams are working on incremental designs.

        Why isn’t PD in a continuous integration tool?
        Why do I need some highly paid “push the button and mail the error log” engineer to run the tools?
        Why does a one line rtl change require an army of engineers to rebuild the chip?

        Ideal world: everything in the CI pipeline (lint, formal, verification, PD, etc.).
        “git push” to gds out.

        A new feature starts out on a branch where everyone (rtl, verification, PD) works on their piece of the puzzle.
        And when it looks good, we merge it to the main branch, push to the server and go to the pub while the CI tool does its thing.

        Two things are currently stopping this idea from happening:
        1. Rough Edges in the PD Flow
        There are still parts of the PD flow that require a human.
        The biggest one here is the lack of design rule fidelity in the PNR tools, which requires manually feeding DRC errors from the PD verification tools back into the PNR tool.
        There are others, but in general the PD tools/flow have a built-in assumption that someone is there to hold their hand.
        2. PD Engineers
        This idea is totally foreign to the mindset of most PD engineers.
        Most are flat out just using source control, let alone wrapping their heads around the idea of CI and an automated build.

        I totally feel your pain on the poor quality rtl + get PD something NOW mindset.
        But I think with rtl + verification productivity increasing via more software practices coming to the table, we’ve basically built a multi-lane highway (rtl + verification) and joined it up to a one-lane goat track (PD).

        And until we fix that up, we’re only solving half the puzzle.

  3. Every project is different, but I often see PD entry happen prematurely and then have to be redone when a second netlist is dropped with significant RTL changes. Certainly you are not going to be bug-free, but on projects that I manage I usually couple the release of the first netlist with a measurable testing goal, because PD rarely gates my final netlist as much as full functional closure and timing closure do.

    I did PNR personally on one project, about 8 years ago, and it is a fact that the activities are hard to script. I tried for a while with the Cadence tools; I wound up just drawing power routes by hand, even though they were the same with each netlist drop.

    Making dummy modules isn’t really that helpful. I’ve tried doing that to evaluate different vendor libraries and it is almost never a useful predictor of routability and floorplan in my experience.

    The real problem is that projects are structured where “RTL complete” is a meaningless milestone that design teams are driven toward, rather than having a coverage and timing goal for a milestone. As near as I can tell, RTL complete just means that you think you wrote all the state machines, created all the memories, and wired all the interfaces.

    I prefer to have agile goals based on testplan and other targets, usually 2-3 weeks apart.
