Let me take you back a few years to my first job as an ASIC verification engineer. It was 2000 and things were a lot different. The notion of “architecting a testbench” didn’t really exist the way it does today. Design was cool and verification was where junior engineers started. Constrained random verification hadn’t hit the mainstream. There wasn’t much functional coverage to speak of. I think Specman and Vera were around but the user base was relatively small. There was no SystemVerilog and there was no UVM. Basically, we were back in the stone age of directed testing. Any knucklehead could do it. Thankfully, I was perfectly qualified.
To test a processor interface back in the stone age we’d first need to build a behavioural model. The processor model was usually quite basic. On one side we’d have pins; on the other side we’d have a user interface with two methods: read and write. If I remember right, every write and read method I ever saw back in the stone age had an address argument and a data argument. If we were well organized, we’d have a long list of macros to define the address space so we could access registers and fields by name. Maybe we’d have some means of injecting protocol errors on the bus or an additional option for burst size but that’s about it.
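If you’ve never seen one, the whole thing boiled down to something like this (a minimal sketch; the bus protocol and every name here are made up, but the shape is right):

```verilog
// A stone-age processor model, boiled down. The bus protocol and all
// names are hypothetical; the point is the two-task user interface on
// one side and pins on the other.
module cpu_model (
  output reg        sel,   // bus select
  output reg        wr,    // 1 = write, 0 = read
  output reg [31:0] addr,
  inout      [31:0] data,
  input             ack,
  input             clk
);
  reg [31:0] wdata;
  reg        drive;

  assign data = drive ? wdata : 32'bz;

  initial begin
    sel = 0;
    drive = 0;
  end

  // the entire user interface: write() and read()
  task write(input [31:0] a, input [31:0] d);
    begin
      @(posedge clk);
      sel <= 1; wr <= 1; addr <= a; wdata <= d; drive <= 1;
      @(posedge clk);
      while (!ack) @(posedge clk);
      sel <= 0; drive <= 0;
    end
  endtask

  task read(input [31:0] a, output [31:0] d);
    begin
      @(posedge clk);
      sel <= 1; wr <= 0; addr <= a;
      @(posedge clk);
      while (!ack) @(posedge clk);
      d = data;
      sel <= 0;
    end
  endtask
endmodule

// registers and fields by name, courtesy of that long list of macros
`define CTRL_REG   32'h0000_0000
`define STATUS_REG 32'h0000_0004
`define IRQ_MASK   32'h0000_0008
`define DMA_CONFIG 32'h0000_000c
```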
Once we were done writing our processor model, we connected it to our DUT in a testbench module and started writing tests (in the stone age there was no class-based environment). A couple weeks would go by and we’d see that the start of the first few tests looked pretty similar. It became obvious that consolidating that long list of writes at the beginning (i.e. your initialization sequence) was a good thing to do to keep things tidy, so we pulled them out and added an initialization task to the testbench. Otherwise, in each test we’d peek and poke through various parts of our DUT using the write and read tasks on our processor model. Practically speaking, there was only a short list of registers we’d interact with frequently. The functionality of the others would probably be verified in their own directed tests (obviously we had to understand what each register did so we could manually predict the outcome).
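A typical test ended up looking something like this (again, register names are hypothetical and cpu is an instance of the model above, sitting in the testbench module):

```verilog
// What the start of every test was converging on: an initialization
// task pulled into the testbench, then straight-line writes and reads
// with manually predicted results.
task init_dut;
  begin
    cpu.write(`CTRL_REG,   32'h0000_0001);  // enable the block
    cpu.write(`IRQ_MASK,   32'hffff_ffff);  // mask all interrupts
    cpu.write(`DMA_CONFIG, 32'h0000_0010);  // default burst size
  end
endtask

task test_dma_config;
  reg [31:0] rdata;
  begin
    init_dut;
    cpu.write(`DMA_CONFIG, 32'h0000_00ff);
    cpu.read(`DMA_CONFIG, rdata);
    if (rdata !== 32'h0000_00ff)
      $display("FAIL: DMA_CONFIG read back %h", rdata);
  end
endtask
```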
The last thing we’d always do was the front-to-back accessibility test. We’d go through the entire address space, writing and reading back every register to make sure they were all connected properly. Sometimes the design would change, which meant our tests would need to change, too (or better, just the register/field macros would change). That was annoying but it happened.
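In code, the accessibility test was about as simple as testing gets (another sketch; the base address, count and stride are made up, and a real version would skip read-only registers and mask special fields):

```verilog
`define REG_BASE 32'h0000_0000

// The front-to-back accessibility test: walk the whole register
// space, write each register, read it back, compare.
task test_register_access;
  integer i;
  reg [31:0] rdata;
  begin
    for (i = 0; i < 64; i = i + 1) begin
      cpu.write(`REG_BASE + (i * 4), 32'ha5a5_a5a5);
      cpu.read(`REG_BASE + (i * 4), rdata);
      if (rdata !== 32'ha5a5_a5a5)
        $display("FAIL: register at %h read back %h",
                 `REG_BASE + (i * 4), rdata);
    end
  end
endtask
```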
Simple testbench; simple processor model with a simple user interface; tests that junior engineers could write and others could understand. Not elegant but adequate.
A lot has changed in the last 14 years.
I just counted 26 different classes in the UVM register package. Add to that the sequence-related classes and you’re at 37. That’s what we use now to write and read registers! It’s elegant for sure. It’s highly portable and it’s designed to serve a universal purpose. But add up the code in the src/reg directory and you get 21,551 lines (I just double-checked that number to make sure I had it right). And that’s just the base classes. After defining the registers in your DUT, obviously you end up with far more.
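For comparison, here’s roughly what the same write and read look like through the register package. It’s a minimal sketch: the register model (dut_reg_block with a dma_config register here) is hypothetical, and this assumes the model, adapter and map have already been generated, built and wired up somewhere else.

```systemverilog
`include "uvm_macros.svh"
import uvm_pkg::*;

// Reading and writing a register through the UVM register layer.
// dut_reg_block and dma_config are hypothetical names standing in for
// a generated register model that's already integrated.
class dma_config_seq extends uvm_sequence;
  `uvm_object_utils(dma_config_seq)

  dut_reg_block model; // handle set by the test before starting

  function new(string name = "dma_config_seq");
    super.new(name);
  endfunction

  task body();
    uvm_status_e   status;
    uvm_reg_data_t rdata;
    model.dma_config.write(status, 'h0000_00ff);
    model.dma_config.read(status, rdata);
  endtask
endclass
```

Two tidy lines to do the access; it’s everything those two lines stand on that I’m talking about here.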
Lots of features; lots of code. On its own that’s not necessarily a bad thing. But the part that does irk me, that I do think is a bad thing, is that the junior engineer who used to build the processor model and lead the testing is out of a job. They’ve been replaced by a senior engineer – sometimes many senior engineers – who construct, automate, connect, integrate and/or customize the driver, sequencer(s) or virtual sequencer(s), register maps, various transactors and agents. Then there are the sequences – usually a base sequence with a list of derived sequences – that’ll be made available to other agents and tests.
None of the above even remotely resembles the stone age write/read API. Worst case, it’s a make-work project that sucks in productive people (it’s not the norm but it does happen). I’m hoping there are other hardware developers out there who see that as a problem.
If you see it as a problem, do yourself a favour… the next time you’re getting started with the UVM register package, go to a quiet room and think about the value you’re about to add. I mean really think about it because you’re about to invest a serious chunk of your time.
Take yourself back to the stone age when we didn’t need 26 different classes to write and read a register (if you’re too young to remember the stone age, ask an old-timer to tell you the stories). We didn’t need to ensure everything we did was universally portable. We didn’t need to randomly juggle the order in which we accessed registers in our accessibility tests. There wasn’t an automatic need to consolidate an accurate image of the memory space and make it R/W accessible to every component in the testbench via all possible bus masters. We didn’t need to hide and randomize the selection of a bus master. We didn’t need functional coverage for every possible situation in every possible register (to account for the fact that we don’t know what our random tests are actually doing). We didn’t need an afternoon and a whiteboard to explain to designers how to access a register in their code. We didn’t need a cookbook or a training class. We could look at a test and understand what’s happening.
Finally, and most importantly, it didn’t take senior engineers to “architect a testbench” that enables all of the stuff that never used to be necessary. We could all do what needed to be done, even the knuckleheaded 2000 version of me.
So? What say you? Are you with me or are you with the UVM register package?