It’s finally time to see if TDD is a viable technique for writing RTL with Verilog. But first, a little backstory…
For the Agile2014 conference in Orlando this past summer, Soheil and I built an Agile hardware/software co-development demo using a Xilinx FPGA with a dual-core ARM Cortex-A9 to show how TDD could be used to write embedded software, drivers and RTL (i.e. TDD of a complete system).
As it stands now, the software and drivers running on the ARM core are complete. We built Conway’s Game of Life with all the supporting software and drivers necessary to create a series of video frames, then DMA those frames to the FPGA.
On the FPGA, we have a pipeline of IP blocks that transfers the software-built video frames to an HDMI connector on the development board. Starting from a video DMA controller, frames travel block to block over a series of AXI streaming interfaces and are ultimately displayed on the monitor connected to the HDMI output.
We don’t show it in the graphic, but we also inserted a module into the IP pipeline that changes the colour of certain pixels before they go to the HDMI controller. For the demo, this was the bare minimum of hardware logic we could build with TDD. That bare minimum was enough to impress a few software people who saw the demo, which was good. Hardware designers, though, would probably have laughed at our TDD-built module… mainly because it was laughably simple and doesn’t come close to proving TDD of RTL is viable.
So that’s where we were as of August 2014… we had a demo that showed TDD is viable for writing embedded software while, in all honesty, we were pretending it was viable on the hardware side.
No more pretending to do TDD of RTL. It’s time for the real thing.
Over the last month, I’ve been hunkered down doing a real proof-of-concept to show that TDD is a viable technique for writing RTL. I’m replacing the trivial module we originally had in the demo with something far more substantial: a module that does some real processing of the video frames as they pass from the video DMA to the HDMI controller. Here’s the initial block diagram and plan I scribbled down a month ago…
Ultimately, the completed design will add a glow of configurable depth around living cells created by the software. The ingress logic pulls frames from the video DMA and pushes them to a memory on the FPGA. The egress logic pulls processed frames from the memory and sends them down the line to the HDMI controller. Between the ingress and egress logic there’s an engine that reads pixels loaded in memory, modifies the pixels to add the glow, then writes them back to the memory.
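To make the engine’s job concrete, the glow can be modeled in software first, which is also a natural fit for TDD: the software model becomes the golden reference the RTL is tested against. Here’s a minimal Python sketch of that idea. To be clear, the frame representation (a 2-D grid where nonzero marks a live cell), the function name, and the glow value are all my own illustration, not the actual design:

```python
def add_glow(frame, depth, glow=1):
    """Return a copy of `frame` with a glow of `depth` pixels around
    every live (nonzero) cell. Chebyshev distance is used, so the
    glow forms a square halo around each cell."""
    rows, cols = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for r in range(rows):
        for c in range(cols):
            if not frame[r][c]:
                continue
            # Mark every dead neighbour within `depth` cells of a live one.
            for dr in range(-depth, depth + 1):
                for dc in range(-depth, depth + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and not frame[rr][cc]:
                        out[rr][cc] = glow
    return out

# A single live cell in a 5x5 frame gets a one-pixel halo around it:
frame = [[0] * 5 for _ in range(5)]
frame[2][2] = 9
glowed = add_glow(frame, depth=1)
```

Making `depth` a parameter of the model mirrors the final goal of a configurable glow, so the same reference can check every step of the plan.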
The focus is TDD. As you can see by the steps I have listed in the plan, though, part of this proof-of-concept is showing that incremental development of RTL is also viable and that refactoring RTL as I go is not only manageable but beneficial. Step 1 ignores the processing in the middle and focuses on the ingress/egress logic (which basically equates to a FIFO). In subsequent steps, I add a narrow outline, then a wider glow, and finally the configurable glow.
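Step 1’s ingress/egress FIFO is a nice illustration of the TDD rhythm: write a small failing test, then just enough logic to make it pass. In RTL that means a unit test driving the module in simulation; as a language-neutral sketch, here is the same first increment modeled in Python. The `FifoModel` class and its interface are my invention for illustration, not the actual RTL:

```python
import unittest


class FifoModel:
    """Behavioural model of the step-1 ingress/egress path:
    pixels go in one end and come out the other, in order."""

    def __init__(self, depth):
        self.depth = depth
        self._q = []

    def push(self, pixel):
        if len(self._q) >= self.depth:
            raise OverflowError("FIFO full")
        self._q.append(pixel)

    def pop(self):
        if not self._q:
            raise IndexError("FIFO empty")
        return self._q.pop(0)


class TestFifoModel(unittest.TestCase):
    # Written *before* the model existed: data must come out
    # in the order it went in.
    def test_preserves_order(self):
        fifo = FifoModel(depth=4)
        for pixel in [10, 20, 30]:
            fifo.push(pixel)
        self.assertEqual([fifo.pop() for _ in range(3)], [10, 20, 30])

    # Second increment: a full FIFO must refuse new data.
    def test_full_fifo_rejects_push(self):
        fifo = FifoModel(depth=1)
        fifo.push(1)
        with self.assertRaises(OverflowError):
            fifo.push(2)
```

Each later step (outline, wider glow, configurable glow) would follow the same pattern: a new failing test first, then the logic, then a refactor while the tests keep everything honest.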
Admittedly, TDD of RTL for a verification engineer is a bit of a stretch… but hey… someone has to do it :).
As of right now, I’ve completed step 1 (the ingress and egress logic) and am working my way through step 2 (a single pixel outline of live cells). I’ve learned quite a bit so far and in the next few weeks, I’ll share some of that.
So far, I’m pleased to say that everything is going quite well :).