
One major limitation during the development of an embedded system, especially for programmers who are used to PCs, is the lack of video output. That’s exactly what Vidout provides, using only 24% of the CPU on an STM32F103.

It would be great to be able, at least temporarily, to add video to a system during development and then remove it when it goes to the field. Unfortunately, creating video output is generally a rather resource-intensive activity, and not one that can normally be patched in for development only. It is also ‘Hard Realtime’, in that the generation of video output must meet tight timing constraints if the display is to look right.

Over the years I’ve built a lot of systems which I’ve claimed to have Realtime characteristics, but even so I’ve never really bumped against the limits of what is possible on a commodity processor. So, with a few evenings free over the Christmas break, I thought I’d see how far I could get implementing a VGA output for debug use on a pretty much bottom-of-the-barrel Cortex-M3: the STM32F103C8-based Bluepill board.

Software-based video output is a hard realtime problem that demands deterministic response. Given that pixel durations are measured in nanoseconds, and any mis-timing is immediately obvious as ‘fizzing’ or deformities in the presented image, this was going to be a major challenge; just the thing for chrimbo. Thus, Vidout was born.

The project objective was simple enough: produce stable video using that board and no (or minimal) additional components. The starting point was the December 2012 blog by Artekit that produced 400×200 bitmapped video using this processor…so it was obviously possible to create some kind of output. A quick read-through of that blog brought a few limitations to light:

- The video output was a bitmap buffer only, which means you need a lot of RAM to store it (10K bytes). This is a heck of a hit on a part with only 20K available in the first place, some of which is presumably already used by your target application!
- It uses three interrupts and relies on the interaction between two timers to generate the precise signalling needed. That’s a lot of resources being used on a constrained part.
- The code is GPL…there’s nothing wrong with GPL, but it can make it difficult to fold the code into projects, especially if you want to be able to leave it in there as ‘sleeper’ code.
- I didn’t write it.
What I really wanted was a character-oriented driver using minimal resources that could be bolted into existing projects for debug and monitoring purposes. More powerful CPUs have been used to create video by exploiting configurable ‘raster generators’ that create the image dynamically on the fly (e.g. by Cliff Biffle with ‘Glitch’). The combination of these two ideas forms the basis of this new implementation.
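To make that concrete before diving in, here’s a minimal sketch of the character-oriented idea. The names and sizes below are illustrative guesses of mine, not the actual Vidout API: the screen is held as a small buffer of character codes plus a font table that can live in flash, and each scan line’s pixels are generated on demand rather than stored.

#include <stdint.h>

/* Illustrative sizes only: 50x25 characters with an 8x8 font gives the
 * same 400x200 pixel resolution as the Artekit bitmap. */
#define COLS   50
#define ROWS   25
#define FONT_H 8

uint8_t charBuffer[ROWS][COLS];               /* 1250 bytes of RAM...    */
extern const uint8_t font8x8[256][FONT_H];    /* ...plus a font in flash */

/* Generate one scan line of pixels on the fly: each output byte is the
 * font slice for one character on the current pixel line. */
void rasterizeLine(uint32_t pixelLine, uint8_t *lineBuf)
{
    uint32_t row   = pixelLine / FONT_H;      /* which character row        */
    uint32_t slice = pixelLine % FONT_H;      /* which row within the glyph */

    for (uint32_t col = 0; col < COLS; col++)
        lineBuf[col] = font8x8[charBuffer[row][col]][slice];
}

A buffer like that costs 1250 bytes against the 10K of a full bitmap, with the font sitting in flash where it costs no RAM at all; that difference is the whole attraction on a 20K part.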

Martin Hinner has collected the VGA standard timing information on his site, and that’s a great resource. Vidout uses the same timing as the Artekit article – the VGA 800×600 output frame, which gives a line period of 28.444µs and a frame rate of 56Hz. With these timings, and with each horizontal pixel ‘doubled up’ to give a horizontal resolution of 400 pixels, each pixel has a duration of around 56ns…if we can’t achieve that level of accuracy then the video will be corrupt. For simplicity, and due in part to constraints set by this specific CPU, the pinout is the same as the Artekit code.
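As a sanity check on those numbers (my own arithmetic from the standard timing tables, not lifted from the Vidout source): the 800×600 at 56Hz mode uses a 36MHz dot clock and 1024 dot clocks per line, which is where the 28.444µs comes from, and assuming the STM32F103 is running at its usual 72MHz that works out to a conveniently whole number of CPU cycles per line.

/* Back-of-envelope timing arithmetic, assuming a 72MHz core clock. */
#define CPU_HZ          72000000UL
#define DOT_CLOCK_HZ    36000000UL   /* VESA 800x600@56Hz pixel clock  */
#define CLOCKS_PER_LINE 1024UL       /* total dot clocks per scan line */

/* 1024 / 36MHz = 28.444us per line, i.e. exactly 2048 CPU cycles
 * (divide first to keep the arithmetic inside 32 bits). */
#define CYCLES_PER_LINE ((CPU_HZ / DOT_CLOCK_HZ) * CLOCKS_PER_LINE)

/* Each of the 400 visible pixels spans two dot clocks: ~55.6ns, or a
 * mere 4 CPU cycles - far too fast to bit-bang, so the actual shifting
 * out has to be left to a hardware peripheral. */
#define PIXEL_NS ((2UL * 1000000000UL) / DOT_CLOCK_HZ)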

Assuming for a moment that we have the source material for an image to be displayed (remember, in Artekit that’s just a static block of memory), there are three distinct tasks to be performed:

- Creating and maintaining the ‘frame protocol’ so the monitor will display the image
- Calculating the pixels to be output for each line of the frame
- Outputting the pixels for each line
In Vidout these tasks are all performed in vidout.c, and we’ll run through them in turn.
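Before digging into vidout.c itself, here’s a rough sketch of how those three tasks might hang together. The function names below are placeholders of mine for orientation only, not the real Vidout code, and it builds on the character/rasterizer sketch from earlier: a timer interrupt fires once per scan line, maintains the frame protocol (VSYNC here; HSYNC is assumed to come ‘for free’ from a timer PWM channel), hands the previously prepared line of pixels to the output hardware, and rasterizes the next line while that one is being clocked out.

#include <stdint.h>

/* Placeholder hooks onto whatever timer/GPIO/SPI/DMA setup is in use */
extern void rasterizeLine(uint32_t pixelLine, uint8_t *lineBuf);
extern void setVsync(int active);
extern void startLineOutput(const uint8_t *pixels, uint32_t len);
extern void clearTimerInterrupt(void);

#define COLS          50     /* bytes of pixel data per line (as above) */
#define VISIBLE_LINES 600    /* lines carrying picture data             */
#define TOTAL_LINES   625    /* plus front porch, VSYNC and back porch  */
#define VSYNC_START   601    /* 600 visible + 1 line of front porch     */
#define VSYNC_LINES   2

static volatile uint32_t currentLine;
static uint8_t lineBuf[2][COLS];   /* double buffer: one going out, one being built */

void TIM2_IRQHandler(void)         /* fires every 28.444us */
{
    /* Task 1: frame protocol - assert VSYNC at the right point in the frame */
    setVsync((currentLine >= VSYNC_START) && (currentLine < VSYNC_START + VSYNC_LINES));

    if (currentLine < VISIBLE_LINES)
    {
        /* Task 3: clock out the line prepared during the previous interrupt */
        startLineOutput(lineBuf[currentLine & 1], COLS);

        /* Task 2: meanwhile build the next line; each of the 200 character
         * pixel lines is repeated on three successive VGA lines */
        if (currentLine + 1 < VISIBLE_LINES)
            rasterizeLine((currentLine + 1) / 3, lineBuf[(currentLine + 1) & 1]);
    }
    else if (currentLine == TOTAL_LINES - 1)
    {
        /* Prepare the first visible line of the next frame during blanking */
        rasterizeLine(0, lineBuf[0]);
    }

    if (++currentLine >= TOTAL_LINES)
        currentLine = 0;

    clearTimerInterrupt();
}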
