The reason Larry Cuba could do real-time rendering in 1976 was that he was using a vector graphics display (http://www.cca.org/vector/). In a vector display, there are no pixels and there is no video RAM. Instead, there is a display list of (x, y) pairs (a list of positions on the screen, each with an on/off flag). The controller simply loops through the list over and over: each (x, y) is fed to a pair of digital-to-analog converters, which drive the left/right and up/down deflection of the CRT's electron beam, and the on/off flag turns the beam on and off. In other words, it's just a big oscilloscope, with the signal replaced by a list of numbers. The longer the list, the more time it takes to traverse and draw, the lower the refresh rate, and the greater the flicker.

If you stick to black and white, you don't need a shadow mask to separately illuminate red, green, and blue phosphor dots. Without that mask, you can get some very sharp images.

If Cuba had been using pixels instead, he would have needed megabytes to hold an image. I doubt anyone could afford a megabyte. Moreover, I doubt that in 1976 the electronics were fast enough even to read an image's bytes and turn them into a CRT signal. And that's just displaying the image on the screen. To create the image in the first place, he would have needed to fill in, for each line segment, all the pixels from endpoint to endpoint. There's no way he could have filled that many pixels in real time. But with a vector display, the filling is done by the movement of the electron beam, and costs you zero computation.
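
To make the display-list mechanism concrete, here is a minimal sketch of the kind of refresh loop described above. The register addresses, the list format, and the names are all made up for illustration; this is not the hardware Cuba actually used, just the general shape of a vector display controller.

    /* Sketch of a vector-display refresh loop. Each entry is an
     * (x, y) position plus a beam on/off flag; the controller walks
     * the list forever, writing coordinates to the deflection DACs.
     * Register addresses and struct layout are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint16_t x;        /* horizontal deflection value */
        uint16_t y;        /* vertical deflection value   */
        uint8_t  beam_on;  /* 1 = draw while moving here, 0 = move dark */
    } vec_point;

    /* Hypothetical memory-mapped DAC and beam-control registers. */
    #define DAC_X ((volatile uint16_t *)0xFF00)
    #define DAC_Y ((volatile uint16_t *)0xFF02)
    #define BEAM  ((volatile uint8_t  *)0xFF04)

    void refresh_forever(const vec_point *list, size_t n)
    {
        /* One pass over the list is one refresh of the screen, so a
         * longer list means a lower refresh rate and more flicker. */
        for (;;) {
            for (size_t i = 0; i < n; i++) {
                *BEAM  = list[i].beam_on;
                *DAC_X = list[i].x;   /* analog deflection follows */
                *DAC_Y = list[i].y;
            }
        }
    }

Note that the only per-segment work is writing two coordinates and a flag; the line itself is traced by the beam sweeping between them.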
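For contrast, here is what drawing a single segment costs on a raster framebuffer. This uses a standard Bresenham-style stepping loop against a hypothetical framebuffer (fb, plot_pixel, and the dimensions are invented for the example); the point is simply that the work grows with the number of pixels in the segment, work the vector display gets for free.

    /* Rasterizing one segment into a hypothetical framebuffer: the
     * loop body runs once per pixel, so cost is proportional to the
     * segment's length in pixels. */
    #include <stdlib.h>

    #define W 512
    #define H 512
    static unsigned char fb[H][W];   /* one byte per pixel, for illustration */

    static void plot_pixel(int x, int y)
    {
        if (x >= 0 && x < W && y >= 0 && y < H)
            fb[y][x] = 1;            /* one memory write per pixel */
    }

    void draw_segment(int x0, int y0, int x1, int y1)
    {
        /* Standard Bresenham-style line stepping. */
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;

        for (;;) {
            plot_pixel(x0, y0);
            if (x0 == x1 && y0 == y1)
                break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }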