It can easily be argued that computer gaming has fueled the ever-increasing performance of personal computers and graphics cards for several years now. No office user needs one of Intel's new 3 gigahertz (3 billion clock cycles a second) Pentium 4 processors or an NVidia N34-based video card, but 3D gamers are drooling!
In the realm of 3D games, the number of frames per second (FPS) which can be rendered (drawn) is a function of the speed of the processor, memory, and graphics card. Once the frame rate reaches about 90 FPS, the monitor can't keep up, so the gamer instead increases the display resolution, improving the quality of the images. Higher frame rates and resolutions give gamers a definite advantage over opponents using less capable "kit".
And while the advance of CPUs is always amazing (at the beginning of the PC industry, in the early 80s, processor speeds were measured in single-digit megahertz), it is the advance of Graphics Processing Units (GPUs) that is truly stunning. And quite recent: in 1997 there was only one serious player in the 3D GPU market, 3dfx; now there are several, such as NVidia, ATI and Matrox. (Ironically, 3dfx is no longer in business.)
Being highly specialized devices, GPUs have gotten very good at what they do: render 3D images at ever higher speeds, resolutions and levels of detail. While the first 3D cards could only manage rendering low-complexity images at 640 by 480 pixels (picture elements) at 30 frames per second, today's cards can handle scenes with highly detailed geometry at 1600 by 1200 pixels at 90 FPS.
The realistic look of 3D rendered scenes is greatly enhanced by the use of textures: a flat polygon is painted with a 2D image to increase its apparent complexity, tricking the eye into believing that the surface is made of, for example, brick or metal (or blood). The first generation of cards had only a single texture unit; modern cards have up to six. This allows much more interesting and complex images, as multiple textures can be applied in a single pass to simulate effects such as dirt and lighting.
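To make the idea concrete, here is a minimal sketch (in Python, not any actual card's API) of what multitexturing does: several 2D images are sampled at the same spot on a surface and combined, pixel by pixel, the way multiple texture units would combine them in one pass. The texture values and the modulate-style blending rule are illustrative assumptions, not a real card's pipeline.

```python
# Illustrative multitexturing sketch: a base brick texture is
# modulated by a light map, then darkened by a dirt map, per pixel.

def combine_texels(base, light, dirt):
    """Combine one sample from each of three textures.

    Each argument is an (r, g, b) tuple with components in 0..255.
    The light and dirt samples act as per-channel scale factors,
    which is how fixed-function "modulate" texture blending works.
    """
    return tuple(
        min(255, int(b * (l / 255) * (d / 255)))
        for b, l, d in zip(base, light, dirt)
    )

brick = (180, 80, 60)     # base texture sample (reddish brick)
light = (255, 255, 200)   # light map sample (slightly warm light)
dirt  = (200, 200, 200)   # dirt map sample (uniformly darkens)

print(combine_texels(brick, light, dirt))  # → (141, 62, 36)
```

The point is that all three lookups happen for every pixel of the polygon, which is why having several texture units on the card matters: without them, each extra texture would cost another full rendering pass.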
But it doesn't stop there. The next generation of 3D video cards, just now coming to market, have what are called pixel shaders: tiny programs (currently of up to 22 instructions) which are executed for each pixel on a polygon. This ability allows a whole new class of effects to be created, and brings to real-time computer rendering what has previously only been possible with the off-line rendering traditionally used for movie and TV effects.
Most of the computer rendering seen in movies, and a great deal of that on TV, is done with a system known as RenderMan, created at Lucasfilm, whose computer division was later spun off as the company Pixar. RenderMan is actually a description language; RenderMan is to 3D as PostScript is to 2D. The actual images are created with a RenderMan renderer, such as Pixar's own, built on the REYES architecture (Renders Everything You Ever Saw).
Anyone who's seen a movie in the last 10 years has seen RenderMan effects. It was responsible for the dinosaurs in Jurassic Park and the mouse in Stuart Little. Entire movies are now created with RenderMan, such as Toy Story (the first movie to be completely generated by computer), Toy Story 2 and A Bug's Life.
The magic of RenderMan, the reason it can be used for creating movie effects, is its very powerful shaders. When the renderer determines that a polygon appears on screen, one or more shader programs are run for each pixel that polygon covers. These shaders, written in a very C-like language, determine how the pixels look (their color, brightness, etc.) and return this information to the renderer to composite into the final image. For a feature film, hundreds of shaders may be called millions of times; it's not fast, but the results are good enough for the big screen.
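A toy example may help. Real RenderMan shaders are written in the RenderMan Shading Language, but the sketch below (in Python, purely for illustration) shows the shape of the idea: a small function the renderer calls once per covered pixel, given the surface's orientation and the light's direction, returning that pixel's color. The specific shading rule here, simple diffuse (Lambertian) lighting, is a standard textbook example, not taken from any particular shader.

```python
# Toy per-pixel "shader": brightness is the cosine of the angle
# between the surface normal and the light direction, computed as
# the dot product of the two unit vectors.

def diffuse_shader(normal, light_dir, surface_color):
    """Return the shaded color of one pixel.

    normal, light_dir: unit vectors as (x, y, z) tuples.
    surface_color: (r, g, b) tuple with components in 0..255.
    """
    # Dot product; clamp at zero so surfaces facing away go black.
    intensity = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(round(c * intensity) for c in surface_color)

# A pixel whose surface faces the light head-on gets full color...
print(diffuse_shader((0, 0, 1), (0, 0, 1), (200, 40, 40)))
# ...while one lit at an angle comes out darker.
print(diffuse_shader((0, 0, 1), (0, 0.6, 0.8), (200, 40, 40)))
```

Multiply a function like this by hundreds of shaders and millions of pixels per frame and the scale of the job, and why it has traditionally been done off-line, becomes clear.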
What is interesting here is that this key feature of RenderMan is now becoming available in consumer-level graphics cards. In fact, compilers have already been written for these cards so they can use the RenderMan Shading Language without modification. This raises the very real possibility of rendering movie-quality images in real time on a high-end consumer PC.
It will probably take one or two additional generations of kit before PC hardware can produce in real time some of the more complex imagery currently possible with RenderMan. But when that happens, it will be possible not just to watch a movie on your computer, but to have it rendered from any perspective, and possibly even to become one of the characters, interacting with the others and influencing the outcome.
While an exciting possibility, it could make the fight for the remote control even more of an issue. “Choose a channel and stick with it!” will expand to include viewing perspective, character choices and plot decisions. The living room may never be safe again.
Published in the Victoria Business Examiner.