The Future of Computing

Last month I had the pleasure of being at Intel’s Microprocessor Research Labs (MRL) in Santa Clara for an Open House they were holding for the press and internal staff. MRL does research in technology likely to hit “the street” (retail channel) anywhere from three to eight years into the future. What was being shown was very impressive, to say the least.

To start off, MRL is working very hard to make sure Moore's Law continues its exponential march. The lab showed work it is doing, in partnership with other chip manufacturers, on “extreme ultraviolet” lithography. Taking over from today's optical techniques, this will lead to sub-0.1-micron components. At the same time, 300 mm wafers will mean ever more chips can be cut from a single slice of silicon.

OK, so we’re going to continue to have ever more processing power, more than most people know what to do with even now. What do you do when you can’t buy a machine slower than a mainframe of only a few years ago? MRL has some suggestions, and as you might imagine, they’re rather computationally intensive.

The “Office of the Future” (OotF) demo is a good example. Within a corner office cubicle, two high-resolution video projectors are arranged such that their displays show up adjacent to each other on the cubicle walls. A third projector is mounted above, and projects down onto the desktop. A single computer drives all three displays, resulting in a 3D workspace; windows can be moved between them as desired.

Now, things get interesting. Instead of a mouse, a special 3D camera watches the user and their hand gestures. Computer vision software (this is where the heavy lifting for the CPU comes in) takes the camera data, recognizes what the user is doing, and translates this into commands for the computer: scroll up, move window left, and so on.
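To give a rough idea of the kind of number-crunching involved, here is a minimal sketch in Python with NumPy (my own illustration, not Intel's software) of the simplest possible approach: compare two camera frames, find the pixels that moved, and map the direction of motion to a command. A real system like MRL's is vastly more sophisticated, but the shape of the pipeline is the same.

```python
import numpy as np

# Toy gesture detector: threshold the difference between two grayscale
# frames to find moving pixels, then map the motion of their centroid
# to a UI command. Purely illustrative; the threshold and commands
# are made up for this sketch.

MOTION_THRESHOLD = 30  # intensity change counted as "movement"

def centroid_of_motion(prev_frame, curr_frame):
    """Return the (row, col) centroid of changed pixels, or None."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    moving = np.argwhere(diff > MOTION_THRESHOLD)
    if moving.size == 0:
        return None
    return moving.mean(axis=0)  # average row and column

def gesture_to_command(c_before, c_after):
    """Translate centroid motion into a rough command."""
    drow = c_after[0] - c_before[0]
    dcol = c_after[1] - c_before[1]
    if abs(drow) > abs(dcol):
        return "scroll up" if drow < 0 else "scroll down"
    return "move window left" if dcol < 0 else "move window right"

# Three synthetic 8x8 "frames" with a bright spot drifting left.
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
frames[0][4, 5] = 255
frames[1][4, 3] = 255
frames[2][4, 1] = 255

c1 = centroid_of_motion(frames[0], frames[1])
c2 = centroid_of_motion(frames[1], frames[2])
print(gesture_to_command(c1, c2))  # the spot moved left
```

The expensive part in practice is not this arithmetic but doing it, plus segmentation and recognition, on full-resolution video in real time — which is exactly where all those spare CPU cycles go.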

For an added twist, for the open house Intel had two such set-ups. They were linked together by way of pinhole video cameras and microphones, hidden behind a tiny hole in the middle of the wall-displays. Thus, the two OotF users could video-conference between themselves in a very natural-seeming way — you could tell if the other person was looking at you or someone beside you.

Another demonstration was the “Voice Portal”, which could understand spoken words from a user without prior training. It has a very large vocabulary, and can handle continuous speech (instead of each word having to be spoken distinctly). With the dreams of a “Star Trek”-like interface one step closer, the more immediate uses include voice interaction for automated telephony solutions and speech-to-text applications.

Multi-modal input is likely to become more common as CPU power allows more analysis to take place on the input data streams. To encourage such applications, Intel announced at the open house the availability of the Linux version of their Open Source Computer Vision Library. This library is intended to be a “substrate” upon which both CV research and commercial applications can be developed.

Also announced was the Open Runtime Platform, which is an open source framework for building run-time environments, taking care of memory management, garbage collection and linking issues. Although not of any use to consumers, it’s of huge interest to geeky developers. The technology will work its way down to the street, embedded in other products.
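For the curious, here is a conceptual sketch in Python of the classic “mark-and-sweep” approach to garbage collection — one example of the memory-management plumbing a runtime framework handles so that application developers don't have to. This is my own toy illustration, not the Open Runtime Platform's actual design.

```python
# Toy mark-and-sweep garbage collector. Objects unreachable from the
# program's "roots" are considered garbage and reclaimed.

class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []      # objects this one points to
        self.marked = False

class Heap:
    def __init__(self):
        self.objects = []   # every allocated object
        self.roots = []     # objects the program can reach directly

    def alloc(self, name):
        obj = Obj(name)
        self.objects.append(obj)
        return obj

    def collect(self):
        # Mark phase: flag everything reachable from the roots.
        for obj in self.objects:
            obj.marked = False
        stack = list(self.roots)
        while stack:
            obj = stack.pop()
            if not obj.marked:
                obj.marked = True
                stack.extend(obj.refs)
        # Sweep phase: reclaim anything left unmarked.
        freed = [o.name for o in self.objects if not o.marked]
        self.objects = [o for o in self.objects if o.marked]
        return freed

heap = Heap()
a = heap.alloc("a")
b = heap.alloc("b")
heap.alloc("c")          # nothing ever points to "c"
a.refs.append(b)         # a -> b
heap.roots.append(a)     # only a is a root
print(heap.collect())    # "c" is unreachable, so it is freed
```

Running the snippet frees only `"c"`: `a` is a root and `b` is reachable through it, while nothing points at `c`.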

Intel has always been a company to watch, having been the leader in microprocessors for many years. Although AMD can now claim to sell the fastest x86 processor, and startups like Transmeta are nipping at Intel's heels, the research done by Intel’s labs will continue to influence what we see in the marketplace for years to come.

Published in the Victoria Business Examiner.