The current interface upheaval is centered on touchscreens. I think this is an important step, and one which may allow for some significantly different interaction paradigms to emerge. I wonder how long touchscreens will remain dominant, however, even though the interfaces they help spawn may stick around for a long time.
Screens are becoming more and more advanced, meaning they’re getting smaller and smaller while supporting higher and higher resolutions. It’s not unrealistic to think that soon displays could be embedded in glasses, and once there, why not contact lenses? How far away are contact lenses that either have their own onboard computers or can receive a signal from a small device (like a phone)?
There are clearly obstacles to be overcome before that point, but it doesn’t seem all that far away. Another question is whether or not it will be possible for these devices to tell where you’re looking (this might be easier for a pair of glasses to do than contacts). If they can tell where you’re looking, they can overlay whatever you like onto what you’re looking at.
This isn’t a new idea, but it’s interesting to me how close it seems right now.
If we reach that point, screens will have gone from being small in comparison to computers in the early days, to large in comparison to computers today (that is, for desktop setups, where people generally want them to be as large as possible), and back down to small again. On the other hand, the computer itself will also be small, so the largest part of a computer may end up being its input devices.
We might be relatively close to a setup where you sit down at your desk, and you have a keyboard (maybe I’m a dinosaur, but I don’t think keyboards are going away anytime soon; they’re too efficient—especially if you use something like Vim), a very small computer, and a motion tracker somewhat similar to the ones currently used for the Wii. All those, and your contact lenses, which might be passive receptors but might also need to emit something for the motion tracker to pick up. What that gives you, essentially, is as much screen real estate as you want, and something very close to those Hollywood-style holographic displays you can touch—with the one major difference that only you would be able to see your interface.
The only part of this that’s really missing right now is the contact lenses part. Computing power is already sufficient to calculate the three-dimensional view from a given point—maybe not well enough to make you think you’re seeing something real, but certainly well enough to give you the correct angle for viewing a GUI.
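To give a sense of how cheap that core calculation is, here’s a toy sketch of the projection step: mapping a point in the viewer’s coordinate frame onto a virtual screen plane. This is a bare pinhole model for illustration only; the function name and parameters are mine, not any real display API:

```python
# Minimal pinhole-camera projection: map a 3D point in the viewer's
# coordinate frame to 2D "screen" coordinates. A head- or eye-mounted
# display would do this per eye, many times per second.

def project(point, focal_length=1.0):
    """Perspective-project a 3D point (x, y, z) onto the plane z = focal_length."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the viewer")
    return (focal_length * x / z, focal_length * y / z)

# A virtual window 1 unit to the right appears half as far from the
# center of view at distance 4 as it does at distance 2.
print(project((1.0, 0.0, 2.0)))  # (0.5, 0.0)
print(project((1.0, 0.0, 4.0)))  # (0.25, 0.0)
```

Getting the angle right for a floating GUI panel is just this arithmetic, applied to the panel’s corners; the hard part is the hardware, not the math.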
After that, the next part is for it to figure out where your hands are. It might get to the point where it can let you type without a keyboard, although people might keep them for the feedback. But it should be easy enough at that stage to detect where your fingers are, thus allowing for three-dimensional “touch” interfaces.
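At its simplest, that kind of three-dimensional “touch” reduces to asking whether a tracked fingertip falls inside a virtual control’s volume. A minimal sketch, with entirely hypothetical names and a crude axis-aligned bounding-box test:

```python
# Sketch of a 3D "touch" test: does a tracked fingertip position fall
# inside a virtual control's axis-aligned bounding box? The names and
# the box representation are illustrative, not any real tracker's API.

def inside(fingertip, box_min, box_max):
    """True if the 3D fingertip point lies within the axis-aligned box."""
    return all(lo <= v <= hi for v, lo, hi in zip(fingertip, box_min, box_max))

# A virtual button occupying a small region in front of the user:
button_min, button_max = (0.1, 0.2, 0.5), (0.2, 0.3, 0.6)
print(inside((0.15, 0.25, 0.55), button_min, button_max))  # True
print(inside((0.15, 0.25, 0.70), button_min, button_max))  # False
```

A real system would need tolerances, gesture recognition, and per-finger tracking, but the basic hit test is this simple.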
I’m sure there are already research projects working on this kind of thing. I hate making predictions about future technologies, but I’d be surprised if this stuff weren’t available within twenty years.
What happens with interface design will be fascinating. Being able to manipulate computer artifacts with your hands, in three dimensions, will eliminate the need for a lot of abstractions, and I think that’s where much of the early work will go, just as that’s where touch interfaces are going now. However, the increasing volume of information, personal and otherwise, will require abstractions of some kind; otherwise there’s just too much to deal with. What’s most interesting to me is not the “ease of use” side of things, but rather what will happen when you combine these (effectively) touchable holographic interfaces with a focus on user power and efficiency.
Here’s one idea: meta keys that let you move the cursor with your eyes, and then “click” with another key, to do what the mouse does today, but without having the mouse. (I refer to keys, but these could be foot taps, or who knows what, although I’m aiming at something that doesn’t require much movement of the hands, because I still think that keeping the hands in a typing position is likely to be fastest.) That would already make editing better; I’d love to have the ability to do that in Vim right now.
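The gaze-plus-key idea amounts to a tiny event loop: the eye tracker moves the pointer, and a dedicated key “clicks” wherever the gaze rests. Everything in this sketch is hypothetical; no real tracker or GUI toolkit is assumed:

```python
# Toy dispatch loop for gaze-directed pointing: "gaze" events move the
# cursor, and a dedicated key acts at the current gaze position, so the
# hands never leave the home row. All names here are invented.

def handle_events(events, ui):
    cursor = (0, 0)
    for kind, data in events:
        if kind == "gaze":                      # tracker reports where the eyes rest
            cursor = data
        elif kind == "key" and data == "meta-click":
            ui.click(cursor)                    # "click" at the gaze position
    return cursor

class FakeUI:
    """Stand-in for a GUI toolkit; records where clicks land."""
    def __init__(self):
        self.clicks = []
    def click(self, pos):
        self.clicks.append(pos)

ui = FakeUI()
handle_events([("gaze", (120, 80)), ("key", "meta-click")], ui)
print(ui.clicks)  # [(120, 80)]
```

The interesting design question is debouncing: eyes jitter constantly, so a real implementation would smooth or snap the gaze position before acting on it.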
Note that current touch interfaces eschew the concept of a pointer, figuring that the pointer is an unnecessary throwback given the presence of touch ability. But control by visual attention could well bring it back, and similar things apply to many other interface approaches. In some ways the danger is that old lessons and methods will be forgotten in favor of alluring new playthings.
I have no qualms about suggesting that in a new order of interfaces where three-dimensional holographic touch is available, I’ll probably still want to use Vim. I suspect that Vim is the most effective interface I use every day. In fact, it’s the most powerful text manipulation tool I’ve found in years of searching (no, I haven’t used Emacs, and it may well be Vim’s equal, but most of this applies to it also). Considering that it doesn’t use any of the new interface technologies, and in fact obviates the need for the now-ancient mouse, that’s impressive, and important: a tremendous amount of interface power can be created not by focusing on making everything easier, but on exploring what increasing levels of abstraction and user learning can do—and, hopefully, combining that with the best of the revolutionary pieces yet to come.