The hand gestures are awkward, I think, and I hope they end up being one of those guesses we got horribly wrong. The idea of gestures is partly the result of thinking about information in terms of objects that need to be “handled” in some way or other; gestures are a way of making data objects tactile, in at least some sense. But I think this is a gross underestimation of the symbolic capability of the human brain.

When I type at the keyboard, for instance, I am engaging with an interface that is only abstractly related to the content I am dealing with, and yet I can manipulate that interface with a high degree of dexterity, speed, and control. Language generally works by manipulating meaningful symbolic representations with incredibly fine-grained control. We can do it with our fingers, we can do it with the muscles in our throat, and we can do it with tiny squiggles of pixels transmitting light into our retinas. We should be looking for ways of taking advantage of new symbolic structures, instead of making ourselves paw at media like a bear at honey.

Seriously, anyone who knows anything about ASL or other signing systems should be embarrassed at the complete lack of sophistication in our gesture interfaces. It’s like we are proud of our illiteracy. These are interface problems, and hopefully neural interfaces, including retinal tracking, will have a bigger impact on how we interface with screens in the future.

h/t +michael barth; I left the comment in his thread: https://plus.google.com/u/0/108101284889680772496/posts/LYfbMfExqpq

Peter G McDermott originally shared this post:

The Evolution of Screen Technology

If you look at most of the sci-fi movies from the ’70s and ’80s, they still featured CRTs as the technology of the future. With the advent of Plasma and LCD (now OLED), we are starting […]