Imaginary Interfaces
hpi.uni-potsdam.de

The elephant in the room seems to me to be camera placement. Current smartphones probably already have good enough cameras; they can probably already run the software, and within a few years they might also have the batteries required to keep such a sophisticated system running for longer periods of time.
But how do you get a camera to where it needs to be? Size isn't even the problem; smartphones are already small and light enough. But are people supposed to cut a hole for the camera in their shirt pocket? Will we suddenly start to wear clothes with electronics in them? (The technology to make that happen already exists, but we don't seem to want to.)
This seems like a neat additional feature that would be pretty cool, if it didn't add any extra hassle. Getting a camera to where it needs to be without adding any extra hassle for the user seems to me like a very hard problem.
It seems to me it might be easier to put localizer gloves on the user instead. They wouldn't even have to be gloves in the traditional hand-covering sense: just enough to stay on the hand and provide orientation, little skeleton things.
The real insight seems to be the creation of a gesture system with spatial persistence, using one hand as a reference point shared by both the computer and the human, rather than the lapel camera itself. How the gesturing gets into the computer seems less important than the gesturing insight in the first place; this is the first gesture system I've seen that seems like it might actually be useful. And now that someone's had the base idea, a couple of iterations of refinement might actually get us somewhere.
(The gestures in Minority Report seem to represent what people thought this would look like, but those always seemed almost impossibly floaty. One metric I use to judge how well a computer could possibly perform is: "could a motivated human even figure out exactly what you meant?" I'm not convinced the Minority Report interface meets that bar. Pointing is a pretty imprecise thing, more than you might realize, because the average person's 3D model of space is shockingly bad in some ways. Anyone who has learned to draw late enough in life to observe the process of learning to draw, as I am fiddling with now, quickly discovers that.)
Couldn't you hang the device on a lanyard around your neck?
Very interesting how making an L-shape with the left hand somehow makes the imaginary space more concrete in the user's mind. You can really see and understand that the user perceives the interface as existing, more so than if they were interacting in an unframed way. In other words, I think the L-shape is key to grounding users: it gives the interface a rudimentary starting shape.
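The grounding idea above can be sketched concretely: treat the L-shaped hand as a coordinate frame, with the wrist corner as the origin and the thumb and index fingers as the two axes, then express the pointing fingertip of the other hand in that frame. This is only a minimal sketch of the reference-frame idea, not the actual system's method; the function name and the assumption that a tracker reports 2D fingertip positions are mine.

```python
def to_imaginary_coords(thumb_tip, index_tip, wrist, pointer_tip):
    """Express a pointing fingertip in the frame of an L-shaped hand.

    All arguments are (x, y) points in camera coordinates, as a
    hypothetical hand tracker might report them. The wrist (the corner
    of the L) is the origin; the thumb defines one axis and the index
    finger the other. Returns (u, v): (0, 0) at the wrist, (1, 0) at
    the thumb tip, (0, 1) at the index tip.
    """
    ox, oy = wrist
    ax, ay = thumb_tip[0] - ox, thumb_tip[1] - oy   # axis 1: wrist -> thumb
    bx, by = index_tip[0] - ox, index_tip[1] - oy   # axis 2: wrist -> index
    px, py = pointer_tip[0] - ox, pointer_tip[1] - oy

    # Solve pointer = u * axis1 + v * axis2 via Cramer's rule.
    det = ax * by - ay * bx
    if abs(det) < 1e-9:
        # Thumb and index collinear: the hand isn't forming an L.
        raise ValueError("degenerate frame: thumb and index are collinear")
    u = (px * by - py * bx) / det
    v = (ax * py - ay * px) / det
    return u, v
```

Because the coordinates are relative to the framing hand, the interface keeps its spatial persistence even as the whole hand moves through camera space, which is exactly what makes the L-shape a shared reference point for both human and computer.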
Fascinating research; I'd love the opportunity to play with it. Obviously it's very early stages, but I wonder how long it will realistically be before we see stuff like this in production. Ten years? That sounds like a lot, but there's a lot of work to be done, not only in polishing the technology but also in preparing the market for this kind of interface.
For consumer electronics, this seems like it could find very appropriate applications on minimal-UI devices like the iPod Shuffle.
I think you hit the nail on the head.
A first step would be to establish a few simple gestures that everyone can learn, rather than trying to create an entire OS with it.