Jeff Han gave an amazing demo at ETech, showing a multi-point touch-sensitive display. Here’s a transcript, but you’ll probably have to wait for the video to really get the full power of his creation.
Jeff Han at ETech, 7 Mar 2006
Consulting research scientist at NYU’s department of Computer Science. This stuff is literally just coming out of the lab right now. You’re amongst the first to see it out of the lab. I think this is going to change the way we interact with computers.
Rear-projected drafting table. Equipped with multitouch sensors. ATMs, smart whiteboards, etc. can only register 1 point of contact at a time. Multitouch sensor lets you register multiple touch points, use all fingers, both hands. Multitouch itself isn’t a new concept. People played around with multitouch in the 80s, but this is very low cost, very high resolution, and that’s very important.
The technology isn’t the really exciting thing, more the interactions you can do on top of it once you’re given this precise information. For instance, can have a nice fluid simulation running. Induce a vortex here with one hand, inject fluid with the other. Device is pressure sensitive, can use a clicker instead of a hand. Can invent simple gestures.
This application is neat, developed in the lab. Started as a screen saver, but hacked so it’s multitouch enabled. Can use both fingers to play with the lava. Take two balls, merge them, inject heat into the system, pull them apart. This obviously can’t be done with single point interaction, whether touch screen or mouse.
It does the right thing, there’s no interface. Can do exactly what you’d expect if this were a real thing. Inherently multiuser. Rael, come up and help me out. I can work in an area over here, and he can be playing with another area at the same time. It immediately enables multiple users to interact with a shared display, the interface simply disappears.
Here’s a lightbox app. Dragging photos around. Two fingers at once, I can start zooming, rotating, all in one really seamless motion. It’s neat because it’s exactly what you expect would happen if you grabbed this virtual photo here. All very seamless and fluid.
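The one-motion zoom-and-rotate falls out of simple geometry: track the segment between the two fingers, and the ratio of its lengths between frames gives the scale while the change in its angle gives the rotation. A minimal sketch of that standard pinch/rotate math (this is the general technique, not Han’s actual implementation):

```python
import math

def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive (scale, rotation) from two touch points moving between
    frames: scale is the ratio of finger distances, rotation is the
    change in angle of the segment joining the fingers."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    vx0, vy0 = vec(p1_old, p2_old)   # finger segment, previous frame
    vx1, vy1 = vec(p1_new, p2_new)   # finger segment, current frame
    scale = math.hypot(vx1, vy1) / math.hypot(vx0, vy0)
    rotation = math.atan2(vy1, vx1) - math.atan2(vy0, vx0)
    return scale, rotation

# Fingers move apart while the segment turns a quarter-turn:
s, r = two_finger_transform((0, 0), (1, 0), (0, 0), (0, 2))
# s == 2.0 (zoom in 2x), r == pi/2 (90-degree rotation)
```

Applying the resulting scale and rotation about the midpoint of the two fingers is what makes the photo feel physically grabbed.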
Someone who’s new to computing culture can use this. Could be important as we introduce computers to a whole new group of people. I cringe at the $100 laptop with its WIMP interface.
Really simple and elegant technique for detecting touch points: light is scattered by the deformation caused by a touch on the screen.
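Once each touch shows up as a bright blob of scattered light on a camera frame, finding the touch points reduces to thresholding the image and taking blob centroids. An illustrative connected-components pass over a grayscale frame (this is the generic computer-vision step, not Han’s actual pipeline):

```python
def find_touch_points(image, threshold=128):
    """Return centroids (x, y) of bright blobs in a grayscale frame,
    where each blob is a finger scattering light toward the camera."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # flood-fill one blob, collecting its pixel coordinates
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cx, cy))
    return centroids

frame = [
    [0, 200, 0, 0,   0],
    [0, 200, 0, 0, 180],
    [0,   0, 0, 0, 180],
]
print(find_touch_points(frame))  # -> [(1.0, 0.5), (4.0, 1.5)]
```

Because the sensing is pressure dependent (harder presses scatter more light over a bigger area), blob brightness and size can also stand in for pressure.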
Kinaesthetic memory, the visual memory where you left things. Ability to quickly zoom, get a bigger work area if you run out of space, etc. changes things. More of an infinite desktop than standard fixed area.
Now, of course, can do the same thing with videos as with photos. All 186 channels of TW cable.
Inevitably there’ll be comparisons with Minority Report. Minority Report and other gestural interfaces aren’t touch based. Can’t differentiate between slight hover and actual touch. Disconcerting to user if they have action happen without tactile feedback. I argue that touch is more intuitive than gross gestural things. Also gestural is very imprecise.
Ability to zoom in and out quickly lets you find new ways to explore information. What’s interesting is that we’re excited about potential for this in information visualization applications. Can easily drill down or get bigger picture. Having a lot of fun exploring what we can do with it.
Another application we put together is mapping. This is NASA WorldWind, like Google Earth but Open Source. We hacked it up to use the two fingered gestural interface to zoom in. Can change datasets in NASA WorldWind. They also collect pseudocolour data, to make a hypertext map interface. [Demo stalls, restarts] It’s three dimensional information, so how do you navigate in that dimension? Use three points to define an axis of tilt. Could be the right or wrong interface, but it’s an example of the kind of possibilities once you think outside the box.
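One plausible reading of the three-point tilt gesture: two fingers pin down the axis line, and the third finger’s perpendicular distance from that line drives the tilt angle. This mapping, the gain, and the clamp are all guesses for illustration, not Han’s implementation:

```python
import math

def tilt_from_three_points(a, b, c, gain=0.5, max_tilt=60.0):
    """Hypothetical 3-finger tilt: fingers a and b define the tilt
    axis; finger c's signed perpendicular distance from the a-b line
    (via the 2D cross product) sets the tilt angle in degrees."""
    ax, ay = b[0] - a[0], b[1] - a[1]          # axis direction vector
    length = math.hypot(ax, ay)
    # signed distance of c from the a-b line
    dist = ((b[0] - a[0]) * (c[1] - a[1])
            - (b[1] - a[1]) * (c[0] - a[0])) / length
    tilt = max(-max_tilt, min(max_tilt, dist * gain))
    return (ax / length, ay / length), tilt

# Axis pinned along +x; third finger 40 px off the line -> 20-degree tilt
axis, tilt = tilt_from_three_points((0, 0), (10, 0), (5, 40))
```

The point of the sketch is less the specific mapping than that a multitouch surface hands you enough degrees of freedom to invent one at all.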
Virtual keyboard, rescalable. No reason to conform to physical devices. Brings promise of a truly dynamic user interface, possibility to minimize RSI. Probably not the right thing to do, to launch in and emulate things from the real world. But lot of possibilities, we’re really excited.
Lots of entertainment applications, multiuser with many people playing in parallel. Here’s a simple drawing tool. Can add constraints and have multiple constraints, to make a really easy virtual puppeteer tool. Lot of math under the surface to do what’s physically plausible (algorithm published last year at SIGGRAPH).