Spoiler alert: The mouse dies. Touch and gesture take center stage

The shift toward more natural interfaces requires new thinking and skills.

The moment that sealed the future of human-computer interaction (HCI) for me happened just a few months ago. I was driving my car, carrying a few friends and their children. One child, an 8-year-old, pointed to the small LCD screen on the dashboard and asked me whether the settings were controlled by touching the screen. They were not. The settings were controlled by a rotary button nowhere near the screen, placed conveniently between the driver and passenger seats. It was an obvious location in a car built at the tail end of an era when humans most frequently interacted with technology through physical switches and levers.

The screen could certainly have been touch-controlled, and it is a safe bet that a newer model of my car has that very feature. More noteworthy, though, was that this child assumed the settings could be changed simply by passing a finger over an icon on the screen. My epiphany: for this child's generation, a rotary button was simply old school.

This child is growing up in an environment where people increasingly interact with devices by touching screens. Smartphones and tablets are certainly significant innovations in areas such as mobility and convenience. But these devices are also ushering in an era that shifts everyone's expectations of how we engage with technology. Children raised in a world of pervasive technology will touch surfaces, make gestures, or simply show up in order for systems to respond to their needs.

This means we must rethink how we build software, implement hardware, and design interfaces. If you are in any of the professions or businesses related to these activities, there are significant opportunities, challenges and retooling needs ahead.

It also means the days of the mouse are probably numbered. Long live the mouse.

The good old days of the mouse and keyboard

Probably like most of you, I never formally learned to type, but I have been typing since I was very young, and I can pound out quite a few words per minute. I started on an electric typewriter that belonged to my dad. When my oldest brother brought home our first computer, a Commodore VIC-20, my transition was seamless. Within weeks, I was impressing relatives with small programs that did little more than change the color of the screen or make a sound when the spacebar was pressed.

Later, my brother brought home the first Apple Macintosh. This blew me away. For the first time I could create pictures using a mouse and icons. I thought it was magical that I could click on an icon and then click on the canvas, hold the mouse button down, and pull downward and to the right to create a box shape.

Imagine my disappointment when I arrived in college and we began to learn a spreadsheet program using complex keyboard combinations.

Fortunately, when I joined the workforce, Microsoft Windows 3.1 was beginning to roll out in earnest.

The prospect of the mouse's demise may be disturbing to many, not least me. To this day, if I want to be the most productive with my laptop, I will plug in a wireless mouse. It is how I work best. Or at least, it is currently the most effective way for me.

Most of us grew up using a mouse and a keyboard to interact with computers. It has been this way for a long time, and we have probably assumed it would continue to be that way. However, while the keyboard probably has considerable life left in it, the mouse is likely dead.

Fortunately, while the trend suggests mouse extinction, we can momentarily relax, as it is not imminent.

But what about voice?

From science fiction to futurist projections, it has always been assumed that the future of human-computer interaction would largely be driven by our voices. Movies have reinforced this image for decades, and it has seemed quite plausible. We were more likely to see a door opened by voice than by a wave. After all, voice appears to be the most intuitive interface and requires the least amount of effort.

Today, voice recognition software has come a long way. Accuracy and performance when dictating to a computer, for example, are quite remarkable. If you have broken your arms, this can be a highly efficient way to get things done on a computer. But despite some success in important niches, broad-based voice interaction has simply not prospered.

It may be that a world in which we control and communicate with technology via voice is yet to come, but my guess is that it will likely complement other forms of interaction instead of being the dominant method.

There are other ways we may interact, too, such as eye control and direct brain interfaces, but these technologies remain largely in the lab, confined to niches, or out of reach for general use.

The future of HCI belongs to touch and gesture

Photo: Apple's Magic Trackpad

It is a joy to watch how people use their touch-enabled devices. Flicking through emails and songs seems so natural, as does expanding pictures with an outward pinching gesture. Have you ever seen how quickly someone — particularly a child — intuitively gets the interface the first time they use touch? I have yet to meet someone who says they hate touch; we are far more likely to hear people say just how much they enjoy the ease of use. Touch (and multi-touch) has unleashed innovation and enabled completely new use cases for applications, utilities and gaming.

While not yet as pervasive, gesture-based computing (in the sense of computers interpreting body movements or emotions) is beginning to emerge in the mainstream. Anyone who has ever used Microsoft Kinect will be able to vouch for how compelling an experience it is. The technology responds adequately when we jump or duck. It recognizes us. It appears to have eyes, and gestures matter.
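To make that concrete, here is a minimal sketch of the kind of logic a gesture layer might apply. This is not the actual Kinect SDK; the frame shape, joint names, and thresholds below are hypothetical stand-ins for illustration:

```typescript
// Hypothetical skeleton-frame shape; a real sensor SDK supplies its own types.
interface Joint { x: number; y: number; z: number; }        // meters, y points up
interface SkeletonFrame { head: Joint; timestamp: number; } // one tracked player

type Gesture = "jump" | "duck" | null;

class GestureDetector {
  private standingHeadY: number | null = null; // calibrated standing head height

  // Call once per skeleton frame; returns a gesture when one is recognized.
  update(frame: SkeletonFrame): Gesture {
    if (this.standingHeadY === null) {
      this.standingHeadY = frame.head.y; // first frame calibrates "standing"
      return null;
    }
    const delta = frame.head.y - this.standingHeadY;
    if (delta > 0.15) return "jump";  // head roughly 15 cm above standing height
    if (delta < -0.25) return "duck"; // head roughly 25 cm below standing height
    return null;
  }
}
```

A production recognizer would smooth the signal over many frames and recalibrate as the player moves, but the shift in mindset is the point: input is no longer a click at a coordinate, it is a continuous stream of body positions from which discrete intentions must be inferred.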

And let us not forget, too, that this is version 1.0.

The movie “Minority Report” teased us with a possible gesture-based future: the ability to manipulate images of objects in midair, to pile documents in a virtual heap, and to cast aside less useful information. Today many of us can experience its early potential. Now imagine that technology embedded in the world around us.

The future isn’t what it used to be

My bet is that in a world of increasingly pervasive technology, humans will interact with devices via touch and gestures — whether those devices are in your home or car, the supermarket, your workplace, the gym, or a cockpit, or carried on your person. When we see a screen with options, we will expect to control those options by touch. Where it makes sense, we will use a specific gesture to elicit a response from some device, such as (dare I say it) a robot! And, yes, at times we may even use voice. However, to me, voice in combination with other behaviors is far more likely than voice alone.

But this is not some vision of a distant future. In my view, the touch and gesture era is right ahead of us.

What you can do now

Many programmers and designers are responding to the unique needs of touch-enabled devices. They know, for example, that a paradigm of drop-down menus and double-clicks is probably the wrong set of conventions to use in this new world of swipes and pinches. After all, millions of people are already downloading millions of applications for their haptic-ready smartphones and tablets (and as the drumbeat of consumerization continues, they will also want their enterprise applications to work this way, too). But viewing the future through too narrow a lens would be an error. Touch and gesture-based computing forces us to rethink interactivity and technology design on a whole new scale.
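To see how different the conventions really are, consider a minimal sketch of pinch detection in the browser, using the standard Pointer Events API (the element id and the logging are illustrative assumptions). Nothing in the drop-down-and-double-click world corresponds to input like this:

```typescript
// Minimal pinch-to-zoom sketch using the standard Pointer Events API.
const canvas = document.getElementById("canvas")!; // hypothetical target element
const pointers = new Map<number, { x: number; y: number }>();
let startDistance = 0;
let scale = 1;

// Distance between the two active pointers (only called when exactly two exist).
const distance = () => {
  const [a, b] = [...pointers.values()];
  return Math.hypot(a.x - b.x, a.y - b.y);
};

canvas.addEventListener("pointerdown", (e) => {
  pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (pointers.size === 2) startDistance = distance(); // a pinch begins
});

canvas.addEventListener("pointermove", (e) => {
  if (!pointers.has(e.pointerId)) return;
  pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (pointers.size === 2 && startDistance > 0) {
    scale = distance() / startDistance; // >1 means an outward pinch (zoom in)
    console.log(`zoom scale: ${scale.toFixed(2)}`);
  }
});

canvas.addEventListener("pointerup", (e) => {
  pointers.delete(e.pointerId);
  startDistance = 0; // pinch ends when either finger lifts
});
```

Designing touch-first means reasoning about continuous, multi-point input like this, along with larger tap targets and immediate visual feedback, rather than precise single-pixel clicks and hover states.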

How might you design a solution if you knew your users would exclusively interact with it via touch and gesture, and that it might also need to be accessed in a variety of contexts and on a multitude of form factors?

At a minimum, it will bring software developers even closer to graphical interface designers and vice versa. Sometimes the skillsets will blur, and often they will be one and the same.

If you are an IT leader, your mobile strategy will need to include how your applications must change to accommodate the new ways your users will interact with devices. You will also need to consider new talent to take on these new needs.

The need for great interface design will increase, and there will likely be job growth in this area. In addition, as our world becomes increasingly run by and dependent upon software, technology architects and engineers will remain in high demand.

Touch and gesture-based computing are yet more ways in which innovation does not let us rest. They make the pace of change, already on an accelerated trajectory, even more relentless. But the promise is the reward. New ways to engage with technology enable novel ways to use it to enhance our lives. Simplifying the interface opens up technology so it becomes even more accessible, lowering the complexity level and allowing more people to participate in and benefit from its value.

Those who read my blog know I believe we are in a golden age of technology and innovation. It is only going to get more interesting in the months and years ahead.

Are you ready? I know there’s a whole new generation that certainly is!

Photo (top): Mouse Macro by orangeacid, on Flickr

  • GT

It is not comfortable to sit in front of the computer and use the touch screen all the time. And what about graphic design and programming? A keyboard and mouse (I use a pen) are needed.

It would be wonderful if the keyboard and mouse were merged together into some kind of multi-touch pad that senses hand location, displays buttons or other items as needed, and is used instead of the mouse.

    Or something like that :)

  • Leland

    Soon after Minority Report came out, I remember my first time swiping a trackpad to a hot corner to invoke Exposé under then-new Mac OS 10.3, and thinking, “Holy cow, it’s not science fiction anymore!”

Jonathan, as you noticed when you had to learn a spreadsheet program by keystrokes, there was still a device needed to translate our wishes to machines. The mouse was so much more intuitive that it seemed natural, but that’s only because it was so much easier than having only a keyboard — which itself was a pretty low standard, just one step above punch cards.

    Remember when Steve Jobs said in an interview that it’s much easier to point to a spot on your shirt than it is to describe its location by coordinates? It’s certainly true, and it’s why we’ve had mice for a quarter-century.

    But, because it’s been around so long — literally spanning the entire existence of home computing — we’ve forgotten that the mouse is really just another translation device. We’ve only recently gained the technology that can eliminate the mouse itself in the common household.

    With that, we’ll continue seeing UI changes that are nearly as dramatic as changing from the command line to icons and windows.

Anyway, I guess that’s a “+1” or “ditto” to this article.

  • monopole

    There is a very good reason why the car interface runs on a jog dial instead of a touch screen.

Very simply, you are supposed to keep your eyes on the road. It is possible to reach for a clicky knob and move through menu entries without looking at it; in fact, with a sound or voice interface it is possible to do so without even looking at the screen, which is a much superior design. The human factors of a simple, tangible knob or button with haptic feedback are much better for such a situation. A touch screen is very problematic in that it requires you to look, position your finger, tap, and see if the tap took, time you could be spending dodging that oncoming semi. Worse yet, if you are wearing gloves it will not work at all. Finally, on a rough road a touch screen is likely to respond to unintentional taps and touches induced by bumps and veers.

    In the same respect, if you are intensively using a spreadsheet or any keyboard intensive application, using control sequences keeps your hands on the keyboard and your eyes on the data. While a GUI interface is handy for the occasional user, keystrokes can be decidedly better in the hands of a trained typist.

    Another example is the specialized interfaces used by gamers such as the Logitech Nostromo series.

Just because touchscreens and gestures are the new hot technologies doesn’t mean the old interfaces must be put to the sword. Back in the ’90s, VR gloves and goggles were obviously going to supplant the keyboard and mouse.

    Just like tablets, touchscreens and gestures are ideal for some applications and miserable for others. Sometimes a pressure sensitive stylus is utterly perfect and a finger is miserable.

    While I’m more of a trackball guy, the mouse isn’t going anywhere for a long time.

    Arguably, the rise of 3d printers, picoprojectors and augmented reality tags among other technologies may lead to more knobs, levers and dials in tangible user interfaces, as per this example:
    http://boingboing.net/2011/09/26/chemistry-of-the-future-3d-models-and-augmented-reality.html

  • http://www.daveenjoys.com/ Dave Mackey

It does seem like touch is becoming the way of the future, but I have to question its ubiquity… as an earlier commenter noted, it just isn’t comfortable to use touch when sitting at a desk over long periods of time. I wonder if perhaps we will see a change in the way fixed computers look – e.g., the screen and keyboard melding into one and being easier to reach in a single motion, or perhaps a touch interface that attaches like a mouse but responds very intuitively on-screen to our hand motions, allowing us to lay our hands flat and make various small motions with our fingertips to accomplish tasks.

  • Matt Silver

It’s apparently 2 years old now, but http://10gui.com/ has a great presentation on touch interfaces for computers. I use the trackpad with gestures almost exclusively now, both on the laptop’s built-in trackpad and on the so-called “Magic Trackpad.” I do notice, when I go back to a mouse or to a computer that doesn’t support gestures, how hampered my actions are.

    I’d love to try the 10 finger interface in the 10gui video.

  • Bill Tyler

The mouse is no more dead than the keyboard. It will continue to fill the UI roles for which it is best suited and supported on computers. If the future is (only) full of tablets, it may disappear. If desktops and laptops continue, so will it. Mice are still powerful and highly accurate pointing devices, while the finger is less so and obscures what it touches (which is why tap targets are so large).

As for voice… voice recognition isn’t the only problem. Ever ENVISION a world full of voice control? What would an office full of people talking to their computers look – and SOUND – like? Imagine having to direct your communications with multiple speech devices (phone, computer, etc.).

Better yet, given the examples of voice interfaces, realize how much AI will be needed for such simple commands to be turned into meaningful results. The current state of the supporting infrastructure for voice is more like Blade Runner than Star Trek.

  • http://www.keithmcmillen.com/ Andrew

We are finding and supporting the trend of using your hands *while* using your feet for HCI with SoftStep KeyWorx, a hardware and software solution that provides advanced keyboard and mouse control for your feet. We have found that for those with carpal tunnel (or who want to avoid it), or those who simply want to add 20% more efficiency to their output, adding foot control can be a big help. Check it out here:

    http://www.keithmcmillen.com/keyworx/overview

  • http://FUTRS.com Emerging Technologies

Great article.

I too can remember the days of inputting syntax into my Commodore 64 in the early ’80s. Hours of typing code into the computer at twelve years old, only to see nothing but the screen change color or a little robot man wave his arms – how far we’ve come.

The one thing to really examine is the pace of technology. While touch and gesture may be on the verge of becoming the norm, the more you read, the more obvious it is that where we are headed is toward ‘thought control’, as scary as that sounds – haha – and if you think there are privacy issues now, wait till that takes place.

I saw one poster mention the issues with touch and gesture while driving their cars. Well, here’s news out of the Nissan camp about exactly what I am talking about:

    Nissan Explores Thought Control for Cars
    http://editorial.autos.msn.com/blogs/autosblogpost.aspx?ucsort=4&post=ac6f6463-c6be-451b-a47b-5fd16679d0c8

Everything is just temporary – especially in this day and age – so enjoy it all while you can… soon enough we’ll all be hardwired into each other and I won’t even have to say the words “remember when?” because you’ll already know what I’m thinking :-)

  • http://www.grinkot.com/ Boris Grinkot

    There are good reasons why the car has so many physical controls.

The design may not be ideal, but it’s meant specifically for the task and the form factor: with the driver seated, all four limbs are free, allowing the essential driving controls (speed up, slow down, stop, turn) to be distributed for continuous, intuitive operation. Importantly, these controls are designed for incremental kinesthetic feedback: you can feel how hard you brake or how sharply you turn.

    I wrote about this with respect to Kindle Touch: http://grinkot.com/2011/10/04/kindle-touch-and-feature-fatigue/

“Touch” interfaces are really “touch-and-look”: because multiple things can be accomplished by the same gesture, you usually need visual feedback to confirm that the one thing you wanted to do happened, and not something else. Auditory feedback isn’t as good, because you’ll either have to learn a bunch of sounds or all feedback will sound the same, so you’ll only have confirmation that “something” happened.

Touch interfaces certainly have their place because they provide great flexibility on multi-purpose devices. But if we’re being as futuristic as the comment above about thought interfaces, why not have variable physical interfaces that take shape depending on the application? ;)