NSF Grant for Translation of ASL to Speech

In news on the “one day we’ll all be able to talk to each other” (even if we still fail to understand each other!) front, Ross Stapleton-Gray sent in this fascinating note about an NSF grant for translating American Sign Language to speech. From the abstract:

This Small Business Innovation Research (SBIR) Phase I research project will demonstrate the feasibility of developing a bio-electronic portable device that translates American Sign Language (ASL), a gestural language that has no written representation, to spoken and written English. The development of such a device implies design and refinement of mechanics and electronics, as well as writing several computer applications to integrate with extant computer programs that train and practice ASL. Development efforts proposed for this project will substantially propel this device toward commercialization. The instrumental part of the research aims to obtain a fully portable gesture capturing system in two versions: wired and [unwired]. The proposed research has three goals: (1) To determine feasibility for two-arm translation, (2) To determine capability to interface with ASL instructional software, and (3) To determine if the electronics can be made robust enough for consumer use. Achievement of these goals will require (a) modification of hardware and software previously developed to handle finger spelling (one hand) to handle ASL translation (two-handed) by consumers, and (b) development of a series of communication protocols and conventions to integrate the ASL instructional and translation software, which are currently standalone applications.

American Sign Language (ASL) is the native language of many deaf and speech impaired people in the United States and Canada and the second language for relatives and others who provide services to them, making ASL the fourth most widely used language in the U.S. As a gestural language, based on visual principles, it has no written representation. Despite how pervasively this language is used, there is no automatic device on the market that can translate ASL to spoken or written English (or any other sound-based language) in the same way that there are electronic dictionaries to translate English to other spoken languages. Development of this bio-electronic instrumentation will enable native ASL users to communicate instantaneously with English users for commonplace purposes. It is anticipated that it will have special value to multiply disabled deaf and other disabled (e.g., autistic, mentally retarded, aphasic) individuals for whom acquisition of English is a challenge. This instrumentation also has applications for rehabilitation, gaming, and robotics. The proposed instrumentation overcomes limitations posed by previous inventions that could not interpret palm orientation, an essential component for recognizing distinct signs, by using digital accelerometers mounted on fingers and the back of the palm.
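
The palm-orientation point at the end of the abstract is worth unpacking: a 3-axis accelerometer on a momentarily still hand measures the direction of gravity, and that alone is enough to recover the palm's pitch and roll. Here's a minimal sketch of that idea in Python; the axis conventions, units, and function name are illustrative assumptions on my part, not details from the grant.

```python
import math

def palm_orientation(ax, ay, az):
    """Estimate palm pitch and roll (degrees) from one 3-axis accelerometer
    reading, using gravity as the reference vector.

    Assumes the sensor is roughly static, readings are in g, and the axes
    follow a typical right-handed convention: x along the fingers, y across
    the palm, z out of the back of the hand."""
    # Pitch: tilt of the finger axis up or down relative to the horizon.
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # Roll: rotation about the finger axis (palm up, palm down, or sideways).
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Example: hand held flat, palm down, so the z axis (out of the back of the
# hand) points up and reads about +1 g.
print(palm_orientation(0.02, -0.01, 0.98))
```

Full sign recognition obviously needs much more than this (per-finger sensors, motion over time, a language model), but orientation from gravity is exactly the piece that, per the abstract, previous devices couldn't capture.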

One of the big questions we’ve been asking ourselves at Radar is when the revolution that hit gaming with the Nintendo Wii controller is going to hit other areas of computing, changing forever the way that we interact with our machines. It’s pretty clear that the Minority Report UI is in our future — that and more.

Confirming the idea that hackers are often playing around with this stuff first, Phil Torrone told me the other day that back when he was in advertising, he'd proposed to a client that they build an MP3 player controlled by an accelerometer — just wave it around in patterns in the air to give it commands. He was ahead of his time (and probably ahead of the cost/performance/reliability of the hardware), but the point stands. I also remember when the Mac PowerBooks first came with accelerometers: there were immediately lots of accelerometer hacks, and Tom Igoe taught an accelerometer class at ITP back in the fall of 2005. But now things are getting serious.
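
For a rough sense of how a "wave it around" command might be detected, here's a toy sketch in Python: it flags a gesture when the acceleration magnitude spikes a few times within a short window of samples. The sample format, thresholds, and window size are invented for illustration; Phil's actual proposal presumably specified nothing like this.

```python
from collections import deque

def shake_detector(samples, window=20, threshold=1.5, min_peaks=3):
    """Toy gesture trigger: yields True when, within a sliding window of
    recent 3-axis accelerometer samples (in g), the acceleration magnitude
    exceeds `threshold` at least `min_peaks` times -- a crude stand-in for
    'wave the player around to skip a track'."""
    recent = deque(maxlen=window)
    for ax, ay, az in samples:
        magnitude = (ax * ax + ay * ay + az * az) ** 0.5
        recent.append(magnitude > threshold)
        if sum(recent) >= min_peaks:
            yield True
            recent.clear()  # reset so one wave fires one command
        else:
            yield False

# Example: a mostly still trace followed by a burst of vigorous movement.
trace = [(0.0, 0.0, 1.0)] * 10 + [(1.8, 0.4, 1.1), (0.1, 0.0, 1.0)] * 5
print(any(shake_detector(trace)))
```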

A couple of years back at D, Daniel Simpkins of Hillcrest Labs demoed an amazing TV remote (“the ring”) that used an inertial controller to turn the TV guide into an interactive voyage. He told me recently that they are getting some traction (after a long period where the TV guys were resistant to the new possibilities).

In some ways, accelerometer-based interfaces are just the tip of the iceberg. With the increasing power of speech recognition (check Nuance's trajectory, Google's interest in speech, Microsoft's recent acquisition of Tellme, and so on), we're heading for a number of crossover points. It's not out of the question that within a few years, the idea that you interact with a computer by typing at a keyboard will seem quaint.