HUI, Not GUI
Some of the most interesting work in understanding touch has been done to compensate for hearing, visual, or tactile impairments.
At Stanford, the TalkingGlove was designed to support individuals with hearing limitations. It recognized American Sign Language finger spelling to generate text on a screen or synthesize speech. The device applied a neural-net algorithm to map the movements of the human hand, captured by an instrumented glove, to a digital output. It was so successful that it spawned a commercial application in the Virtex Cyberglove, which was later purchased by Immersion and became simply the Cyberglove. Current uses include virtual reality, biomechanics, and animation.
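As a rough illustration of that mapping, the sketch below feeds one frame of glove flex-sensor readings through a small feed-forward network to pick a fingerspelled letter. The sensor count, letter set, and (untrained) weights are all invented for the example; this is not the TalkingGlove's actual network or sensor layout.

```python
import numpy as np

# Hypothetical illustration: a tiny feed-forward network that maps readings
# from a glove's flex sensors to fingerspelled letters. Sensor count, letter
# set, and weights are invented for this sketch; the real TalkingGlove's
# network and sensor layout are not reproduced here.

N_SENSORS = 10             # assumed: two flex sensors per finger
LETTERS = ["A", "B", "C"]  # toy subset of the fingerspelling alphabet
HIDDEN = 8

rng = np.random.default_rng(0)
W1 = rng.normal(size=(N_SENSORS, HIDDEN))    # input -> hidden weights (untrained)
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, len(LETTERS))) # hidden -> letter scores
b2 = np.zeros(len(LETTERS))

def classify(sensor_readings: np.ndarray) -> str:
    """Map one frame of glove sensor readings to the most likely letter."""
    h = np.tanh(sensor_readings @ W1 + b1)   # hidden activations
    scores = h @ W2 + b2                     # one score per letter
    return LETTERS[int(np.argmax(scores))]

# One simulated frame of normalized flex-sensor values in [0, 1].
frame = rng.uniform(0.0, 1.0, size=N_SENSORS)
print(classify(frame))  # e.g. "B" -- meaningless until the weights are trained
```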
At Lund University in Sweden, work is being done on haptic interfaces for people with impaired vision. Visually impaired computer users have long had access to Braille displays or devices that provide synthesized speech, but these convey only text, not graphics, which can be frustrating for anyone working in a visual medium like the Web. Haptic interfaces offer an alternative, allowing the user to feel shapes and textures that could approximate a graphical user interface.
At Stanford, this took shape in the 1990s as the "Moose," an experimental haptic mouse that gave new meaning to the terms drag and drop, letting the user feel a pull to suggest the one and a sudden loss of mass to signify the other. As users approached the edge of a window, they could feel the groove; a check box repelled or attracted the cursor, depending on whether it was checked. Experimental speech synthesizers were sometimes used to "read" the text.
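To make that interaction model concrete, here is a minimal sketch of the kind of force mapping such a device might use: a groove-like pull near a window edge, and an attract/repel force around a check box depending on its state. The widget layout, force constants, and spring models are assumptions made for illustration, not the Moose's actual control code.

```python
from dataclasses import dataclass

# Hypothetical force mapping for a haptic mouse: window edges feel like
# grooves, and a check box attracts or repels the cursor depending on whether
# it is checked. All parameters here are invented for the sketch.

@dataclass
class CheckBox:
    x: float
    y: float
    checked: bool

def edge_groove_force(cursor_x: float, edge_x: float, width: float = 5.0,
                      stiffness: float = 0.8) -> float:
    """Spring force (along x) that pulls the cursor toward a window edge
    once it is within `width` units, so the edge feels like a groove."""
    offset = edge_x - cursor_x
    if abs(offset) <= width:
        return stiffness * offset   # pull toward the edge
    return 0.0

def checkbox_force(cursor: tuple, box: CheckBox,
                   radius: float = 20.0, strength: float = 0.5) -> tuple:
    """Attract the cursor to a checked box, repel it from an unchecked one."""
    dx, dy = box.x - cursor[0], box.y - cursor[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0.0 or dist > radius:
        return (0.0, 0.0)
    sign = 1.0 if box.checked else -1.0
    scale = sign * strength * (1.0 - dist / radius) / dist
    return (scale * dx, scale * dy)

# A cursor near a window edge and an unchecked box: it feels the groove's
# pull along x and a gentle push away from the box.
cursor = (98.0, 50.0)
box = CheckBox(x=105.0, y=55.0, checked=False)
print(edge_groove_force(cursor[0], edge_x=100.0))
print(checkbox_force(cursor, box))
```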
Such research has led to the development of commercial haptic devices, such as the Logitech iFeel Mouse, offering the promise of new avenues into virtual worlds for the visually impaired.