- Haptic Devices
- Representative Applications of Haptics
- Issues in Haptic Rendering
- Human Factors
- References
Representative Applications of Haptics
Surgical Simulation and Medical Training
A primary application area for haptics has been surgical simulation and medical training. Langrana, Burdea, Ladeji, and Dinsmore (1997) used the Rutgers Master II haptic device in a training simulation for palpation of subsurface liver tumors. They modeled tumors as comparatively harder spheres within larger, softer spheres. Realistic reaction forces were returned to the user as the virtual hand encountered the "tumors," and the graphical display showed the corresponding tissue deformation produced by the palpation. Finite element analysis, based on experimentally obtained force/deflection curves, was used to calculate the reaction forces corresponding to the deformation. Researchers at the Universidade Católica de Brasília in Brazil (D'Aulignac & Balaniuk, 1999) have produced a physical simulation system providing graphic and haptic interfaces for an echographic examination of the human thigh, using a spring-damper model defined from experimental data. Machado, Moraes, and Zuffo (2000) have used haptics in an immersive simulator of bone marrow harvest for transplant. Andrew Mor of the Robotics Institute at Carnegie Mellon (Mor, 1998) employed the PHANToM in conjunction with a 2 DOF planar device in an arthroscopic surgery simulation. The new device generates a moment about the tip of a surgical tool, thus providing more realistic training for the kinds of unintentional contacts with ligaments and fibrous membranes that an inexperienced resident might encounter.
At Stanford, Balaniuk and Costa (2000) have developed a method to simulate fluid-filled objects suitable for interactive deformation by "cutting," "suturing," and so on. At MIT, De and Srinivasan (1998) have developed models and algorithms for reducing the computational load required to render organ motion and deformation visually and to communicate back to the user the forces resulting from tool-tissue contact. They model soft tissue as thin-walled membranes filled with fluid; the force-displacement response is comparable to that obtained in in vivo experiments. At Berkeley, Sastry and his colleagues (Chapter 13, this volume) are engaged in a joint project with the surgery department of the University of California at San Francisco and the Endorobotics Corporation to build dexterous robots for use inside laparoscopic and endoscopic cannulas, as well as tactile sensing and teletactile display devices and masters for surgical teleoperation.
Aviles and Ranta of Novint Technologies have developed the Virtual Reality Dental Training System, a dental simulator (Aviles & Ranta, 1999). It employs a PHANToM with four tips that mimic dental instruments, which can be used to explore simulated materials such as hard tooth enamel or dentin. Giess, Evers, and Meinzer (1998) integrated haptic volume rendering with the PHANToM into the presurgical process of classifying liver parenchyma, vessel trees, and tumors. Surgeons at the Pennsylvania State University School of Medicine, in collaboration with Cambridge-based Boston Dynamics, used two PHANToMs in a training simulation in which residents passed simulated needles through blood vessels, allowing them to collect baseline data on the surgical skill of new trainees. Iwata, Yano, and Hashimoto (1998) report the development of a surgical simulator with a "free form tissue" that can be "cut" like real tissue. There are few accounts of any systematic testing and evaluation of the simulators described above.
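Several of the simulators above reduce tool-tissue interaction to a spring-damper contact model fitted to experimental force/deflection data. The sketch below is a minimal illustration of the general form of such a model, not any of the cited implementations; the function names and all parameter values are illustrative assumptions.

```python
# A minimal sketch of a spring-damper tissue contact model.
# All stiffness and damping values are illustrative, not taken
# from the cited experiments.

def contact_force(penetration_depth, penetration_velocity,
                  stiffness=800.0, damping=2.0):
    """Return a 1-DOF reaction force (N) for a probe pressing into
    virtual tissue, modeled as a linear spring plus damper.

    penetration_depth    -- how far the probe tip is inside the surface (m)
    penetration_velocity -- rate of penetration (m/s)
    """
    if penetration_depth <= 0.0:
        return 0.0  # no contact, no force
    # Hooke's-law spring opposes penetration; damper opposes velocity.
    return stiffness * penetration_depth + damping * penetration_velocity

# A "tumor" can be approximated as a harder region inside softer tissue:
# switch to a larger stiffness once the tip penetrates past its boundary.
def palpation_force(depth, velocity, tumor_depth=0.01):
    if depth > tumor_depth:
        return contact_force(depth, velocity, stiffness=5000.0)
    return contact_force(depth, velocity)
```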
Gruener (1998), in one of the few research reports with hard data, expresses reservations about the potential of haptics in medical applications; he found that subjects in a telementoring session did not profit from the addition of force feedback to remote ultrasound diagnosis.
Museum Display
Although it is not yet commonplace, a few museums are exploring methods for 3D digitization of priceless artifacts and objects from their sculpture and decorative arts collections, making the images available via CD-ROM or in-house kiosks. For example, the Canadian Museum of Civilization collaborated with Ontario-based Hymarc to use the latter's ColorScan 3D laser camera to create three-dimensional models of objects from the museum's collection (Canarie, Inc., 1998; Shulman, 1998). A similar partnership was formed between the Smithsonian Institution and Synthonic Technologies, a Los Angeles-area company. At Florida State University, the Department of Classics has worked with a team to digitize Etruscan artifacts using the RealScan 3D imaging system from Real 3D (Orlando, Florida), and art historians from Temple University have collaborated with researchers from the Watson Research Laboratory's visual and geometric computing group to create a model of Michelangelo's Pietà, using the Virtuoso shape camera from Visual Interface (Shulman, 1998).
Few museums have yet explored the potential of haptics to allow visitors access to three-dimensional museum objects such as sculpture, bronzes, or examples from the decorative arts. The "hands-off" policies that museums must impose limit appreciation of three-dimensional objects, where full comprehension and understanding rely on the sense of touch as well as vision. Haptic interfaces can allow fuller appreciation of three-dimensional objects without jeopardizing conservation standards, giving museums, research institutes, and other conservators of priceless objects a way to provide the public with a vehicle for object exploration in a modality that could not otherwise be permitted (McLaughlin, Goldberg, Ellison, & Lucas, 1999). At the University of Southern California, researchers at the Integrated Media Systems Center (IMSC) have digitized daguerreotype cases from the collection of the Seaver Center for Western Culture at the Natural History Museum of Los Angeles County and made them available at a PHANToM-equipped kiosk alongside an exhibition of the "real" objects (see Chapter 15, this volume). Bergamasco, Jansson, and colleagues (Jansson, 2001) are undertaking a "Museum of Pure Form"; their group will acquire selected sculptures from the collections of partner museums in a network of European cultural institutions to create a digital database of works of art for haptic exploration.
Haptics raises the prospect of offering museum visitors not only the opportunity to examine and manipulate digitized three-dimensional art objects visually, but also to interact remotely, in real time, with museum staff in joint tactile exploration of works of art. A member of the museum's curatorial staff could, for example, work with a student in a remote classroom to jointly examine an ancient pot or bronze figure, note its interesting contours and textures, and consider such questions as "What is the mark at the base of the pot?" or "Why does this side have such jagged edges?" (Hespanha, Sukhatme, McLaughlin, Akbarian, Garg, & Zhu, 2000; McLaughlin, Sukhatme, Hespanha, Shahabi, Ortega, & Medioni, 2000; Sukhatme, Hespanha, McLaughlin, Shahabi, & Ortega, 2000).
Painting, Sculpting, and CAD
There have been a few projects in which haptic displays are used as alternative input devices for painting, sculpting, and computer-aided design (CAD). Dillon and colleagues (Dillon, Moody, Bartlett, Scully, Morgan, & James, 2000) are developing a "fabric language" to analyze the tactile properties of fabrics as an information resource for haptic fabric sensing. At CERTEC, the Center of Rehabilitation Engineering in Lund, Sweden, Sjöström (1997) and his colleagues have created a painting application in which the PHANToM can be used by the visually impaired; line thickness varies with the force the user exerts on the fingertip thimble, and colors are discriminated by their tactual profiles. At Dartmouth, Henle and Donald (1999) developed an application in which animations are treated as palpable vector fields that can be edited by manipulation with the PHANToM. Marcy, Temkin, Gorman, and Krummel (1998) have developed Tactile Max, a PHANToM plug-in for 3D Studio Max. Dynasculpt, a prototype from Interval Research Corporation (Snibbe, Anderson, & Verplank, 1998), permits sculpting in three dimensions by attaching a virtual mass to the PHANToM position and constructing a ribbon along the path the mass sweeps through 3D space. Gutierrez, Barbero, Aizpitarte, Carrillo, and Eguidazu (1998) have integrated the PHANToM into DATum, a geometric modeler. Objects can be touched, moved, or grasped (with two PHANToMs), and the assembly/disassembly of mechanical objects can be simulated.
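The Dynasculpt approach lends itself to a compact illustration. The sketch below shows one plausible reading of the technique, a point mass coupled to the device position by a spring and damper whose trajectory is recorded as the sculpted ribbon; the class name, constants, and integration scheme are illustrative assumptions, not Interval Research's implementation.

```python
# A minimal sketch of the virtual-mass "ribbon" idea described above.
import numpy as np

class VirtualMassRibbon:
    def __init__(self, mass=0.05, stiffness=50.0, damping=1.0, dt=0.001):
        self.m, self.k, self.b, self.dt = mass, stiffness, damping, dt
        self.pos = np.zeros(3)            # virtual mass position
        self.vel = np.zeros(3)            # virtual mass velocity
        self.ribbon = [self.pos.copy()]   # path swept through 3D space

    def step(self, device_pos):
        """Advance the mass one timestep toward the haptic device position."""
        spring = self.k * (np.asarray(device_pos) - self.pos)
        damper = -self.b * self.vel
        accel = (spring + damper) / self.m
        self.vel += accel * self.dt       # semi-implicit Euler integration
        self.pos += self.vel * self.dt
        self.ribbon.append(self.pos.copy())
        # The force sent back to the device is the reaction to the spring.
        return -spring
```

Because the mass lags and overshoots the hand's motion, the recorded ribbon takes on the smooth, springy character that gives the sculpting technique its appeal.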
Visualization
Haptics has also been incorporated into scientific visualization. Durbeck, Macias, Weinstein, Johnson, and Hollerbach (1998) have interfaced SCIRun, a computational steering software system, to the PHANToM. Both the haptic and graphic displays are driven by the movement of the PHANToM stylus through haptically rendered data volumes. Similar systems have been developed for geoscientific applications (e.g., the Haptic Workbench; Veldkamp, Turner, Gunn, & Stevenson, 1998). Green and Salisbury (1998) have produced a convincing soil simulation in which they varied parameters such as soil properties, plow blade geometry, and angle of attack. Researchers at West Virginia University (Van Scoy, Baker, Gingold, Martino, & Burton, 1999) have applied haptics to mobility training. They designed an application in which a real city block and its buildings could be explored with the PHANToM, using models of the buildings created in Canoma from digital photographs of the scene taken from the streets. At Interactive Simulations, a San Diego-based company, researchers have added a haptic feedback component to Sculpt, a program for analyzing chemical and biological molecular structures, which will permit analysis of molecular conformational flexibility and interactive docking. At the University of North Carolina, Chapel Hill (Chapter 5, this volume), 6 DOF PHANToMs have been used for haptic rendering of high-dimensional scientific datasets, including three-dimensional force fields and tetrahedralized human head volume datasets. We consider further applications of haptics to visualization below, in the section "Assistive Technology for the Blind and Visually Impaired."
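Haptic rendering of gridded scientific data, as in the volume and force-field work just described, commonly amounts to sampling a precomputed vector field at the probe position on each servo tick. The sketch below shows one standard way to do this with trilinear interpolation; the function name, grid layout, and parameters are illustrative assumptions rather than any cited system's code.

```python
# A minimal sketch of force-field sampling for haptic display.
import numpy as np

def sample_force(field, probe, origin, spacing):
    """Trilinearly interpolate a force vector from a gridded field.

    field   -- array of shape (nx, ny, nz, 3), one force vector per voxel
    probe   -- probe tip position in world coordinates, shape (3,)
    origin  -- world position of voxel (0, 0, 0)
    spacing -- voxel edge length (uniform grid assumed)
    """
    g = (np.asarray(probe) - np.asarray(origin)) / spacing  # grid coords
    i0 = np.floor(g).astype(int)
    # Clamp so the 2x2x2 neighborhood stays inside the volume.
    i0 = np.clip(i0, 0, np.array(field.shape[:3]) - 2)
    f = g - i0                       # fractional position within the cell
    force = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                force += w * field[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return force
```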
Military Applications
Haptics has also been used in aerospace and military training and simulations. There are a number of circumstances in a military context in which haptics can provide a useful substitute information source; that is, circumstances in which touch can convey information that, for one reason or another, is not available, not reliably communicated, or not best apprehended through sound and vision. In some cases, combatants may have their view blocked or may be unable to divert attention from a display to attend to other information sources. Battlefield conditions, such as artillery fire or smoke, might make it difficult to hear or see. Conditions might require that communications be inaudible (Transdimension, 2000). For certain applications, for example where terrain or texture information needs to be conveyed, haptics may be the most efficient communication channel. In circumstances like these, haptics is an alternative modality to sound and vision that can be exploited to provide low-bandwidth situation information, commands, and threat warnings (Transdimension, 2000). In other circumstances haptics can function as a supplemental information source to sound or vision. For example, users can be alerted haptically to interesting portions of a military simulation, learning quickly and intuitively about objects, their motions, who may interact with them, and so on.
At the Army's National Automotive Center, the SimTLC (Simulation Throughout the Life Cycle) program has used VR techniques to test military ground vehicles under simulated battlefield conditions. One of the applications has been a simulation of a distributed environment where workers at remote locations can collaborate in reconfiguring a single vehicle chassis with different weapons components, using instrumented force-feedback gloves to manipulate the three-dimensional components (National Automotive Center, 1999). The SIRE simulator (Synthesized Immersion Research Environment) at the Air Force Research Laboratory, Wright-Patterson Air Force Base, incorporated data gloves and tactile displays into its program of development and testing of crew station technologies (Wright-Patterson Air Force Base, 1997). Using tasks such as mechanical assembly, researchers at NASA-Ames have been conducting psychophysical studies of the effects of adding a 3 DOF force-feedback manipulandum to a visual display, noting that control and system dynamics have received ample research attention but that the human factors underlying successful haptic display in simulated environments remain to be identified (Ellis & Adelstein, n.d.). The Naval Aerospace Medical Research Laboratory has developed a "Tactile Situation Awareness System" for providing accurate orientation information in land, sea, and aerospace environments. One application of the system is to alleviate problems related to the spatial disorientation that occurs when a pilot incorrectly perceives the attitude, altitude, or motion of his aircraft; some of this error may be attributable to momentary distraction, reduced visibility, or an increased workload. Because the system (a vibrotactile transducer) can be attached to a portable sensor, it can also be used in such applications as extravehicular space exploration activity or Special Forces operations. Among the benefits claimed for integration of haptics with audio and visual displays are increased situation awareness, the ability to track targets and information sources spatially, and silent communication under conditions where sound is not possible or desirable (e.g., hostile environments) (Naval Aerospace Medical Research Laboratory, 2000).
Interaction Techniques
An obvious application of haptics is to the user interface, in particular its repertoire of interaction techniques, loosely defined as the set of procedures by which basic tasks, such as opening and closing windows, scrolling, and selecting from a menu, are performed (Kirkpatrick & Douglas, 1999). Indeed, interaction techniques have been a popular application area for 2D haptic mice like the Wingman and iFeel, which work with the Windows interface to add force feedback to windows, scroll bars, and the like. For some of these force-feedback mice, shapes, textures, and other properties of objects (spring, damping) can be "rendered" with JavaScript and the objects delivered for exploration with the haptic mice via standard Web pages. Haptics offers a natural user interface based on the human gestural system. The resistance and friction provided by stylus-based force feedback add an intuitive feel to such everyday tasks as dragging, sliding levers, and depressing buttons. There are more complex operations, such as concatenating or editing, for which a grasping metaphor may be appropriate. Here the whole-hand force feedback provided by glove-based devices could convey the feeling of stacking or juxtaposing several objects or of plucking an unwanted element from a single object. The inclusion of palpable physics in virtual environments, such as the constraints imposed by walls or the effect of altered gravity on weight, may enhance the success of a user's interaction with the environment (Adelstein & Ellis, 2000).
Sometimes too much freedom of movement is inefficient, sending users down wrong paths and into unnecessary errors that system designers could help them avoid through the appropriate use of built-in force constraints that encourage or require the user to do things the "right" way (Hutchins & Gunn, 1999). Haptics can also be used to constrain the user's interaction with screen elements, for example by steering him or her away from areas unproductive for the task at hand, or by increasing the stiffness of controls so that procedures are harder to trigger accidentally.
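One common way to realize such a constraint is a guiding force that attracts the device toward a preferred path and saturates so the user can still override it. The sketch below is a minimal illustration of that idea, not the method of Hutchins and Gunn (1999); the gains, the nearest-point simplification, and the function name are assumptions.

```python
# A minimal sketch of a path-guidance force constraint.
import numpy as np

def guidance_force(tip, path_points, gain=200.0, max_force=3.0):
    """Attract the haptic tip toward the nearest sampled point on a path.

    tip         -- current device tip position, shape (3,)
    path_points -- (N, 3) array of points sampled along the preferred path
    """
    pts = np.asarray(path_points)
    tip = np.asarray(tip)
    nearest = pts[np.argmin(np.linalg.norm(pts - tip, axis=1))]
    force = gain * (nearest - tip)   # spring pulls the tip back to the path
    norm = np.linalg.norm(force)
    if norm > max_force:
        force *= max_force / norm    # saturate so guidance stays overridable
    return force
```

Capping the force is the design choice that makes this a suggestion rather than a hard wall: the user feels a clear pull toward the "right" way but can push through it deliberately.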
Assistive Technology for the Blind and Visually Impaired
Most haptic systems still rely heavily on a combined visual/haptic interface. This dual modality is very forgiving in terms of the quality of the haptic rendering. This is because ordinarily the user is able to see the object being touched and naturally persuades herself that the force feedback coming from the haptic device closely matches the visual input. However, in most current haptic interfaces, the quality of haptic rendering is actually poor and, if the user closes her eyes, she will only be able to distinguish between very simple shapes (such as balls, cubes, etc.).
To date there has been a modest amount of work on the use of machine haptics for the blind and visually impaired. Among the two-dimensional haptic devices potentially useful in this context, the most recent are the Moose, the Wingman, the iFeel, and the Sidewinder. The Moose, a 2D haptic interface developed at Stanford (O'Modhrain & Gillespie, 1998), reinterprets a Windows screen with force feedback such that icons, scroll bars, and other screen elements like the edges of windows are rendered haptically, providing an alternative to the conventional graphical user interface (GUI). For example, drag-and-drop operations are realized by increasing or decreasing the apparent mass of the Moose's manipulandum. Although not designed specifically with blind users in mind, the Logitech Wingman, developed by Immersion Corporation and formerly known as the "FEELit" mouse, similarly renders the Windows screen haptically in two dimensions and works with the Web as well, allowing the user to "snap to" hyperlinks or feel the "texture" of a textile using a "FeeltheWeb" ActiveX control programmed through JavaScript. (The Wingman mouse is no longer commercially available.) Swedish researchers have experimented, with mixed results, with two-dimensional haptic devices like the Microsoft Sidewinder joystick in games devised for the visually impaired, such as "Labyrinth," in which users negotiate a maze using force feedback (Johansson & Linde, 1998, 1999).
Among the three-dimensional haptic devices, Immersion's Impulse Engine 3000 has been shown to be an effective display system for blind users. Colwell et al. (1998) had blind and sighted subjects make magnitude estimations of the roughness of virtual textures using the Impulse Engine and found that the blind subjects were more discriminating with respect to the roughness of texture and had different mental maps of the location of the haptic probe relative to the virtual object than sighted users. The researchers found, however, that for complex virtual objects, such as models of sofas and chairs, haptic information was simply not sufficient to produce recognition and had to be supplemented with information from other sources for all users.
Most of the recent work in 3D haptics for the blind has tended to focus on SensAble's PHANToM. At CERTEC, the Center of Rehabilitation Engineering in Lund, Sweden, in addition to Sjöström's painting application, described earlier (Sjöström, 1997), a program has been developed for "feeling" mathematical curves and surfaces, as well as a variant of the game "Battleship" that uses force feedback to communicate the different sensations of the "water surface" as bombs are dropped and opponents are sunk. The game is one of the few that can also be enjoyed by deaf-blind children. Blind but hearing children may play "The Memory Game," a variation on "Concentration" based on sound-pair buttons that disappear tactually when a match is made (Rassmus-Gröhn & Sjöström, 1998).
Jansson and his colleagues at Uppsala University in Sweden have been at the forefront of research on haptics for the blind (Jansson, 1998; Jansson & Billberger, 1999; Jansson, Faenger, Konig, & Billberger, 1998). Representative of this work is an experiment reported in Jansson and Billberger (1999), in which blindfolded subjects were evaluated for speed and accuracy in identifying virtual objects (cubes, spheres, cylinders, and cones) with the PHANToM and corresponding physical models of the virtual objects by hand exploration. Jansson and Billberger found that both speed and accuracy in shape identification were significantly poorer for the virtual objects. Speed was particularly affected because the exploratory procedures most natural to shape identification, grasping and manipulating with both hands, could not be emulated by the single-point contact of the PHANToM tip. They also noted that subject performance was not affected by the type of PHANToM interface (thimble versus stylus). However, shape recognition of virtual objects with the PHANToM was significantly influenced by the size of the object, with larger objects being more readily identified. The authors noted that shape identification with the PHANToM is a considerably more difficult task than texture recognition: for the latter, a single lateral sweep of the tip in one direction may be sufficient, whereas more complex procedures are required to apprehend shape. In Chapter 9 of this volume Jansson reports on his work with nonrealistic haptic rendering and with the method of successive presentation of increasingly complex scenes for haptic perception when visual guidance is unavailable.
Multivis (Multimodal Visualization for Blind People) is a project currently being undertaken at the University of Glasgow, which will utilize force feedback, 3D sound rendering, braille, and speech input and output to provide blind users access to complex visual displays. Yu, Ramloll, and Brewster (2000) have developed a multimodal approach to providing blind users access to complex graphical data such as line graphs and bar charts. Among their techniques are the use of "haptic gridlines" to help users locate data values on the graphs. Different lines are distinguished by applying two levels of surface friction to them ("sticky" or "slippery"). Because these features have not been found to be uniformly helpful to blind users, a toggle feature was added so that the gridlines and surface friction could be turned on and off. Subjects in their studies had to use the PHANToM to estimate the x and y coordinates of the minimum and maximum points on two lines. Both blind and sighted subjects were effective at distinguishing lines by their surface friction. Gridlines, however, were sometimes confused with the other lines, and counting the gridlines from right and left margins was a tedious process prone to error. The authors recommended, based on their observations, that lines on a graph should be modeled as grooved rather than raised ("engraving" rather than "embossing"), as the PHANToM tip "slips off" the raised surface of the line.
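The "engraving rather than embossing" recommendation is easy to express as a force law: inside a groove, a lateral spring pulls the tip back toward the line's center, so the tip tracks the line instead of slipping off it. The sketch below is a minimal illustration under assumed parameters, not the authors' implementation.

```python
# A minimal sketch of rendering a graph line as a groove.
# Width and stiffness values are illustrative assumptions.

def groove_force(lateral_offset, half_width=0.002, stiffness=1500.0):
    """Lateral 1-DOF force (N) on a tip near a grooved line.

    lateral_offset -- signed distance of the tip from the line center (m)
    half_width     -- groove half-width; beyond it the tip has escaped
    """
    if abs(lateral_offset) >= half_width:
        return 0.0                    # outside the groove: free motion
    # Inside the groove, a spring pushes the tip back toward the center,
    # so sweeping along the line keeps the tip captured.
    return -stiffness * lateral_offset
```

An embossed (raised) line would invert this behavior, pushing the tip away from the center, which is precisely why users' probes "slip off" raised renderings.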
Ramloll, Yu, and their colleagues (2000) note that previous work on alternatives to graphical visualization indicates that for blind persons, pitch is an effective indicator of the location of a point with respect to an axis. Spatial audio is used to assist the user in tasks such as detecting the current location of the PHANToM tip relative to the origin of a curve (Ramloll, Yu, et al., 2000). Pitches corresponding to the coordinates of the axes can be played in rapid succession to give an "overview" picture of the shape of the curve. Such global information is useful in gaining a quick overall orientation to the graph that purely local information can provide only slowly, over time. Ramloll et al. also recommend a guided haptic overview of the borders, axes, and curves: for example, at intersections of axes, applying a force in the current direction of motion along a curve to make sure that the user does not go off in the wrong direction.
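The pitch mapping described here can be sketched concisely: a data value is mapped exponentially to a frequency so that equal data steps correspond to equal musical intervals, and an "overview" is simply the sequence of pitches for the sampled curve played in rapid succession. The mapping, frequency range, and function names below are illustrative assumptions.

```python
# A minimal sketch of pitch mapping for graph sonification.

def y_to_frequency(y, y_min, y_max, f_low=220.0, f_high=880.0):
    """Map a data value to a pitch between f_low and f_high (Hz),
    exponentially, so equal data steps give equal musical intervals."""
    t = (y - y_min) / (y_max - y_min)          # normalize to [0, 1]
    return f_low * (f_high / f_low) ** t

def overview_pitches(ys):
    """Pitches for a sampled curve, for playback in rapid succession."""
    lo, hi = min(ys), max(ys)
    if hi == lo:
        hi = lo + 1.0   # degenerate flat curve: avoid divide-by-zero
    return [y_to_frequency(y, lo, hi) for y in ys]
```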
Other researchers working in the area of joint haptic-sonification techniques for visualization for the blind include Grabowski and Barner (Grabowski, 1999; Grabowski & Barner, 1998). In this work, auditory feedback (physically modeled impact sound) is integrated with the PHANToM interface. For instance, sound and haptics are integrated such that a virtual object will produce an appropriate sound when struck. The sound varies depending on such factors as the energy of the impact, its location, and the user's distance from the object (Grabowski, 1999).