9.6 Voice Commands
Voice commands can be issued via simple speech recognition or by means of spoken dialogue techniques. Speech recognition techniques are typically used to issue single commands to the system, whereas spoken dialogue techniques focus on sustaining a discourse between the user and the system.
9.6.1 Techniques
A spoken dialogue system provides an interface between a user and a computer-based application that permits spoken interaction with the application in a relatively natural manner (McTear 2002; Jurafsky and Martin 2008).
The most critical component of a spoken dialogue system (and of simple speech recognition techniques) is the speech recognition engine. A wide range of factors may influence the recognition rate, such as variability among speakers and background noise. Recognition engines can be speaker-dependent, requiring initial training of the system, but most are speaker-independent and normally require no training. Systems also differ in the size of their vocabulary. The response generated as output to the user can either confirm that an action has been performed or inform the user that more input is needed to complete a control command. In a spoken dialogue system, the response should be adapted to the flow of the discourse (which requires a dialogue control mechanism) and generated as synthesized speech.
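As a concrete illustration of this recognize-and-respond loop, the short Python sketch below maps a recognized utterance to a small command set and either confirms the action or asks for more input. The recognizer itself is treated as a black box, and the command phrases are hypothetical.

COMMANDS = {
    "delete object": "DELETE",
    "paint object": "PAINT",
    "undo": "UNDO",
}

def respond(recognized_text: str) -> str:
    """Map a recognized utterance to a command and generate user feedback."""
    text = recognized_text.lower().strip()
    if text in COMMANDS:
        # Confirm that the action has been performed.
        return f"OK, executing '{text}'."
    # The engine heard the start of a known phrase but an argument is missing,
    # so ask for more input (the dialogue-control side of the loop).
    for phrase in COMMANDS:
        if text and phrase.startswith(text):
            return f"Did you mean '{phrase}'? Please repeat the full command."
    return "Sorry, I did not understand. Please repeat the command."

if __name__ == "__main__":
    print(respond("undo"))    # full command -> confirmation
    print(respond("paint"))   # partial command -> request for more input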
Today’s voice recognition systems are advanced and widespread; phones in particular make heavy use of voice commands and spoken dialogue. As such, phones can be leveraged to provide voice control in immersive VR and in head-worn and handheld AR systems.
Many 3D UIs that use speech recognition also include other, complementary input methods (Billinghurst 1998). Such techniques are labeled multimodal and are discussed in section 9.9.
9.6.2 Design and Implementation Issues
The development of a 3D UI using speech recognition or a spoken dialogue system involves many factors. One should start by defining which tasks need to be performed via the voice interface. For an application with a limited number of functions, a simple speech recognition system will probably work well. The task defines the vocabulary size of the speech engine: the more complex the task and the domain in which it is performed, the larger the vocabulary is likely to be. Highly complex applications may need conversational UIs built on a spoken dialogue system in order to make the full functionality of voice input accessible. In the case of a spoken dialogue system, the design process should also consider what vocal information the user needs to provide so that the system can determine the user’s intentions.
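To make the relationship between task complexity and vocabulary size concrete, the sketch below enumerates a command vocabulary for a hypothetical furniture-layout application; the verbs, modifiers, and objects are invented purely for illustration.

from itertools import product

# Hypothetical task vocabulary for a simple 3D furniture-layout application.
VERBS = ["select", "delete", "move", "scale", "color"]
MODIFIERS = ["", "red", "blue", "larger", "smaller"]
OBJECTS = ["chair", "table", "lamp", "shelf"]

# Every distinct phrase the recognizer must be able to tell apart.
phrases = [" ".join(filter(None, p)) for p in product(VERBS, MODIFIERS, OBJECTS)]

print(f"{len(phrases)} phrases in the command vocabulary")
print(phrases[:3])
# Adding a single verb or object multiplies this count rather than adding to it,
# which is why complex tasks quickly push a design toward a full spoken
# dialogue (conversational) interface.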
Developers should be aware that voice interfaces are invisible to the user: the user is normally not presented with an overview of the functions that can be performed through speech. One of the key factors in capturing the user’s actual intentions is therefore verification. Either by error correction via semantic and syntactic filtering (prediction methods that use the semantics or syntax of a sentence to limit the possible interpretations) or by a formal discourse model (a question-and-answer mechanism), the system must ensure that it has understood what the user wants.
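A minimal sketch of such a verification step is shown below, assuming the recognition engine returns a ranked list of (text, confidence) hypotheses; the command names and the confidence threshold are illustrative assumptions.

# Commands that are legal in the current application state (illustrative).
VALID_IN_CONTEXT = {"delete selection", "duplicate selection", "deselect"}

def verify(hypotheses):
    """Filter recognizer hypotheses and decide whether to ask back.

    `hypotheses` is a list of (text, confidence) pairs as returned by a
    hypothetical recognition engine.
    """
    # Semantic/syntactic filtering: discard hypotheses that are not legal
    # commands in the current context.
    legal = [(text, conf) for text, conf in hypotheses if text in VALID_IN_CONTEXT]
    if not legal:
        return "No valid command heard. Please rephrase."
    best, confidence = max(legal, key=lambda pair: pair[1])
    if confidence < 0.7:
        # Formal discourse model: fall back to a question-and-answer turn.
        return f"Did you say '{best}'? (yes/no)"
    return f"Executing '{best}'."

print(verify([("delete selection", 0.62), ("the elect shun", 0.35)]))
print(verify([("deselect", 0.91)]))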
Unlike other system control techniques, speech-based techniques initialize, select, and issue a command all at once. Sometimes another input stream (such as a button press) or a specific voice command is used to initialize the speech system. This disambiguates the start of voice input and is called a push-to-talk system (see also Chapter 6, “3D User Interface Input Hardware,” section 6.4.1). Error rates increase when the application involves direct communication between multiple participants; for instance, a comment to a colleague can easily be misinterpreted as a voice command to the system. One may therefore need to separate human communication from human–computer interaction when designing speech interfaces. Syntactic differences between personal communication and system interaction can be used to distinguish between the two voice streams (Shneiderman 2000).
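The sketch below shows one way such a push-to-talk gate might look, assuming that either a button state or a hypothetical wake word (“computer”) marks the start of voice input intended for the system.

WAKE_WORD = "computer"   # illustrative; any distinctive word would do

def accept_utterance(utterance, button_pressed):
    """Return the command portion of an utterance only if the system was
    explicitly addressed; otherwise treat it as human-to-human speech."""
    if button_pressed:                      # hardware push-to-talk
        return utterance
    words = utterance.lower().split()
    if words and words[0] == WAKE_WORD:     # spoken push-to-talk
        return " ".join(words[1:])          # strip the wake word
    return None                             # not addressed to the system

print(accept_utterance("computer delete object", button_pressed=False))
print(accept_utterance("let's just delete that object", button_pressed=False))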
9.6.3 Practical Application
Speech input can be a very powerful system control technique in a 3D UI: it is hands-free and natural. The user may first need to learn the available voice commands before they can be issued, which is easy enough for a smaller functional set; however, most of today’s systems are powerful enough to understand complete sentences without prior learning. Moreover, most of today’s phones already include a powerful speech recognition system that can be readily used, so for hybrid interfaces or handheld AR, voice recognition may be a good option. Speech is also well suited for symbolic input, since speech can be translated directly into text; as such, it is a lightweight method for dictating text or numbers. Finally, as we discuss in section 9.9, voice can be combined with other system control techniques to form a multimodal input stream to the computer.
In domains where users need both hands to perform their main interaction tasks (e.g., a medical operating room), voice can be a useful system control asset. The doctor can keep using their hands, and since there is nothing to touch, the environment can be kept sterile. Still, continuous voice input is tiring and cannot be used in every environment.
Voice interface issues have been studied in many different contexts. For example, using speech commands for controlling a system via a telephone poses many of the same problems as using voice commands in a 3D UI. Please refer to Brewster (1998) for further discussion of issues involved in such communication streams.