System Control
A focus on the different system control techniques that can be used to change application state, issue commands, and provide other forms of input to a 3D application.
On a desktop computer, we input text and commands on an everyday basis using 2D UI elements such as pull-down menus, pop-up menus, or toolbars. These elements are examples of system control techniques: they enable us to send commands to an application, change a mode, or modify a parameter. While we often take the design of such techniques in 2D UIs for granted, system control and symbolic input are not trivial in 3D applications. Simply adapting 2D desktop-based widgets is not always the best solution. In this chapter, we discuss and compare various system control and symbolic input solutions for 3D UIs.
9.1 Introduction
The issuing of commands is a critical way to access any computer system’s functionality. For example, with a traditional desktop computer we may want to save a document or change from a brush tool to an eraser tool in a painting application. To perform such tasks, we use system control techniques like menus or function keys on a keyboard. Designers of desktop and touch-based system control interfaces have developed a plethora of widely used and well-understood graphical user interface (GUI) techniques, such as those used in the WIMP (Windows, Icons, Menus, Point and Click) metaphor (Preece et al. 2002). Although many such techniques exist for 2D interfaces, designing system control for a 3D UI can be challenging. In this chapter, we provide an overview of system control methods and review their advantages and disadvantages.
Although much of the real work in a 3D application consists of interaction tasks like selection and manipulation, system control is critical because it is the glue that lets the user control the interaction flow between the other key tasks in an application. In many tasks, system control is intertwined with symbolic input, the input of characters and numbers. For example, users may need to enter a filename to save their work or specify a numeric parameter for a scaling command. In this chapter, we will focus on system control and symbolic input concurrently instead of handling these tasks separately.
To be sure, 2D and 3D applications differ with respect to symbolic input. For example, in writing this book with a word processor, the core activity is symbolic input, accomplished by typing on a keyboard. This activity is interspersed with many small system control tasks: saving the current document by clicking a button, inserting a picture by choosing an item from a menu, or underlining a piece of text with a keyboard shortcut, to name a few. Yet within most 3D applications the focus is the opposite: users input text and numbers only occasionally, and what they enter usually consists of short strings. While this may change in the future with more effective techniques, the current state of affairs is limited symbolic input in support of system control tasks.
We can define system control as the user task in which commands are issued to
request the system to perform a particular function,
change the mode of interaction, or
change the system state.
The key word in this definition is command. In selection, manipulation, and travel tasks, the user typically specifies not only what should be done, but also how it should be done, more or less directly controlling the action. In system control tasks, the user typically specifies only what should be done and leaves it up to the system to determine the details.
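To make this distinction concrete, the sketch below models system control as discrete command events that the application interprets on the user's behalf. It is a minimal illustration of the three kinds of commands listed above; the `Application`, `Command`, and `dispatch` names are hypothetical and not drawn from any particular toolkit.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Command(Enum):
    SAVE_SCENE = auto()        # ask the system to perform a function
    SELECT_ERASER = auto()     # change the mode of interaction
    TOGGLE_WIREFRAME = auto()  # change the system state

@dataclass
class Application:
    active_tool: str = "brush"
    wireframe: bool = False

    def save(self):
        # The system decides *how* to save (format, location, versioning).
        print("scene saved")

def dispatch(app: Application, command: Command):
    """The user specifies only *what* should happen; the details are left to the system."""
    if command is Command.SAVE_SCENE:
        app.save()
    elif command is Command.SELECT_ERASER:
        app.active_tool = "eraser"          # mode change: affects later manipulation
    elif command is Command.TOGGLE_WIREFRAME:
        app.wireframe = not app.wireframe   # system state change

# A menu selection, gesture, or voice command would ultimately resolve to one of these events.
dispatch(Application(), Command.SELECT_ERASER)
```

Whatever the input technique, it ultimately produces such a discrete command; the techniques discussed in this chapter differ mainly in how that command is expressed and triggered by the user.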
In this chapter we consider system control to be an explicit rather than an implicit action. Interfaces in other domains have used methods that observe user behavior in order to adjust the system's mode automatically (e.g., Celentano and Pittarello 2004; Li and Hsu 2004), but we will not focus on this kind of interface.
In 2D interfaces, system control is supported by specific interaction styles, such as pull-down menus, text-based command lines, or tool palettes (Preece et al. 2002). Many of these interaction styles have also been adapted to 3D UIs to provide a range of system control elements (see section 9.4), which may be highly suitable for desktop-based 3D UIs. 2D methods may also be appropriate in handheld AR applications, where the application often relies on screen-based (touch) input as well. But for immersive applications in particular, WIMP-style interaction may not always be effective. We cannot assume that simply transferring conventional interaction styles will lead to high usability.
In immersive VR, users have to deal with 6-DOF input as opposed to 2-DOF on the desktop. These differences create new problems but also new possibilities for system control. In 3D UIs it may be more appropriate to use nonconventional system control techniques (Bullinger et al. 1997). These system control methods may be combined with traditional 2D methods to form hybrid interaction techniques. We will talk about the potential and implications of merging 2D and 3D techniques at several stages throughout this chapter.
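One common hybrid pattern, for example, keeps an adapted 2D menu panel in the 3D scene and selects its items with a 6-DOF controller ray. The sketch below, using NumPy, assumes a planar panel whose items are stacked along the panel's up axis; the function and variable names are illustrative and not taken from any existing 3D UI library.

```python
import numpy as np

def ray_plane_hit(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a controller ray with the menu's plane; return the hit point or None."""
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-6:
        return None  # ray is parallel to the menu panel
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    return ray_origin + t * ray_dir if t > 0 else None

def pick_menu_item(hit, panel_origin, panel_up, item_height, n_items):
    """Map the 3D hit point to an item index, counted from panel_origin along panel_up."""
    v = np.dot(hit - panel_origin, panel_up)  # signed offset along the panel's up axis
    index = int(v // item_height)
    return index if 0 <= index < n_items else None

# Example: a tracked controller at head height pointing at a four-item panel one meter away.
hit = ray_plane_hit(np.array([0.0, 1.5, 0.0]), np.array([0.0, 0.0, -1.0]),
                    np.array([-0.1, 1.4, -1.0]), np.array([0.0, 0.0, 1.0]))
if hit is not None:
    item = pick_menu_item(hit, np.array([-0.1, 1.4, -1.0]),
                          np.array([0.0, 1.0, 0.0]), 0.05, 4)
```

Here the 6-DOF input device supplies the pointing ray, while the menu itself remains a familiar 2D widget, illustrating how conventional and nonconventional techniques can be combined.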
9.1.1 Chapter Roadmap
Before describing specific techniques, we will consider two categories of factors that influence the effectiveness of all techniques: human factors and system factors. We then present a classification of system control techniques for 3D UIs (section 9.3). Next, we describe each of the major categories in this classification (sections 9.4–9.8). In each of these sections, we describe representative techniques, discuss the relevant design and implementation issues, discuss specific symbolic input issues, and provide guidance on the practical application of the techniques. In section 9.9, we cover multimodal system control techniques, which combine multiple methods of input to improve usability and performance. We conclude the chapter, as usual, with general design guidelines and system control considerations in our two case studies.