9.5 Graphical Menus
Graphical menus for 3D UIs are the 3D equivalent of the 2D menus that have proven to be a successful system control technique in desktop UIs. Because of their success and familiarity to users, many developers have chosen to experiment with graphical menus for 3D UIs. However, the design of graphical menus for 3D UIs comes with some unique challenges.
9.5.1 Techniques
Graphical menus used in 3D UIs can be subdivided into three categories:
adapted 2D menus
1-DOF menus
3D widgets
Adapted 2D Menus
Menus that are simple adaptations of their 2D counterparts have, for obvious reasons, been the most popular group of 3D system control techniques. Adapted 2D menus function in essentially the same way as they do on the desktop; examples include pull-down menus, pop-up menus, floating menus, and toolbars. These menus are a common choice for more complex sets of functions, since they provide good structure for larger numbers of functions and most users are familiar with the underlying principles (interaction style) of controlling a menu. On the other hand, these menus can occlude the environment, and users may have trouble finding the menu or selecting items with a 3D selection technique.
Figure 9.3 shows an example of an adapted 2D menu used in a Virtual Museum application in a surround-screen display. It allows a user to plan an exhibition by finding and selecting images of artwork. The menu is semitransparent to reduce occlusion of the 3D environment. Another example can be seen in Figure 9.4, showing a pie menu in a virtual environment. Pie menus can often be combined with marking-menu techniques (see section 9.7; Gebhardt et al. 2010).
Figure 9.3 A floating menu in the Virtual Museum application. (Photograph courtesy of Gerhard Eckel, Fraunhofer IMK)
Figure 9.4 Pie menu in immersive virtual environment. (Photograph courtesy of Thorsten Kuhlen, Virtual Reality & Immersive Visualization Group, RWTH Aachen)
There are numerous ways of adapting 2D menus by tuning aspects such as placement or input technique. For example, one adaptation of 2D menus that has been successful in 3D UIs is to attach the menus to the user’s head. This way, the menu is always accessible, no matter where the user is looking. On the other hand, head-coupled menus can occlude the environment and potentially reduce the sense of presence.
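To illustrate, a head-referenced menu can be implemented by recomputing the menu's pose from the tracked head pose every frame. The following is a minimal sketch, not taken from any particular toolkit: the Pose type, the offset values, and the convention that the rotation matrix maps head-frame vectors to world coordinates (with negative z pointing forward) are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple   # (x, y, z) in world coordinates
    rotation: list    # 3x3 matrix (list of rows) mapping head-frame vectors to world

def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def head_referenced_menu_pose(head: Pose,
                              offset=(0.0, -0.15, -0.6)) -> Pose:
    """Keep the menu at a fixed offset in the head's frame: here 0.6 m in
    front of and slightly below the line of sight (negative z = forward)."""
    world_offset = mat_vec(head.rotation, offset)
    position = tuple(p + o for p, o in zip(head.position, world_offset))
    # The menu inherits the head's orientation, so it always faces the user.
    return Pose(position, head.rotation)
```

Called once per frame with the latest head pose, this keeps the menu in view wherever the user looks, which is also exactly what causes the occlusion trade-off noted above.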
Another method is attaching a menu to the user’s hand in a 3D UI, assigning menu items to different fingers. For example, Pinch Gloves (see Chapter 6, Figure 6.25) can be used to interpret a pinch between a finger and the thumb on the same hand as a menu selection. An example of a finger-driven menu in AR is depicted in Figure 9.5 (Piekarski and Thomas 2003). Using Pinch Gloves, a typical approach is to use the nondominant hand to select a menu and the dominant hand to select an item within the menu. However, in many applications there will be more options than simple finger mapping can handle. The TULIP (Three-Up, Labels In Palm) technique (Bowman and Wingrave 2001) was designed to address this problem by letting users access three menu items at a time and using the fourth finger to switch to a new set of three items.
Figure 9.5 TINMITH menu using Pinch Gloves. (Adapted from Piekarski and Thomas 2003)
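The paging behavior of TULIP can be captured in a few lines. The sketch below is an assumption-laden reconstruction, not the authors' implementation: pinch events are assumed to arrive as simple finger names, three items are visible at a time, and the little finger advances to the next set of three.

```python
class TulipMenu:
    def __init__(self, items):
        self.items = list(items)
        self.page = 0  # index of the first of the three visible items

    def visible_items(self):
        return self.items[self.page:self.page + 3]

    def on_pinch(self, finger):
        """Handle a pinch of the given finger ('index', 'middle', 'ring',
        or 'pinky') against the thumb; returns the chosen item or None."""
        if finger == "pinky":            # fourth finger: show the next three
            self.page += 3
            if self.page >= len(self.items):
                self.page = 0            # wrap around to the first page
            return None
        slot = {"index": 0, "middle": 1, "ring": 2}[finger]
        visible = self.visible_items()
        return visible[slot] if slot < len(visible) else None

# e.g., TulipMenu(["copy", "paste", "delete", "undo"]).on_pinch("middle")
# returns "paste"; pinching the pinky instead would page to ["undo"].
```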
Another powerful technique is to attach the menu to a physical surface, which could be a phone, a tablet, or any other kind of surface. Figure 9.6 shows an example. These devices are often tracked, so finding the menu is as easy as bringing the physical tablet into view. The physical surface of the tablet also helps the user select menu items, and the menu can easily be put away. However, the structure and flow of action may change considerably if the tablet is used for menus while a different input device is used for primary tasks.
Figure 9.6 Tablet control of interactive visualizations layered on top of surround-view imagery in the UCSB Allosphere. Photograph shows left-eye view of stereo content viewed and controlled by the user. (Image courtesy of Donghao Ren and Tobias Höllerer)
1-DOF Menus
Selection of an item from a menu is essentially a one-dimensional operation. This observation led to the development of 1-DOF menus. A 1-DOF menu is often attached to the user’s hand, with the menu items arranged in a circular pattern around it; this design led to the name ring menu (Liang and Green 1994; Shaw and Green 1994). With a ring menu, the user rotates his hand until the desired item falls within a “selection basket.” Of course, the hand rotation or movement can also be mapped onto a linear menu, but a circular menu matches well with the mental expectation of rotation. The performance of a ring menu depends on the physical movement of the hand and wrist, and the primary axis of rotation should be carefully chosen.
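The core of a ring menu is a mapping from wrist rotation to an item index. A minimal sketch, assuming the roll angle about the chosen primary axis is available in radians:

```python
import math

def ring_menu_selection(roll_radians: float, num_items: int) -> int:
    """Map the wrist roll about the menu's primary axis to the index of the
    item currently sitting in the fixed 'selection basket' (at angle 0)."""
    slot_angle = 2.0 * math.pi / num_items
    angle = roll_radians % (2.0 * math.pi)   # normalize into [0, 2*pi)
    return int(round(angle / slot_angle)) % num_items
```

Because selection depends only on this single angle, all other tracker DOF can simply be ignored, which is what makes the technique robust.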
Hand rotation is not the only possible way to select an item in a 1-DOF ring menu. The user could also rotate the desired item into position using a button or buttons on the input device; a dial on a joystick is one example of how this could be achieved. Another method is using tangible tiles, such as those used in the tangible skin cube (Figure 9.7; Lee and Woo 2010). 1-DOF menus can also be used eyes-off by coupling the rotational motion of the wrist to an audio-based menu. These kinds of techniques have also been used in wearable devices (Kajastila and Lokki 2009; Brewster et al. 2003).
Figure 9.7 Ring menu implemented with tangible skin cubes. (Adapted from Lee and Woo 2010)
Handheld widgets are another type of 1-DOF menu that, instead of using rotation, use relative hand position (Mine et al. 1997). By moving the hands closer together or further apart, different items in the menu can be selected.
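A sketch of the corresponding mapping for handheld widgets, assuming the inter-hand distance is available in meters; the range bounds here are illustrative, not values from Mine et al.:

```python
def handheld_widget_selection(hand_distance: float, num_items: int,
                              min_d: float = 0.05, max_d: float = 0.60) -> int:
    """Map the distance between the two hands (meters) to a menu index:
    hands close together select the first item, far apart the last."""
    t = (hand_distance - min_d) / (max_d - min_d)
    t = max(0.0, min(1.0, t))                 # clamp to the usable range
    return min(int(t * num_items), num_items - 1)
```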
In general, 1-DOF menus are quite easy to use. Menu items can be selected quickly, as long as the number of items is relatively small and ergonomic constraints are considered. Because of the strong placement cue, 1-DOF menus also afford rapid access and use. The user does not have to find the menu if it is attached to his hand and does not have to switch his focus away from the area in which he is performing actions.
3D Widgets
The most exotic group of graphical menu techniques for system control is 3D widgets. They take advantage of the extra DOF available in a 3D environment to enable more complex menu structures or better visual affordances for menu entries. We distinguish between two kinds of 3D widgets: collocated (context-sensitive) and non-context-sensitive widgets.
With collocated widgets, the functionality of a menu is moved onto an object in the 3D environment, and geometry and functionality are strongly coupled. Conner and colleagues (1992) refer to widgets as “the combination of geometry and behavior.” For example, suppose a user wishes to manipulate a simple geometric object like a box. We could design an interface in which the user first chooses a manipulation mode (e.g., translation, scaling, or rotation) from a menu and then manipulates the box directly. With collocated 3D widgets, however, we can place the menu items directly on the box—menu functionality is directly connected to the object (Figure 9.8). To scale the box, the user simply selects and moves the scaling widget, thus combining the mode selection and the manipulation into a single step. The widgets are context-sensitive; only those widgets that apply to an object appear when the object is selected. As in the example, collocated widgets are typically used for changing geometric parameters and are also often found in desktop modeling applications.
Figure 9.8 A 3D collocated widget for scaling an object. (Image courtesy of Andrew Forsberg, Brown University Computer Graphics Group)
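The key implementation idea behind collocated widgets is that each handle carries both its geometry and the single operation it performs, so no separate mode is needed. A minimal sketch along these lines (the Box and ScaleHandle types are hypothetical, not from Conner et al.):

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    center: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    size: list = field(default_factory=lambda: [1.0, 1.0, 1.0])

class ScaleHandle:
    """A widget placed on one face of the box. Dragging it along that face's
    normal rescales the box on that axis only, so choosing the operation and
    performing the manipulation happen in a single step (no separate mode)."""
    def __init__(self, box: Box, axis: int):   # axis: 0=x, 1=y, 2=z
        self.box, self.axis = box, axis

    def drag(self, delta_along_normal: float):
        new_size = self.box.size[self.axis] + delta_along_normal
        self.box.size[self.axis] = max(new_size, 0.01)  # avoid a degenerate box
```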
The command and control cube, or C3 (Grosjean et al. 2002), is a more general-purpose type of 3D widget (non-context-sensitive). The C3 (Figure 9.9) is a 3 × 3 × 3 cubic grid, where each of the 26 grid cubes is a menu item, while the center cube is the starting point. The user brings up the menu by pressing a button or making a pinch on a Pinch Glove; the menu appears, centered on the user’s hand. Then the user moves his hand in the direction of the desired menu item cube relative to the center position and releases the button or the pinch. This is similar in concept to the marking menus (Kurtenbach and Buxton 1991) used in software such as Maya from Autodesk.
Figure 9.9 The command and control cube. (i3D-INRIA. Data © Renault. Photograph courtesy of Jerome Grosjean)
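Implementing the C3 selection amounts to quantizing the hand's displacement from the activation point into one of the 27 grid cells. A sketch, assuming a hand offset vector in meters and an illustrative cell size:

```python
def c3_cell(hand_offset, cell_size=0.06):
    """Quantize the hand's displacement (meters) from the point where the
    menu was opened into a 3x3x3 grid cell, returned as a tuple in
    {-1, 0, 1} per axis; (0, 0, 0) is the neutral center cube."""
    def quantize(d):
        if d > cell_size / 2:
            return 1
        if d < -cell_size / 2:
            return -1
        return 0
    return tuple(quantize(d) for d in hand_offset)
```

On button (or pinch) release, the cell (0, 0, 0) means the hand never left the center cube and the menu is cancelled; any other tuple indexes one of the 26 item cubes.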
9.5.2 Design and Implementation Issues
There are many considerations when designing or implementing graphical menus as system control techniques in a 3D UI. In this section we will discuss the main issues that relate to placement, selection, representation, and structure.
Placement
The placement of the menu influences the user’s ability to access the menu (good placement provides a spatial reference) and the amount of occlusion of the environment. We can consider menus that are world-referenced, object-referenced, head-referenced, body-referenced, or device-referenced (adapted from the classification in Feiner et al. 1993).
World-referenced menus are placed at a fixed location in the virtual world, while object-referenced menus are attached to an object in the 3D scene. Although not useful for most general-purpose menus, these may be useful as collocated 3D widgets. Head-referenced or body-referenced menus provide a strong spatial reference frame: the user can easily find the menu. Mine et al. (1997) explored body-referenced menus and found that the user’s proprioceptive sense (sense of the relative locations of the parts of the body in space) can significantly enhance menu retrieval and usage. Body-referenced menus may even enable eyes-off usage, allowing users to perform system control tasks without having to look at the menu. Head- and body-referenced menus are mostly used in VR; while they may be applied in AR too, for example in an HWD setup, this is done infrequently. The last reference frame is the group of device-referenced menus. For instance, on a workbench display, menus may be placed on the border of the display device. The display screen provides a physical surface for menu selection as well as a strong spatial reference. Handheld AR applications often make use of device-referenced designs: while the user moves the “window on the world” around, the menus stay fixed on the display plane.
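These reference frames differ only in which tracked pose the menu is composed with each frame. A sketch of that dispatch, assuming poses are stored as 4x4 homogeneous matrices and using frame names that follow the classification above:

```python
from dataclasses import dataclass

@dataclass
class Menu:
    reference: str    # 'world', 'object', 'head', 'body', or 'device'
    local_pose: list  # 4x4 offset within that reference frame

def mat4_mul(a, b):
    """Compose two poses given as 4x4 homogeneous matrices (lists of rows)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def menu_world_pose(menu: Menu, frames: dict):
    """`frames` maps frame names to the tracked 4x4 world poses, e.g.
    {'object': ..., 'head': ..., 'body': ..., 'device': ...}."""
    if menu.reference == "world":
        return menu.local_pose        # already a fixed location in the scene
    return mat4_mul(frames[menu.reference], menu.local_pose)
```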
Handheld AR systems often use hybrid interfaces, in which 2D and 3D interaction methods are used in concert to interact with spatial content. These methods are often an attractive option, both because users are familiar with them and because 2D techniques (such as menus) are frequently optimized for smaller screens. Just because an application contains 3D content does not mean that all UI elements should be 3D. Nonetheless, hybrid interaction may introduce some limitations, such as device or context switching.
Non-collocated menus also result in focus switching, since menus are often displayed at a different location than the main user task. This problem can be exacerbated when menus are deliberately moved aside to avoid occlusion and clutter in an environment. Occlusion and clutter are serious issues; when a menu is activated, the content in the main part of the interaction space may become invisible. It is often hard to balance the issues of placement, occlusion, and focus switching. An evaluation may be needed to make these design decisions for specific systems.
Menu usage in AR may be even more challenging—with HWDs, the screen real estate can be limited by a narrow FOV. An even bigger problem is occlusion: even when handheld AR displays offer a wide FOV, placement of menus can still clutter the space and occlude the environment. Thus, AR system controls often need to be hidden after usage to free up the visual space.
Selection
Traditionally, desktop menus make use of a 2D selection method (mouse-based). In a 3D UI, we encounter the problem of using a 3D selection method with these 2D (or even 1D) menus. This can make system control particularly difficult. In order to address this problem, several alternative selection methods have been developed that constrain the DOF of the system control interface, considerably improving performance. For example, when an adapted 2D menu is shown, one can discard all tracker data except the 2D projection of the tracker on the plane of the menu. 2-DOF selection techniques such as image-plane selection also address this issue (see Chapter 7, “Selection and Manipulation”). Still, selection will never be as easy as with an inherently 2D input method. Alternatively, the menu can be placed on a physical 2D surface to reduce the DOF of the selection task, or a phone or a tablet can be used for menus.
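Discarding all tracker data except the 2D projection can be implemented by expressing the tracker position in the menu plane's own coordinate system. A minimal sketch, assuming the plane is given by an origin and two orthonormal in-plane axes:

```python
def project_to_menu_plane(tracker_pos, plane_origin, plane_x, plane_y):
    """Express the 3D tracker position in the menu plane's 2D coordinate
    system, discarding depth: plane_x and plane_y are orthonormal in-plane
    axes, and the result can be hit-tested against 2D item rectangles."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    rel = tuple(t - o for t, o in zip(tracker_pos, plane_origin))
    return (dot(rel, plane_x), dot(rel, plane_y))
```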
Representation and Structure
Another important issue in developing a graphical menu is its representation: how are the items represented visually, and if there are many items, how are they structured?
The size of and space between items are very important. Do not make items or inter-item distances too small, or the user may have problems selecting items, especially since tracking errors can exacerbate selection problems.
Application complexity is often directly related to the number of functions. Make sure to structure the interface by using either functional grouping (items with similar function are clustered) or sequential grouping (using the natural sequence of operations to structure items). Alternatively, one could consider using context-sensitive menus to display only the applicable functions.
Control coding can give an extra cue about the relations between different items and therefore make the structure and the hierarchy of the items clearer (Bullinger et al. 1997). Methods include varying colors, shapes, surfaces, textures, dimensions, positions, text, and symbols to differentiate items.
Finally, AR applications used in outdoor environments will be limited by visibility issues (Kruijff et al. 2010): both the bright conditions and limits in screen brightness affect visibility and legibility of menus. Color and size should thus be chosen carefully. Often it might help to use more saturated colors and larger sizes to increase visibility and legibility (Gabbard et al. 2007; see Figure 9.10).
Figure 9.10 Legibility with different backgrounds in augmented reality. This mockup image shows several frequently occurring issues: the leader line of label 1 (right) is difficult to see due to the light background, whereas label 2 (left) can be overlooked due to the low color contrast with the blue sign behind and above it. Label 3 can barely be read, as the label does not have a background color. While this reduces occlusion, in this case legibility is low due to the low contrast and pattern interferences of the background. (Image courtesy of Ernst Kruijff)
9.5.3 Practical Application
Graphical menu techniques can be very powerful in 3D UIs when their limitations can be overcome. Selection of menu items should be easy, and the menu should not overlap too much with the workspace in which the user is working.
Especially with applications that have a large number of functions, a menu is probably the best choice of all the system control techniques for 3D UIs. A good example of an application area that requires a large set of functions is engineering (Cao et al. 2006; Mueller et al. 2003). Medical applications represent another domain that has large functional sets (Bornik et al. 2006). Both of these can benefit from the hybrid approach: using 2D menus on devices such as tablets. Still, the approach of putting graphical menus on a remote device works only when users can see the physical world. For example, it will be useless in an immersive HWD-based system, except when the tablet is tracked and the menu is duplicated in the HWD. Finally, menus have often been used for symbolic input by showing, for example, a virtual keyboard in combination with a tablet interface to input text and numbers.