Bad UI of the Week: The Menu Bar
The location of the menu bar takes on something of a religious nature when Mac and Windows users come together. When OPENSTEP users are thrown into the mix (assuming that you can still find any), things get even more interesting.
The question of where the menu bar lives has a history as long as the GUI itself. The Xerox Alto didn’t have fixed-position menus at all. Menus were allowed to float around wherever they wanted.
This wasn’t the first appearance of a menu, however. Many text-mode applications had menus, typically at the bottom of the screen, accessed by function keys.
Mac OS and GEM were among the first to define a fixed position for the menu: the top of the screen. This had two advantages. The first is that the space cost was constant and small. The original Mac had a 512 x 342 pixel display (tiny by modern standards), and GEM was designed to run on machines with similar capabilities, so dedicating a single shared strip of pixels to the menu, rather than one per window, mattered. The second relates more directly to usability.
Microsoft Windows, and some UNIX desktops, went in a different direction. They, too, specified a position for the menu, but placed it relative to the window, not the screen.
In the case of the X Window System, this was partly a concession to network transparency and to the fact that the system didn’t have a standard widget set (actually, it had several "standard" widget sets). It was possible to run X applications remotely, and placing the menu at the top of the screen would have meant exporting two windows per application and some complex coordination on the part of the display server.
The lack of a common widget set meant that X didn’t define how menus were created, and so a common menu bar would have required close cooperation between toolkits.
The second advantage of the single menu bar is more obvious in comparison with the per-window approach. Fitts’ Law is a well-known rule in UI design, which gives an indication of how difficult it is to hit a given target with a pointing device.
The law is usually expressed as a formula, but it basically states that the amount of time taken to hit a target is proportional to the (base-2) logarithm of the distance to the target divided by the size of the target along the line of motion.
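For reference, the most widely used modern form (MacKenzie’s Shannon formulation) is:

    T = a + b * log2(D / W + 1)

where T is the movement time, D is the distance to the target, W is the target’s size along the axis of motion, and a and b are constants fitted empirically for a particular user and pointing device.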
What does that mean? The first part is obvious: It takes longer to hit a target that is farther away. The second part refers to stopping time. When you aim for a particular target, you will generally start slowing the mouse down as you approach it. If it’s very small, you might overshoot and have to come back a bit.
If you are trying to hit a single-pixel target on your screen, it takes about as long no matter where the mouse starts: you cover most of the distance quickly and then have to creep toward the target, or else you overshoot.
Fitts’ Law specifies a couple of empirical constants that are used for scaling. They depend on the user and pointing device, but some ballpark figures are available from various user studies. When testing UIs, I use an IBM ThinkPad with a TrackPoint pointing device. When used by me, it gives very high values for these constants and so makes even small improvements immediately obvious.
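As a rough sketch of how those pieces fit together, the Shannon form is easy to play with. The constants and pixel sizes below are illustrative placeholders, not values from any particular study:

    from math import log2

    def fitts_time(distance, width, a=0.1, b=0.15):
        # Predicted movement time, in seconds, from the Shannon formulation.
        # distance and width are in the same units (pixels, say) along the axis
        # of motion; a and b are the user- and device-dependent constants,
        # set here to made-up placeholder values.
        return a + b * log2(distance / width + 1)

    print(fitts_time(200, 20))  # nearby, comfortably sized target
    print(fitts_time(800, 20))  # same size, four times as far
    print(fitts_time(800, 4))   # far away and only a few pixels tall

A device that produces large values of a and b (like the TrackPoint mentioned above) stretches these differences out, which is what makes it useful for spotting small improvements.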
How does this relate to menu position? There is a special case for objects that are arranged along the edge of the screen. There are a few possible behaviors when the mouse attempts to travel over the edge of the screen. It could wrap or bounce, for example, but most GUIs make it simply stop. This means that objects on the very edge of the screen have an effectively infinite target size along the direction of travel from anywhere other than another location on the same edge.
This is a slight simplification: if your motion isn’t perpendicular to the edge of the screen, the cursor can slide sideways along it. You could fudge this by making menus "catch" the mouse, so that the only way of moving between them was to move off the bar and then back onto it, but I don’t believe anyone does.
The result is that the top-mounted menu bar has, for Fitts’ Law purposes, effectively unlimited height, making it much easier to hit than a window-mounted one. Screen corners are even easier, but there are only four of them, so they can’t be exploited as widely.
Obviously, this makes the screen-mounted menu superior, right? Well, not quite. The problem comes from the fact that screens these days are quite a bit bigger than they used to be. This means that the distance to the menu also starts to play a significant part in the calculation of the time.
If you are working in a smallish window on a big screen, then the top of the window is easier to hit than the top of the screen (unless the window is close to the top of the screen). If you run applications maximized, you get the worst of both worlds. Exactly which is better depends on how you use the system.
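To make that concrete, here is the same toy model applied to a smallish window on a large display. Every number below is an assumption chosen for illustration, including the generous effective depth credited to the screen edge:

    from math import log2

    # Index of difficulty, in bits, for two ways of reaching a menu item.
    window_menu = log2(100 / 20 + 1)    # window-top menu 100 px away, 20 px tall: about 2.6 bits
    screen_menu = log2(1200 / 200 + 1)  # screen-top bar 1200 px away, credited with a
                                        # 200 px effective depth for the edge stop: about 2.8 bits
    print(window_menu, screen_menu)

With these numbers the window-mounted menu wins narrowly; slide the window toward the top of the screen, or credit the edge with a deeper effective target, and the ordering flips, which is exactly why neither layout comes out ahead for every user.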
A couple of decades ago NeXT realized that this was a problem, and came up with a very simple solution. Its menu "bar" was a vertical column, not attached to anything. Submenus could be torn off and placed wherever the user wanted as free-floating palettes of options.
By default, the menu lived at the top-left corner of the screen. This wasn’t ideal, so NeXT added a shortcut: right-clicking made the menu appear under the cursor. This had another nice side effect, familiar to RISC OS users, in that common menu actions became subconscious mouse gestures.
There are a number of approaches to displaying the menu bar, and none of them is ideal in all situations. The screen-attached menu doesn’t scale to large screens. The per-window menu is difficult to hit in many common uses. The floating menu encourages clutter, and the mouse-invoked menu is awkward for touch-screen users and gives everyone else no visual cue that it exists.