Let’s take a trip on Mr. Peabody’s WABAC Machine back to 1984 to look at the first Macintosh. The huge change it introduced to the early consumer computer market was the shift from a screen filled with text that you navigated with cursor keys to a graphical display that you navigated with a pointing device. At the time, many thought the mouse was a gimmick.
It was a huge change, indeed: cursor keys and pointing devices are extremely different interface paradigms. However, both limit your interaction with what is on the screen to a single point of contact. For the text interface, it’s the text cursor; for the graphical interface it’s the mouse pointer. No matter what kind of data is on the screen, you can interact with it only at a single point.
That single point of contact, whether mouse- or keyboard-driven, has a significant drawback: travel time. When your cursor is at the bottom of a block of text and you have to hold down the up arrow key to get it to the top line to fix a typo, or when you’re drawing in the middle of the screen and you have to move your mouse to the menu bar at the top of the screen to pull down a menu, the travel time involved takes you away from the current task, possibly hurting creative momentum. This isn’t a usability tragedy, but it is a drawback.
To mitigate the travel-time issue, engineers developed workarounds like the Page Up and Page Down keys, key combinations for traversing text, keyboard shortcuts for menu commands, and keyboards with function keys that could be programmed to perform various actions or multiple keystrokes. So-called “power users” (I’m looking at you, Adam!) adopted these workarounds quickly, and keyboard shortcuts remain in common use today.
Of course, such shortcuts have their own drawbacks: users must remember which function keys or key combinations do what. Even with some standards imposed (such as Command-X, Command-C, and Command-V always being assigned to the menu commands for Cut, Copy, and Paste on the Mac), users must still learn keyboard commands — they are not intrinsically intuitive.
There is one situation in which two points of focus have long existed on the Mac: text editing (and, by analogy, any editing situation that presents content in a linear sequence or timeline, such as audio or video editing). For example, you can select text, move the mouse pointer to a color palette, and change the selection’s color; then, leaving the pointer parked over the palette, you can use the keyboard to move the text cursor and select another range of text, so that a single click changes the new selection’s color as well.
Nonetheless, travel time friction, and the related risk of momentarily losing your place in your content when you move your pointer to perform an action, is still a central part of the Mac experience.
A brief digression: what about multi-touch, Apple’s mouse-less graphical interface introduced in iOS? Multi-touch can reduce travel-time friction, and it can give you multiple points of focus on the screen, but it isn’t a panacea. For example, fingertips don’t save you from losing your place in your content: unless you have transparent hands, the very act of pointing at something hides the thing you are pointing at. Nor does multi-touch eliminate the memory burden of function keys and keyboard shortcuts: in a multi-touch interface, users must remember the difference between a single and a double tap, what two-finger taps do, which apps and which objects in those apps respond to 3D Touch, and so on.
The Touch Bar doesn’t completely solve the travel-time and loss-of-context drawbacks inherent in the traditional Mac experience, but it does address some of the problems introduced by previous workarounds. Take function keys: the Touch Bar’s dynamic, context-driven display reduces the memory burden they impose, assuming the controls it presents are well labeled.
More importantly, the Touch Bar essentially gives you a second point of focus for manipulating your content. It offers not just contextual function-key-like buttons but also dynamic controls such as sliders. You can, like a pianist or guitarist, play your Mac with two hands. Professional video editor Thomas Grove Harper describes the experience:
The first revelation for me was the potential of sliders. Gradual, precise and fast inputs. For years we’ve had single mouse inputs on a graphical user interface. Over time we’ve added more buttons and scroll wheels, trackpads with gestures. The Touch Bar takes this step further by allowing multiple inputs at the same time and combines well with the trackpad. The more I’ve used it the more I’ve replaced certain keyboard shortcuts. […] It works, it’s faster, and it’s more productive.
Despite those glowing words, it is too early to come to any conclusions, pro or con. Developers are only gradually adapting their apps to take advantage of the Touch Bar, and it will take time for them to figure out all the ways in which they can exploit this new interface paradigm.
In terms of both its potential and usability, today’s Touch Bar is roughly where the mouse interface was on the 128K Mac. In addition, Apple offers it only to those who buy a MacBook Pro. As such, anyone who calls the Touch Bar “revolutionary” is overreaching. For a revolution to stand a chance, Apple needs to make the Touch Bar a standard part of the Mac experience on desktop Macs as well. Otherwise, it risks remaining the gimmick that the 128K Mac’s mouse turned out not to be.