
Mirror, Mirror in Your Brain, Can You Help the Computer Explain?

For many years, I have tried to understand how otherwise smart people, including professionals who synthesize massive amounts of information in their day-to-day work, cannot seem to master a desktop user interface. In contrast, many of these folks have no trouble with an iPad. The iPad is the first device I’ve seen that you can hand to nearly anyone and have that person master basic functions right away.

An older neighbor has an ancient dial-up mail appliance that has started to sputter. Her stepson attempted to move her to a Windows laptop, but she would have none of it. That’s not a critique of Windows: I wouldn’t have tried to get her on a Mac, either. But she became interested in the iPad, so her stepson took her to an Apple Store for a demo. I then sold her an original iPad I no longer needed — her stepson had already set up broadband and Wi-Fi in her house — and walked her through using it.

Though she made several written notes, and claimed she would never remember what did what, I was confident that the iOS experience would work for her. Sure enough, later in the day, she sent me an email from the iPad. But it wasn’t simplicity that allowed her to use an unfamiliar device so quickly. What’s the key? I think I finally have it, and would love your opinion, dear readers.

Daniel Goleman’s book, “Emotional Intelligence,” introduced me to the concept of mirror neurons: a specific kind of neuron that fires both when you perform an action and when you observe someone else performing the same action. It’s speculated, and some evidence has accrued, that these neurons enable us to build a model in our heads of how other people act, a kind of internal simulation. This would explain how we can hold conversations (and arguments) in our heads with people we know, and anticipate their responses to our points. (There is some skepticism about mirror neurons, but I’ll run with the most convenient current explanation.)

This led me to examine my own use of a computer. When I work with a graphical program, I model its expected behavior as I interact with it. I can imagine what its response will be to nearly any action: essentially a running simulation of the program’s user interface operating inside my head alongside the real one. As I learn the interface better, much like getting to know a person, my internal model adjusts itself to match the real interface more closely.

You can also liken this to how programmers think. I consider myself at the low end of the professional programmer scale, despite thousands of hours (though not many thousands) spent at the task over the last 20 years. But the mark of a programmer is being able to run a program in her head. Such people have a built-in C or Java compiler, or a Perl or PHP interpreter, in there. They may not be able to run loops over a million lines of unique input, of course, but they know how the loop will function and what manipulations will be performed on the inputs. After all, you can’t write code effectively by punching it into an editor and wondering what will happen.
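To make that concrete, consider a trivial loop like this one (written in C purely for illustration; any small language would make the same point):

    #include <stdio.h>

    int main(void) {
        int total = 0;
        /* Sum the squares of 1 through 5. */
        for (int i = 1; i <= 5; i++) {
            total += i * i;
        }
        /* A programmer "runs" this loop mentally and expects 55. */
        printf("%d\n", total);
        return 0;
    }

No one needs to compile that to know it prints 55: the loop walks from 1 to 5, adds up the squares, and prints the total. Running that in your head is the same kind of internal simulation I’m describing for user interfaces.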

The folks I know who can’t master the traditional graphical interface seem to lack this internal simulation of what the computer will do in response to input — they’re not using the skills on which they rely when interacting with people to anticipate future behavior. For them, it’s like playing a game of Whac-A-Mole where they don’t know where something will pop up next, but they’d better hit it. Or, it feels like a sort of perverse Skinner Box experiment. Without an internal model, every response from the interface is a surprise; the user can never anticipate and thus interact fluidly with the interface.

If you’ll accept this view so far, it would also explain why some interfaces drive us bonkers. (I’m not naming any names.) It’s not just that the unfamiliar interface offers a different or more difficult way of doing things; most modern operating systems, desktop or mobile, let you carry out the same kinds of tasks with more or less the same number of steps and the same ease. But if you move among interfaces, and don’t use an unfamiliar one much, it’s as though the person you’ve spent your days with, interacting via keyboard, has been replaced by some weirdo.

Think of it as if you came to work one day, and you had this conversation with a person who was sitting in the chair of a co-worker with whom you’d shared an office for years.

“Hi, Bill!”

“Who are you?”

“I’m John, your new officemate.”

“What happened to Justin?”

“He’s fine, but he’s out for a bit. I’ve been trained in all the same tasks, and I have all his manila folders right here, and I’ve been brought up to speed.”

“Well, okay. I’ll miss Justin a lot. Let’s get to work on the Wilson file.”

“OK, if you’ll just tell me which of these folders it’s in.”

“That one over there.”

“Which one? This one?”

“No, no, that one!”

“No need to get huffy. Now, if you’ll hand me a red pen, I can start to mark up the paperwork.”

“Justin always used a blue pen. And, anyway, I don’t understand those notations you’re using.”

“I can teach them to you. Almost everyone else in the office uses this kind of mark-up. Won’t take you more than a few days to get used to them, and maybe a few weeks to memorize them. Anyway, it’s time for lunch.”

“Sushi?”

“Never touch the stuff. But I know you’ll love a hoagie. In fact, I insist.”

Over time, one of two outcomes is likely. Either you’ll get used to John and his foibles (as you think of them), and you’ll figure out how to work as efficiently as you did before with him, and grow to love hoagies. Or you’ll find John so maddening and inscrutable that you threaten to quit unless he’s transferred to a new position.

Where does the iPad fit into this theory? The iPad is more of a blank slate than any desktop interface, yet because it conforms to so many physical, real-world conventions, it requires you to build far less of an internal model to interact with it.

Consider Apple’s use of gestures. They aren’t exactly intuitive, because although you don’t need to be taught to move your fingers, the specific actions aren’t the sort of thing we do every day. But they do mimic our expectations of a physical experience, relying on existing experiences as the base on which comfort with the interface is built.

The iPad’s insistence on full-screen apps shouldn’t be dismissed, either. There’s no management of items, but instead just a canvas on which activity occurs. We at TidBITS have talked before about how the iPad becomes the app you’re using. But that’s almost literally true in the mind. If you can play a game by using gestures, and don’t have to manage a keyboard, a file system, or a desktop on which you might accidentally click, that’s not just less to learn, it’s less to simulate.

This theory has just started to percolate through my fevered brain, and I wonder how you work. When you interact with a graphical interface, is it your friend or foe? Can you anticipate your Mac’s or iOS device’s every move? I’m betting that’s true of those who read TidBITS, but if you get a chance to ask someone who has more trouble with traditional interfaces, see if you can determine whether this inability to anticipate future behavior lies at the heart of the problem.
