Have We Entered a Post-Literate Technological Age?

Not long ago, Google produced a video that’s making the rounds on the Internet. In it, a Google employee asks people in Times Square in New York City a series of questions, such as “What is a browser?”, “What browser do you use?”, and “Have you heard of Google Chrome?” (Chrome is Google’s new Web browser; it’s available for Windows and in pre-release test versions for the Mac.)

Among the geek set, the video has gotten a lot of play because most of the people in the video – who appear to be functional adults and who use the Internet regularly – come off as highly clueless. According to the video, only 8 percent of people queried that day knew what a browser is.

The video is clearly not a scientific study, and suffers from horrible methodology. It’s likely, for instance, that simply asking “What is a Web browser?” would have produced better results, and the middle of Times Square is undoubtedly not where most people are thinking about the names of programs on their computers. But let’s leave aside such criticisms for the moment.

What’s Your Browser? — Instead, let’s take the results at face value and consider their implications. What does it say about the technological world in which we live that 92 percent of the people asked could not name the program they use to access the Web? If other statistics are to be believed, browsing the Web is the primary use of computers today, which means these people couldn’t name the program they use more than any other.

Worse, some of the answers in the video reveal that the respondents don’t even know what a program is. A number of them identified their browser as “a search engine” or as “Google.” When asked which browser he used, one guy said “the big E,” undoubtedly meaning Microsoft Internet Explorer, whose icon is a stylized lowercase letter E.

When the best someone can come up with is a vague recollection of a program’s icon, it says to me that we’ve entered a “post-literate” technological society, one in which people have lost not just the ability to read and write about a topic, but also the ability to speak about it, all while retaining the ability to use it.

As someone who earns a living crafting text to help people learn how to use technology, I found myself profoundly troubled by Google’s video. After all, if someone doesn’t know what browser they use, or even that a browser is a program on their computer, how could I possibly expect them to be interested in buying my company’s “Take Control of Safari 4” book (written, with infinite care, by the estimable Sharon Zardetto)? How could they even learn of its existence, if they had no idea that Safari is a Web browser or that they were using Safari?

(One concern that I don’t explore further in this article is what a post-literate technological society means for the marketing of technology itself – will even technology marketing be forced to rely solely on pretty pictures and emotional appeals? In fact, are we already there? Apple’s “I’m a Mac” ads help customers identify with the actor playing the Mac but give little solid information, and Apple conceals many technical specifications about the iPhone.)

But perhaps I’m barking up the wrong tree, and Google’s video in fact shows that we’ve taken great technological strides. TidBITS editor Glenn Fleishman, when we were discussing the video, suggested that it’s a good thing that the Web browser has become so ubiquitous that people need not know what it’s called to use it effectively.

(Linguistically, this same devolution has happened with the Web itself. Although it’s TidBITS house style to capitalize “Web” – a proper noun that’s a shortening of “World Wide Web” – it’s commonplace to see even professionally edited publications lowercase the word, thus de-emphasizing the fact that it’s a unique thing. I think they’re wrong: “Web” should always be capitalized, as should “Internet.”)

From a usability stance, I think I agree with Glenn – it’s a good thing that using the Web has become so easy that a myriad of people can do so without even knowing the name of the tool they use to access it. Most people just use the browser that comes bundled with their computer, and despite the issues with Microsoft Internet Explorer over the years, Firefox has garnered only a bit over 20 percent of the browser market since 2004 – largely from the small subset of people who know what a browser is.

On a platform like the iPhone, it’s even easier to see this trend toward obscuring the identity of the browser. Although Safari is the iPhone’s Web browser, and its icon is clearly named, applications like Twitterrific can display Web content internally, and others, like Mail, can open a Web link in Safari without ever informing you that Safari is displaying your page. It would be difficult to quibble with someone who didn’t realize that their iPhone browser was Safari, when in fact, much of the time they would be viewing the Web via some other app that piggybacks on top of OS X’s WebKit core.
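To see just how thoroughly the browser can disappear, consider this rough sketch in Swift – my own illustration, not anything Apple or Twitterrific ships, and it uses the modern WKWebView API rather than what existed in 2009 – of an app that displays a Web page internally. Nowhere does the user encounter the word “browser,” let alone “Safari.”

```swift
import UIKit
import WebKit

// A hypothetical view controller (names are mine) that shows a Web page
// inside an app, the way Twitter clients and many other apps do. From the
// user's perspective, this is just "the app," not a browser.
class InAppWebViewController: UIViewController {
    private let webView = WKWebView()

    override func viewDidLoad() {
        super.viewDidLoad()

        // Fill the screen with the WebKit view.
        webView.frame = view.bounds
        webView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(webView)

        // Load a page; the URL is just an example.
        if let url = URL(string: "https://tidbits.com/") {
            webView.load(URLRequest(url: url))
        }
    }
}
```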

Tied up in all of this is the fact that if what’s bundled with your computer or phone just works, you don’t need to learn much more. Dissatisfaction is the mother of exploration – only if Safari or Internet Explorer isn’t meeting your needs do you have much impetus to learn about and switch to Firefox. So the better technology works, the less we’ll learn about how it works. I can’t say that’s entirely a bad thing.

When the Thing Breaks — But I remain troubled by this post-literate inability to talk about everyday activities and the tools used to perform them, using the proper nouns that are not only generally agreed-upon by those in the know, but with which the graphical representations of those tools are clearly labeled. What happens when something goes wrong, and such a person can’t connect to the Internet at all? Can you imagine the tech support call?

"Hi, this is tech support. How may I help you?"

"I can't get on the Google."

"OK, what browser are you using?"

"I told you - Google."

"Let's step back for a second. What program are you running on your computer to access the Web?"

"I don't know - I just Google when I want to find something."

"Perhaps we should go a bit further back. What icon do you click on when you want to use Google?"

"The picture? It's blue and kind of round, I think."

"OK, that's probably Internet Explorer. Can you load any Web sites other than Google?"

"If I can't get on Google, how can I load any other Web sites?!"

I could draw this out further, but it’s not far-fetched (TidBITS staffer Doug McLean confirmed that my contrived dialog was painfully reminiscent of tech support calls he took in a previous job). In essence, the caller and the support rep don’t share a common language. They may both be speaking English, but that’s as far as it goes, and as soon as domain-specific words like “browser” come into play, communication breaks down. A good support rep would undoubtedly adjust his questions upon realizing that there’s a terminology barrier, and like Captain Kirk meeting an alien, would attempt to build up some shared terminology based on visual appearance before attempting to solve the problem.

Generational Problem Solving — If I asked you to tell me something about the caller in my fabricated script above, you might fall back on stereotypes and describe the caller as being elderly, or at least as someone who didn’t grow up with technology and therefore has come to it, perhaps grudgingly, later in life. But what if I told you it could be a college student?

My neighbor Peter Rothbart teaches music at Ithaca College, and he’s been noticing a disturbing trend among his students. Although they’re capable of using the digital music software necessary for his courses, he says that many of them have trouble with the most basic of computer tasks, like saving files in a particular location on the hard disk. Worse, if something does go wrong, he finds, they have absolutely no idea how to solve the problem.

These aren’t the sort of kids who are befuddled by high school – they’re students at a well-respected institution of higher education. (It’s the alma mater of Disney CEO Robert Iger, for instance.) No, they’re not computer science majors, but they’re not being asked to program, just to use off-the-shelf music software and perform commonplace tasks. And those commonplace tasks turn out to be things they have apparently never had to do before and lack the skills to figure out on their own.

Could this inability to solve a problem with a device with which they are otherwise familiar be a result of losing some ability to talk about it? I wouldn’t go so far as to say it’s impossible to troubleshoot without terminology, but it’s less radical to suggest that troubleshooting will become more difficult without being able to communicate effectively with people who are experts in the field.

Not all that long ago, when adults had trouble getting something working on a computer, they would sarcastically say that they needed a teenager to explain it to them. That was largely true of those of us who were teenagers in the 1980s and 1990s, but if Peter Rothbart’s experience is at all representative, today you’d be better off finding a 30- or 40-year-old geek to help.

Don’t get me wrong – I’m not saying that all young people are incapable of solving technical problems or going beyond the basics. My friend Dave Burbank, whose full-time job is as a fireman in the City of Ithaca, is also a serious geek known for taking hundreds of photos on his kids’ class trips, posting constant updates via Twitter, and updating a photo Web site for the trip before turning in each night. His 15-year-old son Istvan is currently a 3D animator at Moving Box Studios in Ithaca and is perfectly capable of maintaining a technical discussion on the evolution of backup media and other such geeky topics.

In other words, there will always be geeks, and in my mind, that’s a darn good thing. The technological sophistication of those people of my generation (I’m 41 now) who were interested in technology created the meme that young people were fluid with technology. But what we all missed was that being fluid with technology doesn’t mean you understand how it works or can fix it when it breaks. Being able to dash off text messages on a mobile phone demonstrates fluidity; being able to troubleshoot a dead Internet connection down to a corrupted preference file or flaky cable demonstrates understanding.

So what will most members of society do when something on their computers or smartphones fails to work? Let’s not pretend that problems won’t happen – technology may have become more reliable over time, but the rate at which things go wrong even for undemanding users is still shamefully high.

Just recently, my father called because his iPod wouldn’t show up in iTunes. After some back and forth, I suggested that he reset the iPod, and when he went to use it, he realized it was indeed entirely frozen. A hard reset brought it back to life and resolved his problem, but had he been on his own, it’s possible that he – or at least someone less experienced than he is – would have concluded it was broken and bought another one.

This isn’t a new concern. In 1909, E.M. Forster wrote a piece of early science fiction, “The Machine Stops,” imagining a future in which face-to-face contact was considered bizarre, humanity lived underground, and the “Machine” met all of humanity’s needs. Of course, one day… the machine stopped. More recently and amusingly, consider the Pixar movie “WALL-E.”

Cars and Computers — The obvious analogy in today’s world, and one that several people have suggested in response to our discussions, is the car. At one time, knowing how to keep a car running was a kind of patriarchal rite of passage. Failure to monitor oil levels, radiator fluid, and the like could lead to a dead horseless carriage.

Few people know how cars work these days, and even those of us who do have a basic understanding of them can’t really work on a modern car. If the car stutters when accelerating, or sometimes won’t start, most of us simply take it in to the repair shop and get it fixed. Problem solved with the application of money, and of course, since cars work relatively well these days, much less monitoring is needed. When was the last time you checked your car’s fluids?

Like so many automotive analogies, this one sounds good but suffers under scrutiny. In part, repairing cars has become a specialty not so much because intelligent people couldn’t understand what’s wrong or figure out how to troubleshoot it, but because the training and equipment necessary to diagnose problems and effect repairs have become highly specialized. Gone are the days when you could fix a car with a few screwdrivers and a set of wrenches; repair shops now download data from the car’s onboard computer just to diagnose a problem.

But the more serious problem with the analogy is that cars are single-purpose machines – they do one thing, and they do it moderately well. Thus, the type of problems they can suffer, while troubling, frustrating, and sometimes seemingly inexplicable, are still relatively limited in scope, more like a household appliance. How often do you have to check the inner workings of your washing machine or refrigerator?

In contrast, computers are general-purpose machines that can perform a vast number of wildly different tasks, such as browsing the Web, reading email, writing a book, developing a company budget, tracking a database of customers, composing music, editing video, and so on.

We have up-and-coming geeks like Istvan Burbank, but even bright young men like Istvan have their limits. While I’d happily ask him to fix a Mac that’s not booting, I’m not sure he’d have any idea how to help if I showed him a PDF where the text on some pages appeared darker and bitmapped when viewed in certain PDF readers (even Adobe hasn’t been able to fix that problem reliably for me). There’s a limit to how much any one of us can learn, but there’s no limit to what a computer can do.

In a way, this is an odd situation for those of us who grew up with the personal computer. Before Apple, before the IBM PC, we had mainframes and minicomputers that we interacted with via dumb terminals. You couldn’t do all that much, and you were sharing resources with many other people, but you also didn’t have to worry about things going wrong as much, because when they did, the computer operators would fix them.

They were the gatekeepers, the wizards who controlled access and could say who was allowed to do what. Personal computers were supposed to democratize computing so anyone and everyone could do their own work. While that’s come to pass in some ways, it seems to me that we’ve returned to the days when you need a wizard to solve problems or do anything beyond the norm. It’s a somewhat uncomfortable situation, since those of us who grew up with personal computers are finding that we’re the new wizards.

Technological Illiteracy — So how did we get here? I’d argue that Apple – and we Macintosh users – are perhaps more to blame for this state of affairs than any other group. After all, no one has championed usability like Apple, with the Mac’s vaunted ease-of-use. For years, many Mac users scoffed at manuals. “Why would anyone need a manual when the program is so easy to use?” they’d ask. It was a fair point, for the users of the time, who were highly interested in the technology, well versed in how it actually worked under the hood, and amenable to poking and prodding when things didn’t go right.

But then we got our wish, and ever more companies started writing software that was easy enough for most people to use, at least at some level, without reading a manual. That was the death of documentation, a phrase I coined more than 10 years ago (see “The Death of Documentation,” 1998-05-04). Of course, it was really the death of the manual, and technical books have remained popular in part because manuals disappeared (how else could David Pogue have made a mint on his Missing Manual series?).

Even so, back when I started writing technical books in the early-to-mid 1990s, the average computer book sold about 12,000 copies. Today, despite a vastly larger audience (though with much more competition), 5,000 copies is considered acceptable.

I’d argue there was a more insidious effect from the loss of manuals – it caused an entire class of users to become technologically functional while remaining technologically illiterate. When I asked my mother-in-law, Linda Byard, what browser she used, she became somewhat flustered and guessed at Outlook. This is a woman who uses the Web fluidly and for all sorts of tasks far more sophisticated than simply browsing static Web pages. And yet, the fact that she used Internet Explorer to do so escaped her.

As the conversation proceeded (and keep in mind that my father-in-law, Cory Byard, helped design personal computers for NCR back in the 1980s and now consults on massive database projects for Teradata – Tonya didn’t grow up in a technologically backward household), it came out that Linda had stopped reading about how to use technology when manuals gave way to inferior online help.

She didn’t stop learning how to use various programs, but without any sort of formalized instruction or written reference, she lost the terminology necessary to talk about the technology she was using. Of course, she had Cory around to fix anything that went wrong, and she said that the same was true of all her peers too – there was always someone technologically adept in the family to deal with troubles.

Although it’s harder to pin this loss of technological literacy on the lack of manuals when looking at schoolkids, the problem isn’t necessarily being addressed there either. When my son Tristan was in second and third grade in the public schools in Ithaca, NY, the closest he came to being taught computer skills was typing (not a terrible idea, but tricky for kids whose hands aren’t large enough to touch-type properly) and PowerPoint.

Although some level of presentation skill is certainly worthwhile, why would you have second graders focus on software that’s guaranteed to be different (if not entirely obsolete) by the time they’re in college?

I’d argue that some of the basics of technology – the concept of a program as a set of instructions and the essentials of networking – would be both more compelling for kids and more useful for understanding the way the world works later in life.

When TidBITS contributing editor Matt Neuburg tried to teach a group of his friends’ kids REALbasic one summer, he found himself frustrated at almost every turn – they lacked the conceptual underpinning that they could make the computer do something. And more important, they didn’t care, since they were accustomed to technology just working. It wasn’t until he got them to draw a stick figure and, by changing the location of its parts repeatedly, make it walk across the screen, that one of them said, “Hey, this must be how my video games are made.”
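For readers who would rather see that lesson than imagine it, here’s a minimal terminal sketch in Swift – my illustration, not Matt’s actual REALbasic exercise – of the same idea: a program is just a list of instructions, and redrawing a figure one position further along each time through a loop is all the “animation” there is.

```swift
import Foundation

// Make a character "walk" across the terminal by redrawing it repeatedly.
// (The plain "o" stands in for the kids' stick figure.)
let steps = 40
for x in 0..<steps {
    let frame = String(repeating: " ", count: x) + "o"
    print("\r" + frame, terminator: "")  // \r returns to the start of the line
    fflush(stdout)                       // push this frame to the screen now
    Thread.sleep(forTimeInterval: 0.1)   // pause so the eye perceives motion
}
print()
```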

And networking? No, you don’t need to know how it works to use the Internet, but isn’t it wondrous that an email message sent to a friend on the other side of the globe in Australia is broken up into many small pieces, shuttled from computer to computer at nearly the speed of light, and reassembled at its destination no more than seconds later? Wouldn’t it be fun to act out a packet-switched network with an entire class of second graders and the pieces of a floor puzzle? Or at least more fun than PowerPoint?
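For the grown-ups, the same act-it-out exercise fits in a few lines of code. This toy Swift sketch – mine, not from any curriculum, and vastly simplified compared to real TCP/IP – splits a message into numbered pieces, scrambles their delivery order, and reassembles them at the destination:

```swift
import Foundation

// A toy model of packet switching: chop a message into small numbered
// packets, "deliver" them out of order, and reassemble them by sequence
// number at the far end. Names and sizes here are arbitrary.
struct Packet {
    let sequence: Int
    let payload: String
}

let message = "G'day! This message crossed the globe in little pieces."
let pieceSize = 8

// Chop the message into numbered packets.
var packets: [Packet] = []
var start = message.startIndex
var sequence = 0
while start < message.endIndex {
    let end = message.index(start, offsetBy: pieceSize,
                            limitedBy: message.endIndex) ?? message.endIndex
    packets.append(Packet(sequence: sequence, payload: String(message[start..<end])))
    sequence += 1
    start = end
}

// In transit, packets can arrive in any order...
let delivered = packets.shuffled()

// ...so the receiving computer sorts them by sequence number and rejoins them.
let reassembled = delivered
    .sorted { $0.sequence < $1.sequence }
    .map { $0.payload }
    .joined()

assert(reassembled == message)
print(reassembled)
```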

Luckily, this lack in the public education system isn’t uniform. Glenn Fleishman’s son Ben is about to enter a public elementary school in Seattle, where the beginning curriculum teaches kids about opening, saving, and printing files; later, it moves to task-based – not program-oriented – computer projects. That’s much better.

But I digress.

Illiteracy Stifling Innovation? — My more serious concern with our society’s odd fluency with a technology that we cannot easily communicate about is that it might slowly stifle innovation. Already we’re in a situation where browser innovation is almost the sole province of Apple and Microsoft, with contributions from Mozilla, Google, and maybe Opera.

Iterative changes from the incumbents can be worked in, since everyone will be forced to accept them, but does it become harder to convince most people to try a ground-breaking new technology because it’s different, because it’s talked about using strange new terminology, and perhaps because no paradigm-shifting new technology can by definition be so easy to use that it doesn’t require some level of training? I fear that might be the case.

In the dawn of the computer age, the stakes weren’t as high and the market wasn’t as large, so I’d suggest that companies were more likely to take risks on innovative technologies that might appeal to only a small subset of the population. Today, with everyone using technology, I suspect that business plans and funding proposals all assume a large potential audience, which in turn causes the ideas to be vetted more on their business chances than their technological innovation.

Put another way, there have always been technological haves and have-nots, but since there was no chance of selling technology to the have-nots, technology of the past was less limited by the literacy of its audience. Now that the technologically illiterate are not just buying technology but are its primary market, that reality has to be shaping which ideas get funded and developed.

Plus, think back to the point about dissatisfaction being the mother of exploration. We geeks may be willing to belly up to the new-technology feeding trough because we’re never satisfied. But once technology reaches a plateau where it works well enough, and if this lack of technological literacy is indeed a general concern, spreading new technological successes to the population as a whole may become all the more difficult.

I’m fully aware that my musings here are largely hypothetical and based on anecdotal evidence. But I think there’s a new technology on the horizon that could serve as a test of my theory that anything sufficiently innovative will face an uphill battle due to the technological illiteracy of the user base: Google Wave.

For those who didn’t see Google’s announcement of Google Wave (we didn’t cover it in TidBITS at the time because it was a technology announcement, not a service that people could use), it’s a personal communication and collaboration tool that’s designed to merge the strengths of email, instant messaging, wikis, and social networking services. (You can read more about it at Wikipedia.)

On the plus side, Google Wave has the power of Google behind it, and Google could potentially merge it into Gmail, thus introducing it to 146 million users nearly instantaneously. But Google Wave will undoubtedly be quite different from Gmail, and will require a learning curve. Will that hamper its adoption, since email and instant messaging and other services work well enough that people aren’t sufficiently dissatisfied to learn about and try Google Wave? Only time will tell.
