
Pondering the Social Future of Wearable Computing

The battle for our technological hearts and minds is moving to our wrists and faces. Everyone seems to be racing to put a computer in your watch, while Google is leapfrogging several body parts to put one on your face. Meanwhile, the media have deemed this a hot story, so headlines are rife with predictive analysis and rumor, both of limited value.

Naturally, part of the breathless coverage of these devices — most of which still don’t exist — is the steady stream of pundit predictions about which device will be The One to Rule Them All, as well as which features “must” be included in order to provide their manufacturers Market Dominance for All Time. But the fact is, no one has any clue which devices will be successful, or if any of them will.

That’s because it’s not the companies that will decide what our next toys will be. It’s not even the early adopters who will, no doubt, purchase them in droves.

It’ll be everyone else.

Normal Is Deeply Weird — Consider your smartphone for a moment. I would say, “Take out your phone and look at it,” except that something like half of you are already doing just that while reading this.

If you’re older than 20, you probably remember a time when cell phone minutes were precious and sparsely used, like “long distance calls” in black-and-white movies. If you’re north of 30, you’ll remember when even having a cell phone indicated that you were fantastically wealthy, perhaps through unethical means. Compare that with 2013; while I’m not actually aware of a cereal box that contained an Android phone as the free prize, I can’t discount the possibility. (However, Entertainment Weekly did include something of the sort in an advertisement.)

Unlike most technological changes, we can actually date when this one began: 29 June 2007, when the first iPhone shipped. Before then, it was extremely uncommon for the average Joe to have the Web in his pocket, and I can say that with assurance because I was one of those outcasts. Prior to the modern era of cellphones, you were considered irredeemably weird if you carried around a Sony Ericsson P900, knew how to send email with a Sony Ericsson T68i, or attempted to view Web pages on a Palm VII. (Arguably, attempting to surf the Web on a Palm VII also made you irretrievably masochistic.)

With the release of the iPhone — and with the equally important availability of Android phones at much lower prices — what was once viewed as somewhere between “bizarre” and “scary” has become the norm. (And if you don’t believe me about scary: the first time I held a conversation on a phone headset, a woman at a supermarket who couldn’t see the wire made eye contact and then literally ran away from me.) The division between “bizarre” and “normal” is the demilitarized zone between the early adopter and the general public. It’s commonly understood that early adopters are willing to pay more money and jump through many technical hoops attempting to get their bleeding-edge, wicked-cool devices to work. More interestingly, they’re also willing to look rather silly doing it.

That is, using technology in public is a social action. An early adopter has to be willing to attract attention, and needs the mindset that “they’re interested in my technology” rather than “they think I’m a schmuck,” regardless of what the truth of the situation might be. Any technology that makes the move from early adoption to general usage has to normalize the behavior of using it.

But the fascinating thing about the shift from “outlier” to “normal” is that normal is a very wide umbrella, which has many holes in it that allow the rain to come in.

Dinner with Your Smartphone — You’re out to dinner with a companion: perhaps a spouse, coworker, friend, or date. Naturally, you both have smartphones, or at the very least, some handheld gadget that connects you in ways that would have been unimaginable in 1993.

Many people do something, consciously or not, with their smartphone when they sit down. If it’s in a back pocket, it has to be moved so as not to test the tensile strength of Gorilla Glass. If it’s in a front pocket, likewise to avoid denting a hipbone. Phones in purses may be shifted to avoid theft or scratches; phones in a jacket pocket may be self-consciously removed so as not to spoil the jacket’s drape.

What do you then do with the phone? Do you turn off the ringer? Turn on the vibrator? If you set it down on the dinner table, is it face down or face up? If the phone beeps, rings, or rattles over the course of the meal, what’s the acceptable response to its clamor for attention?

Here’s the thing: it’s a trick question. Your answers to the previous questions have nothing to do with the phone, and in most cases, very little to do with your preferences. They were answered earlier, when I glossed over with whom you were eating and where you were. The social norms of two people out on a date are entirely different from those of the same couple five years later; two coworkers behave differently with each other than one of them does when eating with her boss.

And these norms have dozens of permutations and exceptions. It’s nearly universal that when a phone unexpectedly rings or buzzes loudly, it’s followed by the dive for the button that silences it. But when do you glance at the screen to see what caused the alert, and when don’t you? If you’re having dinner with an old friend or significant other, these might be explicitly negotiated. More commonly, they’re simply assumed, as are the reactions of the people around you. Expect disapproving stares when your phone rings at a funeral or during a movie; on the other hand, if you’re a surgeon, and known to be a surgeon, you’re likely to get a pass.

In the parlance of anthropologists, the culture surrounding mobile phones is “thick”: a set of rules and norms that vary depending upon who you are, whom you’re with, where you are, and dozens of other factors that are all instantaneously processed and coded.

These rules extend to how people near you are allowed to respond. People may stare at you if you talk constantly at Starbucks, but it’s rare that they’ll take your phone and dunk it in your latte. Talking constantly on a cell phone is considered rude but not transgressive; compare that with watching an explicit episode of “Dexter” or an adult movie on a MacBook in public, which might inspire a stronger reaction.

The amazing thing isn’t that we’ve developed these rules; similar rules exist for everything you do during a meal, ranging from the proper use of straws to how you might address your waiter. The amazing thing is that we’ve developed such a thick set of rules for handheld Internet computers after only five years.

That’s the buzz saw that smartwatches and Google Glass have to avoid if they’re going to become successful. It’s not about the technology, and it’s not about the features; it’s about the public act of using them, and whether and how social norms will shift to accommodate them.

Screens 3.0 — Smartwatches are a technology with which nearly everyone has some experience: a wrist-attached gizmo that displays some interesting bit of information, and might demand your attention with a pre-set alarm. That’s a decent description of dumb watches for the past few decades. Watches are also a cultural artifact, with rules about how and when they can be used — ask anyone who has been caught glancing at one during an event that someone else thinks should be captivatingly interesting.

That you may be smiling at the memory of being this person, or frowning in annoyance at the time someone else accidentally betrayed boredom, shows how universal this cultural trope is. Glancing at a watch signals impatience and boredom, but only because of what a watch does. You only look at a watch when you’re counting the minutes; everyone else knows that there’s no other reason to look at a watch if you’re not ten meters under water. When a watch might display very different information, the universal understanding of this gesture will change — but much more slowly than the technology does.

Picture that same dinner in three to five years. Your dinner companion glances at his watch — to check the time? Check a Facebook update? Read a text message? Unlike a buzzing cell phone, a watch can be expected to be right next to the skin, so its vibrator may be low-powered and undetectable to anyone but the wearer. You might not know if your companion is choosing to do this or responding to an alert. Does that make a difference? Does the content of the alert make a difference?

Unlike smartphones, watches are difficult to remove. That is, you can put a phone away, but have you ever taken your watch off in public? Most people don’t take them off at all; some folks even buy waterproof watches so they don’t need to remove them in the shower. Presumably, a smartwatch will allow you to silence its buzzers and alarms just as a smartphone does, but since a watch can alert you more covertly than a phone, turning it off is similarly a more private act. Wear a smartwatch, and anyone with you is likely to have no idea how connected you are — and may draw social conclusions about you based on the wrong inferences.

If a watch is a potential minefield, let’s move the smartwatch to your face.

Google Glass is possibly the worst nightmare of people who believe that their importance is measured by the amount of attention they’re receiving — which is to say, the entire human race. Glancing at a watch may indicate boredom, but reading a smartphone screen screams, “what’s on here is more interesting than you are.” That’s why we have so many social cues that dictate when smartphones are to be left unloved in our pockets. Glass removes not only the need to physically handle a phone to be electronically entertained, but also the cues that allow people around us to know what we’re doing.

Then there’s the issue that Google Glass includes a video camera and microphone that can be activated by winking at it. Many concerns have been raised by the privacy community (of which I consider myself a member) that Glass can record audio and video when it would be inappropriate to do so. Google Glass supporters have fired back, stating that such surveillance can already be accomplished with existing technologies. And even I have given instructions for creating such devices in “iOS Hearing Aids… or, How to Buy Superman’s Ears” (8 February 2011).

But that’s the point. If you own a spy camera disguised as a pen or a hat, you’re either James Bond or you’re a creep. There’s no socially acceptable middle ground that allows you to walk around in public with a spy pen; just having that technology classifies you as a certain kind of person, and one who should probably be kept away from children and cats. On the other hand, everyone has that technology in their phone; you only get classified as a creep if you do creepy things with it. Putting video cameras into every phone not only gave more people the opportunity to be creepy — it also moved the bar for the definition of creepy, as merely owning the technology is no longer sufficient to qualify you.

Why is a spy pen different from an iPhone? I suggest that it’s because the spy pen has only one function: surveilling people without their knowledge. The iPhone also has surveillance technology, but its widespread adoption makes it difficult to use without tipping off the subject of a video. Google Glass creates a middle ground: a technology that does allow for discreet surveillance, but also has alternate functions that “excuse” its usage in public.

The fundamental issue, then, is that Google Glass allows for surveillance, and the primary barriers against it are social norms. You have to trust that a Glass wearer doesn’t have the camera recording or uplinking live to the Internet. This contradicts the norm that anyone who owns such technology is inherently less trustworthy; the open question is whether the norm for spy pens will be applied to Google Glass, and if so, how long that norm will survive.

The Cost of Surveillance — The last twelve years have been eye-opening, and not in a good way, for those of us who have been banging the privacy drum as a social issue. Speaking as a long-time activist, I’ll summarize the last thirty years of privacy activism:

  1. Computerization and big data begin to give private corporations far more information about their customers than ever before. This is also a concern when used by repressive governments, but activism is muted in Western nations as it’s presumed that legal protections will prevent individual and collective monitoring.

  2. Governments begin to attempt to outlaw or restrict some technologies (specifically strong encryption) while expanding their abilities to monitor electronic communication without going through protective channels, such as requiring a warrant. Post-9/11, this trend becomes an avalanche, making camera surveillance and big data processing the norm rather than the exception.

  3. Most importantly from the perspective of social norms, thinking about being the target of surveillance shifts. It was formerly “unsurveilled unless suspected guilty” — a presumption that the government, at least, required a reason to track your actions and movements. That has changed to “I have nothing to hide,” which switches the onus of action from the watcher to the target, while arguing that most decent targets should, in fact, be perfectly OK with this.

The corollary to these shifts: I think there used to be a universal perception among privacy activists that at some point, some encroachment on privacy would be the straw that broke the camel’s back, causing a massive uproar that created stronger protections. And in fact, this happened: you can rest assured that your videotape rentals are safe from prying eyes. For everything else, though, a lack of concern is socially normative, unless of course you “have something to hide.” Why else be concerned?

The problem with this comes when we consider social norms about shame and transgressive activities. Tyler Clementi and Rehtaeh Parsons were both driven to suicide after video of their “transgressive” activities was circulated on the Internet. In Clementi’s case, he chose to kiss a man; in Parsons’s, she was attacked and raped by multiple assailants.

It has been quite a long time since American and Canadian culture viewed suicide as a justifiable response to homosexual behavior or to being a rape victim, but our norms about promiscuity in general, and resulting video evidence in particular, are still several decades behind our actual actions. I doubt I’m alone in saying that whatever my friends do behind closed doors is their business, while at the same time, I’m certain that none of them are the “type of person” who has nude videos of themselves on the Internet.

What “type of person” is that? Logically, it’s potentially anyone who’s ever been nude with the lights on; that’s the set of all people who might have chosen to film and circulate their videos, or who might have had this done to them without their knowledge. Ergo, my thinking about this personality type is rather illogical.

What’s particularly telling about this example is that I had to switch to sexual activity to discuss a shameful or transgressive act that I thought most readers could relate to. Few people would be happy if their GPS data showed them going to a bordello while on a business trip to Reno; most married people would rather not have their GPS records correlated with their coworkers to show they were both in a hotel near the office at noon. But there might be entire categories of shameful activities that I’m socialized to ignore, and won’t notice until the first case occurs. (Arguably, the exceptionally tragic aspect of the Rehtaeh Parsons case is my belief that “being raped” was left behind as a reason for shame decades ago.)

What Google Glass portends is a future where such surveillance is (a) imposed on its targets without their control, and (b) in the hands of private citizens. To my way of thinking, it’s currently a bigger but different issue when a government with the power of prosecution has such data. In fact, widespread surveillance of the government is a useful check on its abuse of power.

But the problem with putting this in the hands of the general public, not to put too fine a point on it, is that some small percentage of the general public are bastards. The norms we develop about appropriate use of the technology will be broken, and the norms we currently have about people being “caught on film” will take a long time to catch up to the realization that it could be anyone.

Where might we be headed if we don’t take the reins of our cultural and legal environment? One possibility is a future where we can be placed under government scrutiny for saying the wrong thing or associating with the wrong people; one where we can be shamed in public if we aren’t self-censoring our private activities at all times. The problem with this is that the red line of transgression moves all the time; the wrong person, the wrong statement, and the shameful act may be very different ten years from now, but our data probably will live longer than that.

Placing Bets — It’s easy to presume from the prior discussion that I’m opposed to video cameras in general, and Google Glass in particular. I’m not; I would have gotten myself into the private Glass beta if the price weren’t ridiculous, and it’s a safe bet I’ll be an early adopter of whatever comes down the pike in 2014. I’m extremely concerned about what happens when most people are wearing Glass-like devices — but I think the solutions to this are legal protections and a more actively evolving culture about whom we deem to be “bad people.”

That said, I can make a few predictions about where the field will go, based on the cultural observations I’ve made here, and on how we’ve already adopted technologies like smartphones, GPS, and webcams:

  1. Smartwatches are likely to become standard sooner than Google Glass. We’re already comfortable with handheld screens, and there’s little question that many people want to get text messages and notifications on their wrists. The questionable part is whether our cultural norms will change to make smartwatches acceptable. My guess is that they will, but not without a great deal of kicking, screaming, and letters to relationship advice columnists. The interesting things to watch, so to speak, will be whether there are places and times when smartwatches will be considered rude; likewise, widespread adoption of smartwatches might expand the realm of places where using a smartphone more overtly is socially unacceptable.

  2. Google Glass is ultimately what’s next, but it might be too soon — the Newton of 2013. A wearable heads-up display has been part of science fiction for decades, and among a certain part of the viewing and reading public (of which I also consider myself a member), the response has been “that is so damn cool.” The problem is that Google Glass stretches too many social norms, possibly past the breaking point. In its first generation, Glass is immediately recognizable; future models that make Glass less obtrusive might push the technology more squarely into the creepy spy pen category, not less. I reluctantly believe that video cameras in watches and headwear will do little to slow the adoption of such devices; on the other hand, their unique ability to follow us soundlessly into bedrooms and private areas (and instantaneously broadcast that information out) could extend the creepy definition to them in a way that it has not extended to smartphones.

  3. The unsolved problem with both of these technologies is the control mechanism. It has become socially acceptable to have a brief phone conversation in public, but it’s not generally kosher to use Siri or voice dictation. That’s the primary way to control Google Glass, and probably the way you’ll do complicated things on a smartwatch. (Unless there’s a major breakthrough for user interfaces on a two-inch square surface.) I think voice control of smart devices is shortly going to become much more common and will generate new norms of public behavior; I wouldn’t be surprised if, just as we have the “quiet car” on Amtrak, we might soon have “quiet” cafes and restaurants.

  4. Where I expect Apple to dominate is in pushing this out of the early adopter phase and into the mainstream. Apple will do the heavy lifting of coming up with a useful (by general standards), underpowered (by geek standards), and simple (by your parents’ standards) way of interacting with your other Apple devices through a watch. That’s not to say that I expect Apple to “win” the smartwatch competition automatically, only that it’s Apple’s involvement — and perhaps only Apple’s involvement — that transitions a smartwatch from “geeky toy” to “the kind of thing you see everyone wearing at the airport.”
