CES 2017: Tech Trends to Watch
This year’s CES began with two unwelcome realizations: 1) the USB-C port on my Nexus 5X phone grips the cable far more tightly than it grips its own internal connections, and 2) my middle-aged tuchus apparently exerts more lateral force on both than I realized.
So I find myself at the world’s largest tech show, where everyone is promising “futuristic” and “ubiquitous,” and I’m deprived of a smartphone for 48 hours. (A new one is on its way; there’s something to be said for $250 base prices.) It’s mundane to note how central a smartphone is to modern life, but that doesn’t change how devastating it is to be without one: it’s how I know where I’m going, how I get there, what I do along the way, and (when I’m working) a substantial part of what I do when I arrive. Instead, for at least two days, I’m going to be more reliant on a paper pad and pencil than I have been since 1999. (And here I’ll pour one out for my beloved Palm VII and Sony Ericsson P800.)
In any case, welcome back to CES (formerly the Consumer Electronics Show), the annual Las Vegas bacchanal of approximately 165,000 attendees, thousands of exhibitors, and around 20,000 newly announced gadgets, gizmos, and networks that aim to be your next “so important I’d be lost without it” technology. I’ve received press releases on gear ranging from smart spoons and trashcans to infrared iPhone cameras and artificially intelligent telescopes, and I’ll be writing about the slice of newsworthy technology that I’m able to see, because everything here happens simultaneously in dozens of venues spread across the entire city.
I’ll also try to convey what it’s like to be here. CES is a closed show for industry and press, and while it’s exciting, more of my colleagues on Twitter seem to be expressing joy that they’re not coming than looking forward to working the show. It’s fun but also akin to surfing during a hurricane. CES is the kind of event where, when I took a break, I was thankful for the relative peace and quiet of a casino. This is the 50th anniversary CES, covering 2.6 million square feet of convention space; the Consumer Technology Association, which runs the show, estimates that CES will be covered by 1000 more journalists than attended the Olympics.
I kicked off the show with a presentation by CTA’s chief economist, Shawn DuBravac, who spoke about consumer technology trends for 2017. The first of these is the increasing adoption of voice as a user interface for electronics. DuBravac discussed the switch to graphical user interfaces in the 1980s, then the rise of smartphones in the past decade, and said that voice interaction removes the need for the screens those earlier technologies required. Each of these interface approaches is, in its way, redundant; anyone who has worked with Terminal knows you can do nearly everything from the command line. But taking excess CPU horsepower and using it for a GUI made for a transformative experience. Likewise, you interact differently with technology when you’re talking with it, even if that conversation is an “unnecessary” additional way of doing things.
Voice recognition has made massive improvements in the past 20 years. In 1995, voice recognition had essentially a 100 percent word error rate, but today the technology is about as good as humans are at understanding what you say. (People mishear what’s said about 5 percent of the time, apparently, so maybe you’re being unfair to Siri.)
Voice interaction is the emerging interface for robots and other technology along the lines of Amazon’s Alexa or Google Home. DuBravac called this a new era of “faceless computing,” where you can get things done while looking at other things; color me skeptical about whether it’s better. Yes, Alexa can call you an Uber, but I like having my phone show me that the driver is picking me up in front of my house, instead of the back alley that Uber keeps sending them to. That said, for casual interactions, voice is excellent. Look for new capabilities to make this better, such as services that can tell who is speaking based on their voiceprint, allowing for things like customized search results and parental controls over kids’ interactions.
DuBravac’s next trend is the infusion of artificial intelligence into a wider range of applications. The falling price of both chips and software allows reasonable intelligence to be built into just about anything, although I think there’s too much hype around both “AI” and “smart,” terms that can be applied to nearly anything. Is a refrigerator with sensors and a computer chip “smart” or “artificially intelligent”? No… but on the other hand, it might take care of small chores that no human would bother with, like micro-adjusting its temperature to best suit the foods it’s storing.
Likewise, it’s a bit hard to tell when a new device is usefully smart, or when it’s taking “smart” a step too far. For example, it would be trivial for a washing machine to count the number of loads it has done, keep track of how much detergent you have left, and automatically order a refill when you need it. Alternatively, you can buy an Amazon Dash button and push it when you need more Tide. Or you could just remember to buy detergent. DuBravac suggested that in the future, maybe 40 to 50 percent of all of our household purchases could be made by designated agents like smart washers.
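The count-loads, track-detergent, reorder-when-low logic described above fits in a few lines of code. Here's a minimal sketch; the class, the per-load detergent usage, the reorder threshold, and the order callback are all invented for illustration, not any vendor's actual API:

```python
# Hypothetical sketch of a "smart" washer's auto-reorder logic.
# All names and numbers here are assumptions for illustration only.

class SmartWasher:
    DETERGENT_PER_LOAD_ML = 50   # assumed detergent used per load
    REORDER_THRESHOLD_ML = 200   # reorder when roughly 4 loads remain

    def __init__(self, detergent_ml, order_callback):
        self.detergent_ml = detergent_ml
        self.loads_done = 0
        self.order_callback = order_callback  # e.g., places an online order
        self.order_pending = False

    def run_load(self):
        """Run one wash load, decrement detergent, and reorder if low."""
        self.loads_done += 1
        self.detergent_ml = max(0, self.detergent_ml - self.DETERGENT_PER_LOAD_ML)
        if self.detergent_ml < self.REORDER_THRESHOLD_ML and not self.order_pending:
            self.order_pending = True
            self.order_callback()

    def refill(self, ml):
        """Owner added detergent on their own, so cancel the pending order."""
        self.detergent_ml += ml
        self.order_pending = False
```

Note the `refill` method: any real agent would need such an override, because inventory can change outside the washer's view.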
I have no issues with removing minor annoyances from my life, but it seems a bit Brave New World to me if my appliances do my shopping without any interaction on my part. Perhaps I want to change detergent brands, or some friends gave me an extra bottle when they moved, so I’m not out when the washer thinks I am.
Beyond that, I note that this futuristic vision requires an affluent lifestyle that reliably remains affluent. I doubt the average person living paycheck to paycheck would be well served by having so many automatic deductions from their bank accounts, and woe to the laid-off person who suddenly needs to remember to cancel a hundred different automatic orders.
This leads us to the smart home, part of DuBravac’s next trend: connecting everything to everything. He compared smart homes to the adoption of the dishwasher, which was invented in 1893(!) but took until the 1980s to make its way into half of U.S. homes (penetration is around 80–85 percent today). The jump in adoption didn’t happen until the necessary infrastructure arrived after World War II: indoor plumbing, electrification, and the standardization of kitchen countertop heights.
DuBravac sees ubiquitous Wi-Fi and cellular broadband as the standardized countertop of the smart home; it’s the prerequisite that allows the smart home to be built up piecemeal, from many competing vendors, as opposed to the whole-house model that made little headway in the market previously. Smart home gadgetry will be a global $25 billion market this year, with $3.5 billion of that in the United States. On the downside, market competition that allows on-the-fly adoption also makes for competing standards, which slows mass market adoption.
Rounding out DuBravac’s home connectivity trend will be wearables that enable your house to monitor you. Whether it’s biometric feedback designed to improve your health or interactive environment sensing that lets your home know whether you’re there and where you are, technology you carry on your person will integrate you into a data set about your home, which will let your home be responsive to your needs and expectations.
The path to making smart things smarter comes from the interconnected nature of many of these devices; when one device senses an error condition or learns something new, it can improve the responsiveness of all other devices in its class. That’s how self-driving cars went from an 11-minute failure in 2004 to millions of test miles successfully driven on the roads today. This kind of aggregation puts many devices on an exponential learning curve, enabling rapid development from beta testing to seriously road-tested.
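The aggregation idea above, where one device's discovery propagates to every other device in its class, can be sketched as a toy model. All class and method names here are hypothetical; real fleet-learning systems are of course far more involved:

```python
# Toy model of fleet learning: when one device learns a fix for a new
# condition, the fleet pushes that fix to every registered device.
# Everything here is invented for illustration.

class Fleet:
    def __init__(self):
        self.devices = []
        self.known_fixes = {}   # condition -> learned response

    def register(self, device):
        self.devices.append(device)
        device.fleet = self

    def report(self, condition, fix):
        """One device learned something; share it with the whole fleet."""
        self.known_fixes[condition] = fix
        for device in self.devices:
            device.learn(condition, fix)

class Device:
    def __init__(self, name):
        self.name = name
        self.fleet = None
        self.responses = {}

    def learn(self, condition, fix):
        self.responses[condition] = fix

    def encounter(self, condition):
        """Handle a condition, learning (and sharing) if it's new."""
        if condition not in self.responses:
            fix = f"mitigation-for-{condition}"  # stand-in for real learning
            if self.fleet:
                self.fleet.report(condition, fix)
            else:
                self.learn(condition, fix)
        return self.responses[condition]
```

The point of the sketch is the asymmetry: only one device has to hit the error condition, but every device in the class benefits, which is what bends the learning curve upward.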
Of course, it’s one thing to predict overall trends; it’s another to see how 2017’s devices are living up to these models. And that’s one thing I’ll be sure to look for in the demos at CES.
Thanks for an interesting article. I look forward to more on CES. I'm skeptical of your statement, "On the downside, market competition that allows on-the-fly adoption also makes for competing standards, which slows mass market adoption." This implies that the alternative that you discuss, "the whole-house model", does not lead to competing standards nor slow market adoption. In fact, the whole-house model shares both of those problems.
Ah, wasn't my intention to imply that. But the difference is that whole-house models were installed at great expense by the contractors, so an expert made sure it was all interoperable. The new model brings prices down to somewhere more reasonable, but it's on the homeowner to make sure that everything they're buying works with everything they already have.
Interesting article. Thinking about the move to voice - and the removal of displays. This could make batteries last much longer in devices. And finally we start to appreciate the health of our hearing. So much has been done on eyesight - spectacles, laser treatments, etc. I hope that the same will start to apply to hearing, both in terms of protection as well as cures.
Yes! Voice interfaces will have more trouble gaining traction if our collective hearing can't keep up.
I've written for TidBITS in the past about my crappy hearing and technology to help — https://tidbits.com/article/11954. At this show, I'm definitely looking for assistive hearing technology, although I'm skipping outright discussions of full-blown hearing aids as I'm not qualified to cover medical technology. But things that fall between consumer tech and medical tech are definitely catching my interest.
'…I doubt the average person living paycheck to paycheck would be well served by having so many automatic deductions from their bank accounts, and woe to the laid-off person who suddenly needs to remember to cancel a hundred different automatic orders.' Indeed. Thanks for daring to look outside of the tech bubble and reflect on the society it's meant to serve!
Sure, that's the whole point, isn't it? It's not saving me any effort to do an automatic deduction if I'm getting four notifications about it, and I *want* the four notifications so I'm not giving up too much autonomy over my checkbook. Personally, even when a deduction is budgeted and affordable, I'd rather spend the ten minutes proactively paying my bill. Giving my technology the power to hit my Amazon account, in general, gives me the heebie-jeebies.
Amen to that!
tuchus?
Yiddish for rear end. ;-)
But does the smart washer know about the sale on detergent at Costco this month?
I took a look at the Dash Buttons available at Amazon. They are legion. Imagine half-a-dozen or more of those sitting on a shelf or countertop somewhere. Yikes! I think it's a gimmick, no more than a toy.

Far more useful would be a customizable shopping list on your cell phone, tablet, or PC that you could access as needed. Or even a simple WiFi-enabled tablet/screen device powered by a modest computer chip that you could put on the wall in the kitchen or laundry room. That way you could get video, audio, or printed feedback to help you select, place, and confirm your order. Or even input the order by voice while reading your choices onscreen. Of course you would need to be an Amazon Prime member to make the process affordable.

If you think Amazon is big now, wait till this kind of shopping becomes ubiquitous. Google, Apple, Microsoft? The best they can hope for is to plug their technologies into Amazon. You know, like using Apple Pay on an iPad or Mac. Sadly, that requires an iPhone to work, so it's not exactly an affordable option. Safari will remember my credit card numbers if I let it, which I do not. It's a question of control—and a little bit of paranoia.
Where a device like Alexa falls down is that it cannot show you a list to help you decide what to order. Hence the need for a screen. This is where Siri has real potential. If Apple doesn't lose interest along the way, as they are prone to do. Otherwise we'll be doing our shopping on Samsung and Android devices.
Another feature of the customized shopping list would be the ability to select the product size or quantity you want to order, just as you do in the supermarket or online. If you have to use your cell phone to set up a Dash Button you might as well use the phone to place the order. Obvious much?
The Dash Button illustrates the silly extremes of tech. You know, like the netbook, Windows RT, and 3D TV. There will be many others on the road to true automation. Speaking of which, having a device like a refrigerator do your thinking for you is a bad idea. If we don't want our machines running our lives (the Frankenstein scenario), we need to keep human consciousness and intention in the loop.

Which is why I think it's more than premature to write off the screen on our devices. The simple fact is that most of us read faster than we talk. And an audio-only conversation with a device to find a selection would be both tedious and frustrating. Example: automated telephone answering systems. We love 'em, right? Not. Our smartphones have gotten larger and more capable in the last few years. Why the bigger screens? So people can see what they're doing. Any voice response system will need to interact with the content on the screen in some way in order to be truly useful.
The question of standards is probably the most important issue raised by these emerging technologies. The Internet, for example, was first established on a small scale by the government and various universities. The small scale enabled the creation of standards organizations that set the rules of the road. As a result, you can use any brand of computer, router, wireless phone, fax machine, or smartphone and not have to worry about whether it will work in Milwaukee, Atlanta, or Los Angeles. Other standards were also set by early adopters: the technology of phonograph records and telephones, of audio tape, video tape, CDs, and DVDs, television signal bandwidth for broadcasting and receiving, and TV screen resolutions. Even our big flat-screen TVs use standards-based technology. Some companies implement the technologies better than others, but that's always been the case.
New tech often struggles to find a commercially viable standard, like the fight between Betamax and VHS. Blu-ray had a similar, short-lived struggle to establish itself. At the moment Apple Pay has a few feeble competitors that may or may not survive. They might yet win out, though, unless Apple Pay becomes available on non-iOS devices. You know, like iTunes and iCloud on Windows PCs. Amazon set the best example for this sort of thing by making a Kindle reader app for every platform. Adobe did the same with Acrobat Reader. Apple is famously reluctant to do that. But the iPhone and the Mac do not dominate their respective markets. Apple Pay got a jump on the competition, but unless Apple competes more broadly, its lead will fade. Of course Apple Pay is more than an app; it is tied to hardware in the iPhone. Then again, Steve Jobs originally didn't like the idea of third-party apps on the iPhone, for security reasons, but he changed his mind, and look where the app market is now. A workaround that would enable Apple Pay to function on other platforms is at least conceivable. How feasible it is remains to be seen. But the need for such a workaround is not hard to see.
Anyway, sorry for the essay if you prefer shorter comments. But since I am a TidBITS subscriber they give me plenty of room to ramble.