Computer screens of all sizes have been a constant in our lives for decades. But what if you suddenly couldn’t use any of them? How would you maintain a semblance of your connected lifestyle? Could you make do with just sound?
I faced this dilemma in August 2022 after I lost control of my bicycle in downtown St. Paul, went flying, and slammed my helmeted head onto the asphalt with a sickening crunch. I suffered a nasty concussion. Soon afterward, my world began to go dark.
I was in the extreme throes of photophobia, a kind of light intolerance that caused ambient illumination to feel like knives being shoved into my eye sockets. This turned out to be a common concussion side effect, one of several I experienced over the following months. As my condition worsened, and I started wearing a sleep mask all the time to make the pain go away, I realized that I had effectively become blind.
I panicked at the thought of being unable to use my computing gadgets to tap into the information, interaction, and entertainment I normally consume all day. I wished I knew how to use VoiceOver, which has allowed visually impaired people to control Macs and iOS devices for years. But I didn’t feel up to mastering its complexities in my impaired state. I needed something faster and simpler, so I turned to my smart speakers with their built-in voice-controlled assistants.
At the time, my collection of these voice-controlled gadgets included a pair of HomePod mini speakers with Siri, a handful of Google’s Home and Nest speakers using Google Assistant, and an assortment of Amazon Echo and Meta Portal devices using Amazon’s Alexa. I have since replaced the HomePod minis with two of Apple’s recently released full-sized second-generation HomePods.
My use of the speakers until then had tended to be simple: setting timers, adjusting the thermostat, turning lights on and off, playing music via AirPlay, and so on.
Reasoning that these are screenless computers of a sort, I wondered to what extent they might fill in for the screened gadgetry I could no longer use. Could they be my HAL 9000 minus the psychosis, my Samantha from the film Her, my J.A.R.V.I.S. from the Marvel Cinematic Universe?
Spoiler alert: smart speakers are still pretty dumb.
I have organized this story into categories—podcasts, news, voice calls, music, books, and Web search—each denoting a computer use that I tried to replicate on my smart speakers, with varying degrees of success.
The Web search category was particularly important because I was desperate for detailed information about my ailment, and my family had only so much time and patience to search for me. The voice assistants were lousy substitutes for reasons I’ll explain.
All this happened months before AI chatbots like OpenAI’s ChatGPT became widely available, enabling human-like interactions that could unearth information more exhaustively, if not always accurately. The implications for smart speakers are clear.
When putting my smart speakers through their paces, I was keen to learn how my HomePods compared to the competition. They excelled at some things but fared poorly in other categories.
Amazon’s and Google’s smart speakers differ from HomePods in a key respect: they operate more autonomously, connecting directly to the Internet, while Apple’s speakers function largely as extensions of other Apple hardware, primarily the iPhone. This proved to be a crucial distinction at times.
Amazon and Google also differ in providing smart displays—smart speakers that have screens grafted onto them. Apple offers nothing of the sort, which is a shame because my Google Nest Hub Max and Amazon Echo Show 8 proved helpful in ways their screenless cousins could not.
What follows is similar to an earlier article of mine about smart speakers’ non-music uses (see “Beyond Music: Comparing the HomePod to Amazon Echo and Google Home,” 15 March 2018), but with updated details, and geared more specifically to my medical situation.
I am struck, a half-decade later, by how little smart speakers have changed.
Podcast aggregator apps on my iPhone, including Overcast and Pocket Casts, became difficult to use when I couldn’t see what I was doing. Happily, my smart speakers became more than adequate substitutes.
My needs were not complicated. With a list of my favorite podcasts in my head, I issued commands such as “Play MacBreak Weekly” to my Amazon and Google speakers. Such a request reliably launched the latest episode of Leo Laporte’s Apple-centric show via the Amazon Music and Google Podcasts services. Amazon lets you designate other services as playback defaults, such as TuneIn, iHeart, and Apple Podcasts; Google lets you assign Spotify to that task.
My HomePod experience was similar, but I often had to add to my voice command a reference to the podcast’s point of origin—“Play Planet Money from Apple Podcasts”—lest it look in the wrong place and give me a music track instead of a podcast episode.
With time to burn, I wanted to dig into podcasts’ back catalogs. The “Play previous episode” command on the Amazon and Google devices made this possible—to a degree. I could listen to the ten most recent episodes of any podcast via Amazon Music or Google Podcasts. This was a great way to catch up with my favorite shows.
Not so on the HomePod. “Sorry, I couldn’t go back,” Siri responded when I asked for older episodes. Worse, my requests sometimes triggered a “Sorry, there is a problem with Apple Podcasts” error. (To be fair, all the speakers regularly returned cryptic error messages for various reasons.)
I could access my podcasts on my HomePod in another way: by using Siri shortcuts built into podcast apps on my iPhone. In Overcast, saying “Overcast favorites” whisked me to the top of my preferred podcast episode list. Likewise, in Pocket Casts, a few words got me to any podcast or podcast filter, such as Favorites or In Progress.
However, I had committed few of these Siri shortcuts to memory when my curtain fell, so I did not lean heavily on this method. Besides, my HomePods sometimes insisted that I log into my iPhone before they would execute a command, which defeated the purpose.
I’m addicted to the news, which I often consume in audio format via National Public Radio and its Minnesota Public Radio affiliate. Much of that content is packaged as podcasts, which I just discussed.
But sometimes, I wanted to listen to the live audio feed of KNOW (MPR’s news channel). This took some experimentation with phrases such as “Play Minnesota Public Radio,” “Play MPR News,” and “Play KNOW” because not every speaker understood these word sequences in the same way. For instance, Google sometimes took “Play Minnesota Public Radio” to mean MPR’s popular-music station, The Current.
Regularly refreshed news briefings were also readily available. On my HomePod, “Give me the news” coughed up the latest hourly recording of NPR News Now. For those not into public radio, Apple provides other news-summary sources such as CNN and Fox News. Use a “Switch to…” voice command to set a news source default.
Amazon and Google users can get fancier by assembling a news briefing from various news sources (such as CNN, PBS, and the New York Times) that play in sequence. You can set up My News Channel in the Alexa app and News Sources in the Google Home app. In each case, a phrase such as “Give me the news” jumpstarts a briefing. Fortunately, I had set these up before my bicycle accident occurred.
Sadly, there isn’t anything quite like this in the Apple world—at least not without an irksome podcast “station” setup I describe in “Beyond Music: Comparing the HomePod to Amazon Echo and Google Home.”
My ultimate news goal during my convalescence was getting NPR One to work on one or more of my smart speakers, something I’d never needed to do before. “NPR One” refers to a customizable service (in mobile and Web app form) that strings stories from dozens of public-radio programs and podcasts into a never-ending stream that adapts to user preferences. For NPR junkies like me, NPR One is news meth.
I was disappointed that the service is native only to Amazon speakers, once users have installed the NPR Alexa skill (which I did with help from my family). It’s not called NPR One in this context, but it works just like NPR One when I trigger it with an “Alexa, play NPR” command.
While NPR has expended great effort to adapt itself to smart speakers, NPR One has lagged on Apple and Google hardware. In the Apple universe, NPR One works on Apple TV and CarPlay, but not on HomePod. In the Google world, it supports Wear OS (Google’s smartwatch operating system) and Android Auto, but not my Google Home Mini or Nest Audio speakers.
Apple came through for me when it came to making and receiving phone calls. This was unsurprising since I have long used my HomePods as a speakerphone to make and receive calls.
In this regard, my HomePods were among my greatest sources of comfort since—distraught from my isolation—I craved nothing more than the voices of my friends and family. Their numbers in Contacts were just a quick “Call” command away.
HomePods don’t place such calls on their own, of course; a nearby iPhone does the work, and FaceTime calls likewise require an iPhone or iPad. But this setup proved so trouble-free that I never bothered with alternatives.
Amazon and Google also allow voice calling via their speakers, but the implementations are not as elegant and have limitations.
Amazon users can define up to ten mobile or landline numbers to call via Echo hardware. As the calls are placed, numbers are added to an Alexa-to-Phone list that can be viewed and edited in the Alexa app. Some kinds of numbers don’t work, including those outside the US, UK, Canada, and Mexico.
Amazon also allows one Echo speaker to call another—or a phone with the Alexa app—directly. More recently, Amazon has permitted AT&T, T-Mobile, and Verizon users to associate their cellular numbers with their Alexa accounts (via related skills) to make and receive hands-free calls. This approach roughly replicates HomePod voice calling.
Google users can make calls using their speakers in a few ways. One method, carrier calling, piggybacks on a Google Voice or Google Fi account. The other, Google-supported calling, requires no such account. Only Canada and US calls (excluding US territories) are supported.
Google’s Meet video calling service also enables calling among Google speakers, along with computers and mobile devices.
When looking for a painless way to get music, I was delighted that, just months earlier, I had signed up for the Siri-only version of Apple Music (see “Apple Music Voice Plan Is a Bargain If You’re OK Using Siri,” 3 January 2022). It fit my situation so beautifully I did not bother much with alternatives.
As I noted in my review:
Since its release, I’ve been using the $4.99-per-month Apple Music Voice Plan and have found Siri to be surprisingly competent in fielding my music requests … Voice Plan has persuaded me that a Siri-centric music service could be a decent choice for certain kinds of people. To my surprise, I’ve enjoyed using Apple Music via Siri … and I may even decide to stick with it.
I now want to laugh at that “certain kinds of people” phrase. Blinded by photophobia, I’d become such a person, even though I had not meant anything medical when I wrote those words.
I did give my non-Apple speakers occasional workouts. I am a Pandora subscriber and have long designated it as the default for music playback on my Amazon and Google speakers instead of Amazon Music and YouTube Music. I was grateful that this was ready to go as I convalesced.
Apple Music is also available on the Amazon and Google speakers, but only the full version of the service that costs $10.99 per month, not the $4.99-per-month Voice Plan. Other music services work on both platforms, including Deezer, iHeart, and Spotify. Amazon provides nine options; Google has six.
HomePods support fewer music services, notably Pandora and Deezer, either of which can be designated as the music default instead of Apple Music.
Until my photophobia, I seldom listened to audiobooks, preferring print and ebook formats. But I do own audiobooks, and I became eager for them when they were my only practical reading options.
Apple disappointed me. It’s impossible to queue up audiobooks in an Apple Books library on a HomePod via a voice command. Someone without my temporary disability could have used AirPlay on an iPhone, or other avenues that were not practical for me. This limitation applies to audiobooks purchased using Apple Books, as well as audiobooks synced over from my Amazon Audible account.
I got so frantic to read my Apple Books copy of The Road, Cormac McCarthy’s dystopian novel, that I resorted to blindly tapping at the Books app on my iPhone, with limited success. I did appreciate that my Siri request for “Play The Road” pulled up thrilling Gareth Coker music from the Halo Infinite soundtrack (a throwback to the 1999 Steve Jobs keynote when the Apple CEO revealed the Halo video game to the world), but it wasn’t what I wanted.
Amazon, by comparison, made audiobook playback a cinch. When I asked Alexa to “Read Sailing to Byzantium,” Robert Silverberg’s sci-fi novella fired right up. A while later, uttering “Continue my book” picked up where I had left off. Many more Audible with Alexa commands are available.
Playing Google audiobooks worked similarly but was less reliable. As with Amazon, my Google book library resides in the cloud and is accessible with intuitive commands—such as “Read Rise of the Dragons: Kings and Sorcerers” to get started with the intriguing fantasy title and “Read my book” to continue whatever book I was reading. But Google Assistant couldn’t find some of the titles in my library, and sometimes just told me, “Sorry, I didn’t understand,” when I asked for one of my books.
I regretfully had to sideline my usual source of ebooks and audiobooks, the Libby app, which functions as a borrowing portal to my public libraries (see “Skip the Library Trip, Borrow Ebooks and More at Home,” 14 September 2020). I could discern no obvious way (other than VoiceOver) to manipulate it in my sightless condition. Libby does not appear to have pre-baked Siri shortcuts, for instance. That’s a shame because it would have provided a vastly larger audiobook selection.
Access to Google search was by far the most critical category because, marooned in darkness, I was bursting with questions about my condition but lacked my usual ability to engage in deep research on my Mac or iPad. I would have been asking too much of my family to be my Google proxies for hours at a time.
So I lobbed at my speakers question after question about concussions and their side effects, which, in my case, also included brain fog, balance issues, and uncontrollable twitching all over my body. These were all signs of post-concussion syndrome, which is what I experienced when I did not recover from my concussion within a few weeks, as most people do.
It quickly became apparent that the speakers would be little help. I needed the responses to be read out to me, which often happened, but I’d get only unsatisfying snippets of info from Wikipedia or medical websites. I had no way to follow up—requests for additional information were met with bafflement.
Frequently, my HomePods wouldn’t speak the results at all but instead instructed me to replicate my queries on my iPhone so I could read Web results—clearly impossible for me and utterly frustrating.
The Google speakers were the most helpful because they provided two-stage replies. They would answer my question and then suggest a related question that Google had often fielded from other users. Would I like to hear that? If I said yes, they’d give me the answer.
For instance, if I asked, “What is photophobia?” Google Assistant would reply and then helpfully volunteer that others had inquired, “Is there a cure for photophobia?”
This was fine, but again, I was being spoon-fed morsels of information with no way to go deeper or select or vet the sources.
I had a way to circumvent this limitation somewhat. Because Google’s Nest Hub Max has a screen and plays YouTube content, I could request videos about my research topic but just listen to the audio. Asking the device to “Play videos about post-concussion syndrome,” I got a bit of a crash course about the disorder. And I could endlessly go from video to video with a “Hey Google, next” command.
My Amazon devices with screens, the Echo Show 8 and Meta Portal, were much less helpful. On the Echo Show, I could ask Alexa for a video about a topic, but that bounced me via a Bing search to the DailyMotion website, where I had no voice navigation options and content that tended to stray off-topic after the initial video. On the Meta Portals, none of that worked at all.
But even Google’s Nest Hub Max proved unsatisfying because it forced me into sequential video voyages over which I had little control, sometimes subjecting me to a dozen or more unhelpful videos before I arrived at a useful one. On the Mac, I would just study the screen of video thumbnails and zero in on the most useful clips. Smart, these speakers are not.
I needed an assistant with which I could have a human-like conversation to guide it through complex avenues of inquiry. I needed something like the Knowledge Navigator, an iPad-like device with a built-in assistant that then-Apple CEO John Sculley imagined in the late 1980s and demoed in concept videos. One showed the assistant as a bow-tied butler doing a professor’s bidding in varied and intricate ways.
Unfortunately, my accident and recovery occurred several months ahead of the artificial intelligence craze that is now sweeping the tech world courtesy of chatbots such as OpenAI’s ChatGPT, Microsoft’s Bing (which uses a version of ChatGPT), and Google’s Bard.
These chatbots differ from conventional search engines because they understand and generate natural language while acting on a dizzying range of complex requests based on the user’s context and intent. They can compose poetry, debug code, create vacation itineraries, suggest retail purchases, furnish recipes based on the available ingredients, summarize videos or text, translate languages, and more (see “ChatGPT: The Future of AI Is Here,” 9 December 2022).
When I give Bing commands like, “Write me a step-by-step medical plan to recover from post-concussion syndrome,” it returns detailed responses that would have delighted me when I was convalescing. What’s more, I can speak the requests because the iOS app version of Bing takes voice queries by default and reads back the answers. This would have been a miracle during my months of darkness.
You can see where I am going here. The assistants built into smart speakers need to work like ChatGPT when appropriate. That would have made my speakers vastly more useful. I would have been able to engage in open-ended conversations, with one query leading to another and another, much as ChatGPT allows now.
Instead of the unsatisfying snippets of medical information the speakers fed me during my dark time, I could have extracted the contents of entire websites once I figured out the voice prompts. I might have become a concussion expert (assuming the data ChatGPT fed me was accurate, which it famously often is not). I would have been spared the anxiety of ignorance. Knowing what I do now, I might have healed sooner.
For example, I had operated on the erroneous assumption that the intense pain caused by my photophobia meant that light could physically damage me and that the only way to forestall such injury was to cower behind my sleep mask 24/7. A traumatic brain injury specialist eventually revealed to me that the pain did not equal damage, and said that the only way I’d get better was by intentionally exposing myself to light in increments until it no longer bothered me. When I did that, I improved quickly. If only I’d had that information earlier!
An AI chatbot might have made a difference in every category I’ve detailed, if properly integrated with Siri and the broader iOS and macOS ecosystems. It would have had no trouble helping me plumb the depths of podcasts’ back catalogs, play KNOW’s live feed no matter how I phrased the request, set up NPR One for me, play audiobooks I had purchased in Apple Books, and so on.
Enterprising iOS users are making this happen, up to a point, but Apple needs to step up with an official implementation.
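For the curious, those unofficial integrations typically wire a Siri shortcut to OpenAI’s chat API: the shortcut captures dictated text, sends it off in a single authenticated request, and speaks the reply. As a rough sketch—assuming you supply your own API key, and with the model name and question purely illustrative—the plumbing looks like this:

```python
# Hypothetical sketch of the request a DIY "ask ChatGPT" Siri shortcut
# would send. Only the payload is built here; sending it requires a
# valid OpenAI API key (the "sk-..." placeholder below is not one).
import json
import urllib.request

def build_chat_request(question, api_key, model="gpt-3.5-turbo"):
    # OpenAI's chat completions endpoint takes a list of role-tagged
    # messages; a voice front end would pass the dictated text as the
    # lone user message.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("What is post-concussion syndrome?", "sk-...")
print(req.full_url)
```

A Shortcuts action (or any other voice front end) would then read the returned message content aloud—exactly the open-ended, follow-up-friendly exchange my smart speakers couldn’t manage.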
I’m now back to using my speakers for the simple stuff like setting up tea timers, asking for the weather forecast, playing podcasts from my phone via AirPlay, adjusting the temperature on my Nest thermostat, and so on. I had tried to push the smart speakers to become more significant parts of my life, but there now seems little point to that until they go back to school for some artificial intelligence—they certainly don’t have much now.