
1654: Urgent OS security updates, upgrading to macOS 13 Ventura, using smart speakers while temporarily blind

To address two severe security vulnerabilities being exploited in the wild, Apple has released updates to its current iPhone, iPad, and Mac operating systems, plus the previous versions of iOS and iPadOS and the last two versions of macOS. Update all your devices right away. Continuing on the updating theme, Adam Engst shares his experience performing a Level 2 clean install of macOS 13 Ventura on his primary Mac in an attempt to eliminate recalcitrant problems. Finally, Julio Ojeda-Zapata rejoins us with a story about how he was forced to rely on smart speakers from Apple, Amazon, and Google during a months-long recovery from post-concussion syndrome. Notable Mac app releases this week include BBEdit 14.6.5, Fantastical 3.7.9, Safari 16.4.1, and macOS Monterey 12.6.5 and Big Sur 11.7.6.

Adam Engst

iOS 16.4.1, iPadOS 16.4.1, and macOS 13.3.1 Address Serious Security Vulnerabilities, Fix Bugs

Just 11 days after releasing a spate of updates to its operating systems (see “Apple Releases iOS 16.4, iPadOS 16.4, macOS 13.3 Ventura, watchOS 9.4, tvOS 16.4, and HomePod Software 16.4,” 27 March 2023), Apple has pushed out quick updates to iOS 16.4.1, iPadOS 16.4.1, and macOS Ventura 13.3.1 with a smattering of changes.

Why the quick release? The security notes say that the updates block two vulnerabilities Apple says are being actively exploited in the wild. One vulnerability would allow an app to execute arbitrary code with kernel privileges; the other could allow maliciously crafted Web content to execute arbitrary code.

I’m reading between the lines here, but the fact that Apple credits “Clément Lecigne of Google’s Threat Analysis Group and Donncha Ó Cearbhaill of Amnesty International’s Security Lab” suggests to me that these vulnerabilities might have been leveraged by governments using the NSO Group’s Pegasus or similar software to target activists or journalists (see “Apple Lawsuit Goes After Spyware Firm NSO Group,” 24 November 2021).

Apple took the opportunity to fold in a few bug fixes as well. All three operating systems now properly show the skin tone variations for the pushing hands 🫸 🫷 emoji. iOS 16.4.1 and iPadOS 16.4.1 also address a problem that caused Siri to fail to respond in some cases, and macOS 13.3.1 resolves an issue that could prevent you from using Auto Unlock with your Apple Watch.

If my supposition about activists being targeted is correct, the exploits may be aimed mostly at high-value targets. Nevertheless, I recommend that you install these updates right away. It’s never a good idea to stick with operating system versions known to be vulnerable to active exploits. Plus, the Siri and Auto Unlock fixes are welcome enough on their own to justify updating.

As I predicted in the original version of this article, Apple subsequently released additional updates for its older operating systems: see “Safari 16.4.1” (7 April 2023), “iOS 15.7.5 and iPadOS 15.7.5 Address Serious Security Vulnerabilities” (10 April 2023), and “macOS Monterey 12.6.5 and Big Sur 11.7.6” (10 April 2023). Update everything you have.

Adam Engst

iOS 15.7.5 and iPadOS 15.7.5 Address Serious Security Vulnerabilities

There’s nothing new to say here, but as I predicted, Apple has now released iOS 15.7.5 and iPadOS 15.7.5 to address the two security vulnerabilities discussed in “iOS 16.4.1, iPadOS 16.4.1, and macOS 13.3.1 Address Serious Security Vulnerabilities, Fix Bugs” (7 April 2023). Both are actively being exploited in the wild, so I recommend updating older devices that can’t run iOS 16 right away. If iOS 16 is an option for your device, you’ll have to upgrade to version 16.4.1 instead.

Adam Engst

Level 2 Clean Install of Ventura Solves Deep-Rooted Problems

TidBITS readers have recently asked me a few times if I think macOS 13 Ventura is mature enough to install on their Macs. My short answer was, “Yes, it’s fine,” because I have been running Ventura on my M1 MacBook Air since the beta last year and have experienced no problems. The longer answer was, “But I still haven’t upgraded my iMac, and once I do, I’ll write about it.”

You might wonder why I don’t keep my Macs in sync all the time. Even when a new version of macOS is working well, I like to keep one of my Macs on the previous release until I feel confident recommending the upgrade to everyone. Having the previous release available helps me compare behaviors or interfaces between the two and see if bugs have been fixed or introduced. (We won’t speak here of the abomination that is Ventura’s System Settings; it’s not a reason to avoid upgrading, but it is undeniably awful.)

So if you’ve been waiting for us to give the go-ahead, I encourage you to upgrade when convenient. As always, I recommend Joe Kissell’s Take Control of Ventura for upgrade help. Now here’s why it took me so long.

Kernel Panics and Boot Authentication Failures

I wasn’t sticking with macOS 12 Monterey on my 2020 27-inch iMac because of concerns about Ventura reliability or app incompatibility. Instead, I had put off the upgrade because I wanted to perform a time-consuming clean install that I hoped would resolve two long-standing problems.

First, and most notable, was a series of kernel panics that started in mid-2021 in macOS 11 Big Sur and persisted through Monterey. My iMac sometimes panicked twice a day; more commonly, a week or two would pass between panics. Several times it even worked perfectly for 2 to 4 months before succumbing to another spate of panics. (I know all this because I saved 47 panic reports manually in my BBEdit Notes window. macOS used to generate panic logs that I could access in Console. Those logs may still be created, but I can’t find them.) The kernel panics almost never happened when I was sitting in front of the Mac, and once I restarted, macOS restored the state of the Mac to where I was before the panic. While extremely troubling, they weren’t all that disruptive.
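If you want to hunt for those reports on your own Mac, recent versions of macOS generally save kernel panic reports as .panic files in the DiagnosticReports folders that Console also surfaces. Here’s a minimal Python sketch that lists them, newest first; the locations and the .panic extension are assumptions that may not hold for every macOS version.

#!/usr/bin/env python3
"""List kernel panic reports, newest first.

A sketch only: it assumes panic reports land as .panic files in the
standard DiagnosticReports folders, which may vary by macOS version.
"""

from pathlib import Path

# System-wide reports typically live here; per-user reports live under ~/Library.
CANDIDATE_DIRS = [
    Path("/Library/Logs/DiagnosticReports"),
    Path.home() / "Library/Logs/DiagnosticReports",
]

reports = []
for directory in CANDIDATE_DIRS:
    if directory.is_dir():
        reports.extend(directory.glob("*.panic"))

for report in sorted(reports, key=lambda p: p.stat().st_mtime, reverse=True):
    print(report)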

The second problem was less frequent but equally inexplicable. Whenever I installed a minor macOS update, the first boot afterward wouldn’t have access to my keychain for some reason, so that when all my usual apps launched, I was plagued by so many authentication requests (20? 50?) that I had to fight off the dialogs and restart again. On that next restart and every subsequent one, everything was fine. (Fellow TidBITS writer Glenn Fleishman had a similar ongoing problem with privacy preferences after restarts. He wrote up a nuclear solution for fixing it.)

This problem had bedeviled me since at least macOS 10.13 High Sierra and persisted through macOS 12 Monterey. I assumed it was software cruft because my workaround was to switch to an otherwise unused admin account before updating, which suggested that the problem was related to my account. However, I could never root out anything that helped, like a corrupted preference file.

Could a clean install eliminate these annoyances?

Levels of Clean Install

When I say “clean install,” I mean something more significant than the term generally implies. A clean install usually refers to reformatting the Mac’s boot drive and installing a fresh copy of macOS before restoring apps, settings, and data from a backup. Let’s call that a “Level 1 clean install.”

It’s no longer particularly helpful. Since macOS 10.15 Catalina, your Mac’s drive has been split into two parts, even though it still presents as a single volume in the Finder. All your data lives on a read/write volume, while the system files occupy a separate, read-only volume that, as of Big Sur, is also cryptographically signed (the Sealed System Volume). For security reasons, Macs don’t boot directly from that volume but instead from a snapshot of the system. Since every component is signed, a file that is modified or corrupted in even the smallest way—as little as a single flipped bit—due to a failure of the underlying storage invalidates the seal, and macOS will refuse to boot. The same would be true if someone somehow developed malware capable of piercing the locked volume.

In other words, if anything is wrong with your installation of macOS, your Mac won’t boot at all. At least that’s what Apple says—I’ve never actually seen a Mac refuse to boot because of such a problem.
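If you’re curious about the state of your own Mac’s seal, you can get a rough look by running diskutil apfs list in Terminal. Here’s a small Python sketch that shells out to that command and prints the lines mentioning volume names and seals; the exact wording of the “Sealed” fields is an assumption on my part and may differ between macOS versions.

#!/usr/bin/env python3
"""Print volume names and seal-related lines from `diskutil apfs list`.

A sketch only: it assumes the output includes "Sealed" fields for APFS
volumes, which may not be true on every macOS version.
"""

import subprocess

output = subprocess.run(
    ["diskutil", "apfs", "list"],
    capture_output=True,
    text=True,
    check=True,
).stdout

for line in output.splitlines():
    stripped = line.strip()
    # Keep volume names for context, plus anything mentioning a seal.
    if "Name:" in stripped or "Sealed" in stripped:
        print(stripped)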

My iMac never refused to boot, and it installed the upgrade from Big Sur to Monterey and numerous minor updates within each major version without complaint. Resetting NVRAM, running hardware diagnostics, unplugging USB and Thunderbolt devices, and everything else I could think of made no difference and offered no hints toward a solution. My next step was a Level 2 clean install.

Given my role in the Mac world, I install a vast amount of software. In my Applications folder in Monterey, I had 236 items. Some dated back to early 2017, the last time I performed a Level 2 clean install. I don’t even recognize all the apps’ names! The problem is that some of these apps installed kernel extensions and other system-level components over the years, and while Apple’s macOS installer tries to disable crufty old bits that could cause problems, it’s not entirely effective.
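If you’re contemplating something similar, it can help to snapshot your Applications folder before erasing anything so you know what to re-download afterward. Here’s a rough Python sketch; the output file name and location are arbitrary choices of mine, not part of any official process.

#!/usr/bin/env python3
"""Save a list of installed applications to a text file on the Desktop.

A sketch only: it looks just in /Applications (not ~/Applications or
subfolders), and the output path is an arbitrary choice.
"""

from pathlib import Path

apps = sorted(p.name for p in Path("/Applications").glob("*.app"))

inventory = Path.home() / "Desktop" / "app-inventory.txt"
inventory.write_text("\n".join(apps) + "\n")

print(f"Wrote {len(apps)} application names to {inventory}")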

Here’s how I perform a Level 2 clean install:

  • I make several backups and verify that they’re good.
  • To get started, I boot into macOS Recovery and erase the boot drive.
  • Next, I install macOS, which takes a very long time.
  • When restoring from my backup in Setup Assistant, I select the contents of my home folder but avoid restoring applications. (Checkboxes control these choices when Setup Assistant asks what you want to restore.)
  • When restoration is complete, I force myself to download every app and utility and install a fresh copy.

In the rare case that an app I still use is no longer available for download, I can restore it from my backup, but I try hard to replace obsolete apps.

Most of the time, apps access their licenses and settings from my user account, so I can pick up using them where I left off. Installing each app fresh is tedious and cuts into my productivity for a week or two, but I appreciate the feeling of starting anew.

Is there a Level 3 clean install? Yes, but it would be a major pain, and I’ve never done one. For a Level 3 clean install, you would erase the boot drive, install macOS, and set up a new account. You would then manually copy just your data—no settings—into your new home folder. The hard part comes next. You must enter registration codes and reconfigure each app’s settings from scratch. For some apps, you’ll also have to sort through your home folder’s Library folder to find subfolders that contain essential data—like certain items in Application Support and your Mail folder. Don’t attempt a Level 3 clean install unless a Level 2 clean install hasn’t helped and you’re left fighting problems that occur in only your account.

Did It Work?

Although it may take months before I know for certain, the Level 2 clean install has apparently stopped the kernel panics. My iMac hasn’t suffered a single panic since I upgraded on 3 March 2023, while I had seven panics in the previous month.

The releases of macOS 13.3 and 13.3.1 also confirmed that a Level 2 clean install resolved the problem with authentication in the first boot after installing a minor update. Both of those updates installed fine, although I had to unlock my Time Machine drive and log in to Setapp after the macOS 13.3 update. No extra authentication requests appeared after the macOS 13.3.1 update.

Viva Ventura! But more to the point, you might need a Level 2 clean install to resolve some tricky problems, and others might succumb only to a Level 3 clean install. If you’re battling such recalcitrant gremlins, try deeper cleaning.

Julio Ojeda-Zapata

Using Smart Speakers While Temporarily Blind

Computer screens of all sizes have been a constant in our lives for decades. But what if you suddenly couldn’t use any of them? How would you maintain a semblance of your connected lifestyle? Could you make do with just sound?

I faced this dilemma in August 2022 after I lost control of my bicycle in downtown St. Paul, went flying, and slammed my helmeted head onto the asphalt with a sickening crunch. I suffered a nasty concussion. Soon afterward, my world began to go dark.

I was in the extreme throes of photophobia, a kind of light intolerance that caused ambient illumination to feel like knives being shoved into my eye sockets. This turned out to be a common concussion side effect, one of several I endured over a period of months. As my condition worsened, and I started wearing a sleep mask all the time to make the pain go away, I realized that I had effectively become blind.

I panicked at the thought of being unable to use my computing gadgets to tap into the information, interaction, and entertainment I normally consume all day. I wished I knew how to use VoiceOver, which has allowed visually impaired people to control Macs and iOS devices for years. But I didn’t feel up to mastering its complexities in my impaired state. I needed something faster and simpler, so I turned to my smart speakers with their built-in voice-controlled assistants.

At the time, my collection of these voice-controlled gadgets included a pair of HomePod mini speakers with Siri, a handful of Google’s Home and Nest speakers using Google Assistant, and an assortment of Amazon Echo and Meta Portal devices using Amazon’s Alexa. I have since replaced the HomePod minis with two of Apple’s recently released full-sized second-generation HomePods.

Julio's collection of smart speakers
Smart displays, from left to right: Meta Portal Go, Google Nest Hub Max, Meta Portal Plus, Amazon Echo Show 8. Smart speakers, from left to right: Apple HomePod, HomePod mini, Google Home Mini, Amazon Echo Dot, HomePod mini, Google Nest Audio, HomePod.

My use of the speakers until then had tended to be simple: setting timers, adjusting the thermostat, turning lights on and off, playing music via AirPlay, and so on.

Reasoning that these are screenless computers of a sort, I wondered to what extent they might fill in for the screened gadgetry I could no longer use. Could they be my HAL 9000 minus the psychosis, my Samantha from the film Her, my J.A.R.V.I.S. from the Marvel Cinematic Universe?

Spoiler alert: smart speakers are still pretty dumb.

I have organized this story into categories—podcasts, news, voice calls, music, books, and Web search—each denoting a computer use that I tried to replicate on my smart speakers, with varying degrees of success.

The Web search category was particularly important because I was desperate for detailed information about my ailment, and my family had only so much time and patience to search for me. The voice assistants were lousy substitutes for reasons I’ll explain.

All this happened months before AI chatbots like OpenAI’s ChatGPT became widely available, enabling human-like interactions that could unearth information more exhaustively, if not always accurately. The implications for smart speakers are clear.

When putting my smart speakers through their paces, I was keen to learn how my HomePods compared to the competition. They excelled at some things but fared poorly in other categories.

Amazon’s and Google’s smart speakers differ from HomePods in a key respect: they work autonomously to a larger extent, connecting directly to the Internet, while Apple’s speakers tend to function more as extensions of other Apple hardware, primarily the iPhone. This proved to be a crucial distinction at times.

Amazon and Google also differ from Apple in providing smart displays—smart speakers that have screens grafted onto them. Apple offers nothing of the sort, which is a shame because my Google Nest Hub Max and Amazon Echo Show 8 proved helpful in ways their screenless cousins could not.

What follows is similar to an earlier article of mine about smart speakers’ non-music uses (see “Beyond Music: Comparing the HomePod to Amazon Echo and Google Home,” 15 March 2018), but with updated details, and geared more specifically to my medical situation.

I am struck, a half-decade later, at how little smart speakers have changed.

Podcasts

Podcast aggregator apps on my iPhone, including Overcast and Pocket Casts, became difficult to use when I couldn’t see what I was doing. Happily, my smart speakers became more than adequate substitutes.

My needs were not complicated. With a list of my favorite podcasts in my head, I issued commands such as “Play MacBreak Weekly” to my Amazon and Google speakers. Such a request reliably launched the latest episode of Leo Laporte’s Apple-centric show via the Amazon Music and Google Podcasts services. Amazon lets you designate other services as playback defaults, such as TuneIn, iHeart, and Apple Podcasts; Google lets you assign Spotify to that task.

My HomePod experience was similar, but I often had to add to my voice command a reference to the podcast’s point of origin—“Play Planet Money from Apple Podcasts”—lest it look in the wrong place and give me a music track instead of a podcast episode.

With time to burn, I wanted to dig into podcasts’ back catalogs. The “Play previous episode” command on the Amazon and Google devices made this possible—to a degree. I could listen to the ten most recent episodes of any podcast via Amazon Music or Google Podcasts. This was a great way to catch up with my favorite shows.

Not so on the HomePod. “Sorry, I couldn’t go back,” Siri responded when I asked for older episodes. Worse, my requests sometimes triggered a “Sorry, there is a problem with Apple Podcasts” error. (To be fair, all the speakers regularly returned cryptic error messages for various reasons.)

I could access my podcasts on my HomePod in another way: by using Siri shortcuts built into podcast apps on my iPhone. In Overcast, saying “Overcast favorites” whisked me to the top of my preferred podcast episode list. Likewise, in Pocket Casts, a few words got me to any podcast or podcast filter, such as Favorites or In Progress.

However, I had committed few of these Siri shortcuts to memory when my curtain fell, so I did not lean heavily on this method. Besides, my HomePods sometimes insisted that I log into my iPhone before they would execute a command, which defeated the purpose.

News

I’m addicted to the news, which I often consume in audio format via National Public Radio and its Minnesota Public Radio affiliate. Much of that content is packaged as podcasts, which I just discussed.

But sometimes, I wanted to listen to the live audio feed of KNOW (MPR’s news channel). This took some experimentation with phrases such as “Play Minnesota Public Radio,” “Play MPR News,” and “Play KNOW” because not every speaker understood these word sequences in the same way. For instance, Google sometimes took “Play Minnesota Public Radio” to mean MPR’s popular-music station, The Current.

Regularly refreshed news briefings were also readily available. On my HomePod, “Give me the news” coughed up the latest hourly recording of NPR News Now. For those not into public radio, Apple provides other news-summary sources such as CNN and Fox News. Use a “Switch to…” voice command to set a news source default.

Amazon and Google users can get fancier by assembling a news briefing from various news sources (such as CNN, PBS, and the New York Times) that play in sequence. You can set up My News Channel in the Alexa app and News Sources in the Google Home app. In each case, a phrase such as “Give me the news” jumpstarts a briefing. Fortunately, I had set these up before my bicycle accident occurred.

Sadly, there isn’t anything quite like this in the Apple world—at least not without an irksome podcast “station” setup I describe in “Beyond Music: Comparing the HomePod to Amazon Echo and Google Home.”

My ultimate news goal during my convalescence was getting NPR One to work on one or more of my smart speakers, something I’d never needed to do before. “NPR One” refers to a customizable service (in mobile and Web app form) that strings stories from dozens of public-radio programs and podcasts into a never-ending stream that adapts to user preferences. For NPR junkies like me, NPR One is news meth.

I was disappointed that the service is native only to Amazon speakers, once users have installed the NPR Alexa skill (which I did with help from my family). It’s not called NPR One in this context, but it works just like NPR One when I trigger it with an “Alexa, play NPR” command.

While NPR has expended great effort adapting itself to smart speakers, NPR One hasn’t made the jump to Apple or Google speakers. In the Apple universe, NPR One works on Apple TV and CarPlay, but not on HomePod. In the Google world, it supports Android Wear (Google’s smartwatch OS) and Android Auto, but not my Google Home Mini or Nest Audio speakers.

Voice Calls

Apple came through for me when it came to making and receiving phone calls. This was unsurprising since I have long used my HomePods as a speakerphone to make and receive calls.

In this regard, my HomePods were among my greatest sources of comfort since—distraught from my isolation—I craved nothing more than the voices of my friends and family. Their numbers in Contacts were just a quick “Call” command away.

HomePods don’t execute such calls on their own, of course. A nearby iPhone facilitates calls, and FaceTime calling requires the presence of an iPad or an iPhone. But this setup proved so trouble-free that I never bothered with alternatives.

Amazon and Google also allow voice calling via their speakers, but the implementations are not as elegant and have limitations.

Amazon users can define up to ten mobile or landline numbers to call via Echo hardware. As the calls are placed, numbers are added to an Alexa-to-Phone list that can be viewed and edited in the Alexa app. Some kinds of numbers don’t work, including those outside the US, UK, Canada, and Mexico.

Amazon also allows one Echo speaker to call another—or a phone with the Alexa app—directly. More recently, Amazon has permitted AT&T, T-Mobile, and Verizon users to associate their cellular numbers with their Alexa accounts (via related skills) to make and receive hands-free calls. This approach roughly replicates HomePod voice calling.

Google users can make calls using their speakers in a few ways. One method, carrier calling, piggybacks on a Google Voice or Google Fi account. The other, Google-supported calling, requires no such account. Only Canada and US calls (excluding US territories) are supported.

Google’s Meet service also enables calling among Google speakers, as well as computers and mobile devices.

Music

When looking for a painless way to get music, I was delighted that, just months earlier, I had signed up for the Siri-only version of Apple Music (see “Apple Music Voice Plan Is a Bargain If You’re OK Using Siri,” 3 January 2022). It fit my situation so beautifully I did not bother much with alternatives.

As I noted in my review:

Since its release, I’ve been using the $4.99-per-month Apple Music Voice Plan and have found Siri to be surprisingly competent in fielding my music requests … Voice Plan has persuaded me that a Siri-centric music service could be a decent choice for certain kinds of people. To my surprise, I’ve enjoyed using Apple Music via Siri … and I may even decide to stick with it.

I now want to laugh at that “certain kinds of people” phrase. Blinded by photophobia, I’d become such a person, even though I had not meant anything medical when I wrote those words.

I did give my non-Apple speakers occasional workouts. I am a Pandora subscriber and have long designated it as the default for music playback on my Amazon and Google speakers instead of Amazon Music and YouTube Music. I was grateful that this was ready to go as I convalesced.

Apple Music is also available on the Amazon and Google speakers, but only the full version of the service that costs $10.99 per month, not the $4.99-per-month Voice Plan. Other music services work on both platforms, including Deezer, iHeart, and Spotify. Amazon provides nine options; Google has six.

HomePods support fewer music services, notably Pandora and Deezer, either of which can be designated as the music default instead of Apple Music.

Books

Until my photophobia, I seldom listened to audiobooks, preferring print and ebook formats. But I do own audiobooks, and I became eager for them when they were my only practical reading options.

Apple disappointed me. It’s impossible to queue up audiobooks in an Apple Books library on a HomePod via a voice command. Someone without my temporary disability could have used AirPlay on an iPhone, or other avenues that were not practical for me. This limitation applies to audiobooks purchased using Apple Books, as well as audiobooks synced over from my Amazon Audible account.

I got so frantic to read my Apple Books copy of The Road, Cormac McCarthy’s dystopian novel, that I resorted to blindly tapping at the Books app on my iPhone, with limited success. I did appreciate that my Siri request for “Play The Road” pulled up thrilling Gareth Coker music from the Halo Infinite soundtrack (a throwback to the 1999 Steve Jobs keynote when the Apple CEO revealed the Halo video game to the world), but it wasn’t what I wanted.

Amazon, by comparison, made audiobook playback a cinch. When I asked Alexa to “Read Sailing to Byzantium,” Robert Silverberg’s sci-fi novel fired right up. A while later, uttering “Continue my book” picked up where I had left off. Many more Audible with Alexa commands are available.

Playing Google audiobooks worked similarly but was less reliable. As with Amazon, my Google book library resides in the cloud and is accessible with intuitive commands—such as “Read Rise of the Dragons: Kings and Sorcerers” to get started with the intriguing fantasy title and “Read my book” to continue whatever book I was reading. But Google Assistant couldn’t find some of the titles in my library, and sometimes just told me, “Sorry, I didn’t understand,” when I asked for one of my books.

I regretfully had to sideline my usual source of ebooks and audiobooks, the Libby app, which functions as a borrowing portal to my public libraries (see “Skip the Library Trip, Borrow Ebooks and More at Home,” 14 September 2020). I could discern no obvious way (other than VoiceOver) to manipulate it in my sightless condition. Libby does not appear to have pre-baked Siri shortcuts, for instance. That’s a shame because it would have provided an exponentially larger audiobook selection.

Web Search

Access to Google search was by far the most critical category because, marooned in darkness, I was bursting with questions about my condition but lacked my usual ability to engage in deep research on my Mac or iPad. I would have been asking too much of my family to be my Google proxies for hours at a time.

So I lobbed at my speakers question after question about concussions and their side effects, which, in my case, also included brain fog, balance issues, and uncontrollable twitching all over my body. These were all signs of post-concussion syndrome, which is what I experienced when I did not recover from my concussion within a few weeks, as most people do.

It quickly became apparent that the speakers would be of little help. I needed the responses to be read out to me, which often happened, but I’d get only unsatisfying snippets of info from Wikipedia or medical websites. I had no way to follow up—requests for additional information were met with bafflement.

Frequently, my HomePods wouldn’t speak the results at all but instead instructed me to replicate my queries on my iPhone so I could read Web results—clearly impossible for me and utterly frustrating.

The Google speakers were the most helpful because they provided two-stage replies. They would answer my question and then suggest a related question that Google had often fielded from other users. Would I like to hear that? If I said yes, they’d give me the answer.

For instance, if I asked, “What is photophobia?” Google Assistant would reply and then helpfully volunteer that others had inquired, “Is there a cure for photophobia?”

This was fine, but again, I was being spoon-fed morsels of information with no way to go deeper or select or vet the sources.

I had a way to circumvent this limitation somewhat. Because Google’s Nest Hub Max has a screen and plays YouTube content, I could request videos about my research topic but just listen to the audio. Asking the device to “Play videos about post-concussion syndrome,” I got a bit of a crash course about the disorder. And I could endlessly go from video to video with a “Hey Google, next” command.

My other Alexa-equipped devices with screens, the Echo Show 8 and the Meta Portals, were much less helpful. On the Echo Show, I could ask Alexa for a video about a topic, but that bounced me via a Bing search to the DailyMotion website, where I had no voice navigation options and content that tended to stray off-topic after the initial video. On the Meta Portals, none of that worked at all.

But even Google’s Nest Hub Max proved unsatisfying because it forced me into sequential video voyages over which I had little control, sometimes subjecting me to a dozen or more unhelpful videos before I arrived at a useful one. On the Mac, I would just study the screen of video thumbnails and zero in on the most useful clips. Smart, these speakers are not.

I needed an assistant I could converse with naturally and guide through complex avenues of inquiry. I needed something like the Knowledge Navigator, an iPad-like device with a built-in assistant that then-Apple CEO John Sculley imagined in the late 1980s and demoed in concept videos. One showed the assistant as a bow-tied butler doing a professor’s bidding in varied and intricate ways.

SiriGPT?

Unfortunately, my accident and recovery occurred several months ahead of the artificial intelligence craze that is now sweeping the tech world courtesy of chatbots such as OpenAI’s ChatGPT, Microsoft’s Bing (which uses a version of ChatGPT), and Google’s Bard.

These chatbots differ from conventional search engines because they understand and generate natural language while acting on a dizzying range of complex requests based on the user’s context and intent. They can compose poetry, debug code, create vacation itineraries, suggest retail purchases, furnish recipes based on the available ingredients, summarize videos or text, translate languages, and more (see “ChatGPT: The Future of AI Is Here,” 9 December 2022).

When I give Bing commands like, “Write me a step-by-step medical plan to recover from post-concussion syndrome,” it returns detailed responses that would have delighted me when I was convalescing. What’s more, I can speak the requests because the iOS app version of Bing takes voice queries by default and reads back the answers. This would have been a miracle during my months of darkness.

You can see where I am going here. The assistants built into smart speakers need to work like ChatGPT when appropriate. This would have made my experiences with my speakers exponentially more useful. I would have been able to engage in open-ended conversations, with one query leading to another and another and another, much like ChatGPT does now.

Instead of the unsatisfying snippets of medical information the speakers fed me during my dark time, I could have extracted the contents of entire websites once I figured out the voice prompts. I might have become a concussion expert (assuming the data ChatGPT fed me was accurate, which it famously often is not). I would have been spared the anxiety of ignorance. Knowing what I do now, I might have healed sooner.

For example, I had operated on the erroneous assumption that the intense pain caused by my photophobia meant that light could physically damage me and that the only way to forestall such injury was to cower behind my sleep mask 24/7. A traumatic brain injury specialist eventually revealed to me that the pain did not equal damage, and said that the only way I’d get better was by intentionally exposing myself to light in increments until it no longer bothered me. When I did that, I improved quickly. If only I had that information earlier!

An AI chatbot might have made a difference in every category I’ve detailed, if properly integrated with Siri and the broader iOS and macOS ecosystems. It would have had no trouble helping me plumb the depths of podcasts’ back catalogs, play KNOW’s live feed no matter how I phrased the request, set up NPR One for me, play audiobooks I had purchased in Apple Books, and so on.

Dan Moren, writing for Macworld, makes a good case for such an improvement. So does Ed Hardy, writing for Cult of Mac, in an article titled, “Siri desperately needs some ChatGPT-like smarts.”

Enterprising iOS users are making this happen, up to a point, but Apple needs to step up with an official implementation.

I’m now back to using my speakers for the simple stuff like setting up tea timers, asking for the weather forecast, playing podcasts from my phone via AirPlay, adjusting the temperature on my Nest thermostat, and so on. I had tried to push the smart speakers to become more significant parts of my life, but there now seems little point to that until they go back to school for some artificial intelligence—they certainly don’t have much now.

Watchlist

Agen Schmitz

BBEdit 14.6.5

Bare Bones has issued BBEdit 14.6.5, a maintenance update. The release fixes a bug that caused the Enter Full Screen command to appear twice at the end of the View menu, works around the misbehavior of rust-analyzer when generating completions in some situations, addresses a regression in which XML documents were inappropriately treated as HTML5, adds a mechanism for resolving conflicting Spaces behavior, resolves an issue where pattern errors were not correctly reported in codeless language modules, and works around a macOS file system API behavior that would prevent Finder tags from being shown for files on remote server volumes. ($49.99 new, free update, 23.6 MB, release notes, macOS 10.15.4+)

Agen Schmitz

Fantastical 3.7.9

Flexibits has issued Fantastical 3.7.9, enabling Fantastical Openings to be configured to request or require a phone number when booking meetings. The calendar app now includes the amount of time left in the current event in the menu bar, adds support for detecting PracticeBetter meetings, improves time zone name display in Settings, ensures events ending at midnight in Day or Week view no longer visually extend slightly past midnight, resolves an issue that created an extra event when splitting a recurring event series, and resolves a crash associated with toggling a Google Account. ($56.99 annual subscription from Flexibits and the Mac App Store, free update, 68.4 MB, release notes, macOS 11+)

Adam Engst

macOS Monterey 12.6.5 and Big Sur 11.7.6

Apple has released macOS Monterey 12.6.5 and macOS Big Sur 11.7.6 to patch a security vulnerability that could allow arbitrary code execution with kernel privileges (for additional speculation, see “iOS 16.4.1, iPadOS 16.4.1, and macOS 13.3.1 Address Serious Security Vulnerabilities, Fix Bugs,” 7 April 2023). This vulnerability is being exploited in the wild, so we recommend updating immediately. If you notice any problems, please let us know in the comments. (Free, various sizes, macOS 12 and macOS 11)

ExtraBITS

Adam Engst

GM Plans to Phase Out CarPlay in Future EVs

Reporting for Reuters, Joseph White writes:

General Motors plans to phase out widely-used Apple CarPlay and Android Auto technologies that allow drivers to bypass a vehicle’s infotainment systems, shifting instead to built-in infotainment systems developed with Google for future electric vehicles.

Well, that’s a terrible idea. Our current cars—a 2015 Subaru Outback and a 2015 Nissan Leaf—predate CarPlay, and their onboard infotainment systems are dreadful. We’re starting to look at replacing the gas-guzzling Outback with a new electric car, but CarPlay is now table stakes. The lack of CarPlay is a big reason we aren’t even considering a Tesla, along with the lack of local service options and a distaste for anything associated with Elon Musk.

Our situation aside, cars have much longer lifespans than phones, so building the smarts into the car—even with updates, which automakers do poorly—guarantees that the technology will become outdated. That will remain true until cars are much closer to full self-driving. Even then, I wouldn’t be surprised if Apple is pondering how to incorporate such capabilities into CarPlay for vehicles with the necessary sensors.

Adam Engst

GQ on Tim Cook: Calm, Curious, and Not Normal

Zach Baron’s profile of Tim Cook at GQ is a fascinating read:

At a moment dense with pathological tech founders who log on daily to pontificate about the collective future of humankind, Cook does not log on all that much. He does not move fast and break things. His even calmness stands as an implicit rebuke to the chaos agents—Elon Musk, Mark Zuckerberg, and so on—who often get called to testify in Congress alongside Cook about the increasingly uncertain state of tech in this country. In clubby Silicon Valley, where it appears at times like people are battling to be the first in line on the venture capital–powered spaceship that will carry the Patagonia-clad elite away from the rest of us, Cook seems to side with the rest of us.

Tim Cook has been at Apple for 25 years and has been CEO since 2011. Despite the regular online refrains of “If Steve Jobs were still in charge…,” Apple under Cook has become the world’s most valuable company, released the Apple Watch and AirPods, and maintained focus on products, services, and values—and yes, making money. While some level of drama is inevitable for a company of Apple’s size and stature, Cook remains a calming influence. And yes, sending him email does often trigger a response from the Corporate Executive Relations team.