You may have heard about the new kid on the AI block: the engagingly conversational chatbot ChatGPT. Former Apple developer David Shayer explores just what ChatGPT is and, more importantly, what it’s not. Apple quietly released iOS 16.1.2 with better Crash Detection, and Microsoft announced that Office updates will now require macOS 11 Big Sur or later. Adam Engst rounds out the issue with a look at Apple’s forthcoming Advanced Data Protection for iCloud, which provides end-to-end encryption for more iCloud data types at the cost of eliminating Apple’s capability to recover your account if you forget all your passwords. Notable Mac app releases this week include 1Password 8.9.10, Pixelmator Pro 3.2.2, Fantastical 3.7.5, Boom 3D 1.4, EagleFiler 1.9.10, and Tinderbox 9.5.
This week, I have two quick things to share that don’t warrant an article: an explanation of why TidBITS members using Apple Pay may see incorrect credit card expiration dates and the news of Apple Music Sing, which fills me with dread.
Apple Pay Transactions Get Weird Expiration Dates
Our support wizard, Lauri Reinhardt, recently solved a minor mystery that had confused a few TidBITS members. When people who had used Apple Pay for their TidBITS memberships received email from our system about a failed renewal payment, the credit card expiration date listed was incorrect. Since one of the most likely reasons for a failed payment is an expired card, the incorrect date caused some consternation.
Lauri discovered that Apple Pay’s tokenization process prevents Stripe, our payment processor, from knowing the expiration date for the actual card. Instead, Stripe—and the Paid Memberships Pro plug-in that we use to manage TidBITS accounts—displays a seemingly random expiration date from the tokenized card. The problem is purely cosmetic and doesn’t block transactions in any way.
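To see why the displayed date can differ from the one printed on your card, here's a toy sketch of payment tokenization. This is purely illustrative, with made-up names and values; it is not Apple Pay's or Stripe's actual protocol. The point is only that the token carries its own metadata, including its own expiration date.

```python
# Toy illustration of payment tokenization: the processor never sees the
# real card, only a device-specific token with its own metadata.
# Hypothetical sketch only -- not Apple Pay's actual protocol.
import secrets

class TokenVault:
    """Stands in for the card network's token service."""
    def __init__(self):
        self._tokens = {}

    def tokenize(self, card_number, card_expiry):
        # The token gets its own number and its own expiration date,
        # unrelated to the date printed on the plastic card.
        token = {
            "number": "tok_" + secrets.token_hex(8),
            "expiry": "12/2030",  # arbitrary token lifetime, not the card's
        }
        self._tokens[token["number"]] = card_number
        return token

vault = TokenVault()
token = vault.tokenize("4242 4242 4242 4242", "08/2025")

# A payment processor sees only the token, so any expiration date it
# displays is the token's, not the card's.
print(token["expiry"])  # 12/2030, not 08/2025
```

So when a renewal-failure email lists a strange date, it's the token's date, not your card's.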
The practical upshot is that if you used Apple Pay to pay for your TidBITS membership and you see an incorrect expiration date associated with your card data in your account information, don’t worry about it unless you also get a notification that a transaction failed. In that case, try a different card or contact Lauri at [email protected] so she can help you figure out what to do. My apologies for any confusion, and thanks for supporting TidBITS!
This date quirk is the first downside of using Apple Pay instead of a straight credit card transaction that I’ve seen. In all other ways, Apple Pay’s tokenization of credit card data is a good thing because it significantly increases security.
Apple Music Adds Karaoke
As an entirely non-musical person, little fills me with more dread than karaoke. The occasional group rendition of “Happy Birthday” is as close as I get to singing. Happily, I’ve never found myself in a situation where karaoke was happening, much less where I would be expected to perform. I hope to keep it that way.
You can thus imagine my enthusiasm level for Apple Music Sing, a forthcoming feature that enables Apple Music subscribers to sing along with “tens of millions of the world’s most singable songs.” I’m sure lots of people will like Apple Music Sing when it becomes available later this month, and if you’re among them, I hope you have fun. I’ll continue to leave the singing to the professionals.
Last week, Apple released iOS 16.1.2 to improve compatibility with wireless carriers and provide Crash Detection optimizations for the iPhone 14 lineup. (Perhaps it no longer goes off in amusement parks; see “Roller Coasters Can Trigger Crash Detection in the iPhone 14 and Apple Watch,” 10 October 2022.) The update also contains security fixes that Apple promises to document soon, in case you’re looking for some light reading.
Should you upgrade? I can’t see any reason to delay, especially if you’re planning to be in a car anytime soon.
No other Apple operating systems were updated, though I wonder if the next update to watchOS 9 will include the Crash Detection optimizations.
(In the original version of this article, I asked if readers found coverage like this valuable, even when Apple notifies all users and when I don’t know anything more about the update than Apple publishes in the release notes. The poll results and subsequent comments strongly favored continuing, so I’ll keep calm and carry on.)
If you use Microsoft Office in macOS 10.15 Catalina, there’s yet another reason beyond the lack of Apple security updates to consider upgrading soon. Microsoft has announced that the company’s productivity suite will receive updates only if your Mac is running at least macOS 11 Big Sur. That applies to Office for Mac 2019, Office for Mac 2021, and Microsoft 365 (see “Microsoft Rebranding Office to Microsoft 365,” 17 October 2022). macOS 12 Monterey and macOS 13 Ventura are fully supported.
The October 2022 update (16.66) was the last build of Word, Excel, PowerPoint, Outlook, and OneNote for users running Catalina. If you continue to use Catalina, your Office apps will still work fine but won’t receive enhancements, bug fixes, or security updates. At some point, that may become a problem.
Upgrading is a pain, I know, but it’s a necessity of modern life. Some of the supporting details in “Why You Should Upgrade (On Your Own Terms)” (4 September 2015) may seem dated, but its key points are as valid now as they were 7 years ago.
Critics of cloud services often point—with a bit of finger-wagging—at the fact that cloud-stored data is theoretically vulnerable to being stolen by bad guys and handed over in response to government requests. That’s true even if the data is encrypted in transit to and from user devices and at rest on the company’s servers as long as the company maintains the encryption key necessary to decrypt the data.
The solution is conceptually simple—allow the user to generate and control all the encryption keys, a technique called end-to-end encryption. When that’s true, the data is unreadable to anyone other than the user. Risks of eavesdropping, theft, and government overreach are greatly reduced. However, the user then has the ultimate responsibility to remember and protect those keys, and if something goes wrong, there is absolutely no recourse—without the appropriate key, the data is effectively gone. And yes, that happens all the time, much as with the people who forget their crypto wallet password and lose millions in funny money.
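The tradeoff can be demonstrated in a few lines of code. This is a deliberately simplified toy (real systems use vetted ciphers, not an XOR keystream): a key derived from the user's secret encrypts the data, the server stores only ciphertext it cannot read, and a forgotten secret means the data is unrecoverable.

```python
# Toy sketch of end-to-end encryption's tradeoff. Illustrative only --
# real systems use vetted ciphers, not this XOR toy.
import hashlib

def derive_key(secret: str, length: int) -> bytes:
    # Derive an encryption key from the user's secret; the server never sees it.
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), b"salt", 100_000, dklen=length)

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

plaintext = b"my private photo library"
ciphertext = xor(plaintext, derive_key("correct horse", len(plaintext)))

# The server holds only ciphertext -- unreadable without the user's key.
assert ciphertext != plaintext

# The right secret recovers the data...
assert xor(ciphertext, derive_key("correct horse", len(ciphertext))) == plaintext

# ...but a forgotten or wrong secret yields gibberish, with no recourse.
assert xor(ciphertext, derive_key("wrong guess", len(ciphertext))) != plaintext
```

That last line is the whole story: no key, no data, no one to call.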
For some time, Apple has provided end-to-end encryption for 14 of the 26 types of iCloud data, including Health data, Passwords and Keychain, Apple Card transactions, and more. You may not realize that you control the encryption keys for those data types because Apple has integrated them into the overall security infrastructure underpinning its devices, operating systems, and online services. That’s why it’s so important to remember your iPhone/iPad passcode and Mac login password.
But for the twelve remaining types of iCloud data—iCloud Mail, Contacts, Calendars, iCloud Backup, iCloud Drive, Photos, Notes, Reminders, Safari Bookmarks, Siri Shortcuts, Voice Memos, and Wallet passes—Apple stores the encryption keys in Hardware Security Modules in its data centers.
For iCloud Mail, Contacts, and Calendars, the need to interoperate with external email, contacts, and calendar systems requires that Apple manage the encryption keys. For the other nine, Apple’s control of the encryption keys enables the company to recover data for users who forget their passwords and have no fallback. (In such a situation, the end-to-end encrypted data types are lost.) But, of course, it also theoretically leaves that data vulnerable to hackers and law enforcement. iCloud Backup, which includes the encryption key for the otherwise end-to-end–encrypted Messages in iCloud, and Photos are the main data types to worry about in that list.
Very soon, those concerned about Apple holding their encryption keys will have some relief.
Enter Advanced Data Protection
Apple has announced Advanced Data Protection for iCloud, a major upgrade to iCloud security that provides end-to-end encryption for the nine data types previously mentioned. Advanced Data Protection is optional—you must explicitly enable it—because it prevents Apple from recovering your data. That seems like a reasonable tradeoff because the people who are the most likely to forget their passwords and need recovery help from Apple are probably less likely to have problems with hackers or law enforcement.
When you set up Advanced Data Protection, you’ll be prompted to set up alternate recovery methods, such as an account recovery contact or a printed recovery key, and you must set up at least one. Apple isn’t going to make it easy for you to lose your data.
Luckily, it’s not a one-way street. If you ever decide that you’d prefer Apple’s recovery help to end-to-end encryption of things like iCloud Backup, you can turn Advanced Data Protection off with no data loss.
There are several technical consequences associated with enabling Advanced Data Protection beyond it not protecting iCloud Mail, Contacts, and Calendars:
- iCloud.com Web access: Turning on Advanced Data Protection automatically disables Web access to your data at iCloud.com because Apple no longer holds the keys needed to decrypt it on its servers. You can turn Web access back on using a trusted device, but every visit to iCloud.com requires authorization from a trusted device, the connection passes only normally accessible iCloud.com data (not Health, for instance), and access lasts only an hour. If you make heavy use of iCloud.com, Advanced Data Protection may be burdensome.
- Data sharing: When you share notes, reminders, and iCloud Drive folders or use iCloud Shared Photo Library, all the data remains end-to-end encrypted and available only on the participants’ devices, as long as everyone involved in the sharing has Advanced Data Protection turned on. Sharing with anyone who isn’t using Advanced Data Protection, or using the “anyone with a link” option when sharing, stores the content on Apple’s servers under Apple-controlled keys.
- Collaboration: The iWork collaboration capabilities and the Shared Albums feature of Photos don’t support Advanced Data Protection. The real-time collaboration in iWork requires server-side mediation to coordinate document changes, so Apple has to maintain those keys. Since Shared Albums can be publicly shared on the Web, Apple also has to manage keys for that data.
- Third-party apps: Developers whose apps share data via iCloud must mark CloudKit fields as encrypted to have them protected by Advanced Data Protection, and it automatically protects all CloudKit assets.
- Metadata: For iCloud interface and optimization reasons, Apple retains the keys for some metadata associated with iCloud data types that are otherwise protected by Advanced Data Protection. That includes, for instance, the name, model, color, and serial number of the device associated with each backup and a list of apps and file formats included in the backup. Apple says it is working to include more metadata in Advanced Data Protection.
Advanced Data Protection Requirements and Timing
To enable Advanced Data Protection, your account must have two-factor authentication enabled for your Apple ID and a passcode or password set on your devices. Apple says that over 95% of active iCloud accounts use two-factor authentication. (And if you don’t have a passcode on your iPhone for some unfathomable reason, set one immediately. I’m looking at you, Alex.)
More problematic is Advanced Data Protection’s requirement that all devices where you’re signed in with your Apple ID must be updated to iOS 16.2, iPadOS 16.2, macOS 13.1, tvOS 16.2, watchOS 9.2, or the latest version of iCloud for Windows. That’s because older versions wouldn’t know to maintain newly created keys on the device and would try to upload them to Apple’s servers in what the company calls “a misguided attempt to repair the account state.” As a result, you’ll have to sign out of iCloud on any device too old to upgrade to the necessary operating system version. (This requirement may be a deal-breaker for me since I have numerous elderly devices that remain in some level of use.)
Unsurprisingly, Advanced Data Protection is available only for regular Apple IDs. Managed Apple IDs (for employees to use for business purposes or instructors and students to use for educational purposes) and child accounts can’t enable the option.
Apple says Advanced Data Protection for iCloud is available now for those testing betas of Apple’s operating systems and will be available for all US users by the end of 2022. It will start rolling out to users in the rest of the world in early 2023 and may be available worldwide by the end of 2023.
Downstream Effects of Advanced Data Protection
In an interview with Joanna Stern of the Wall Street Journal, Apple’s Craig Federighi said that the global release would include China, and he hadn’t heard complaints from the Chinese government, which generally frowns on technology that prevents state surveillance. It doesn’t seem inconceivable that China allowed Apple to provide Advanced Data Protection in exchange for a China-specific tweak in the recent iOS 16.1.1, which limits AirDrop from being accessible to “Everyone” for more than 10 minutes (AirDrop was being used by protesters). Betas of iOS 16.2 include the same change for all other iPhone users, which, while nominally a loss of functionality, would prevent random creeps from using AirDrop to send nudes to nearby iPhone users.
Finally, you may also remember the furor surrounding Apple’s botched 2021 proposal to scan on-device images for CSAM—child sexual abuse material. Those perturbed by the privacy implications of Apple’s CSAM-detection proposal called instead for the company to live up to its privacy promises and implement end-to-end encryption for iCloud Photos. Advanced Data Protection does just that, raising the question of the status of Apple’s CSAM plans and prompting an update from Apple.
It seems that I was correct with my second suggestion in “Apple Delays CSAM Detection Launch” (3 September 2021)—that the delay was “a face-saving way for Apple to drop the technology like the hot potato it became.” Apple told Wired (emphasis mine):
After extensive consultation with experts to gather feedback on child protection initiatives we proposed last year, we are deepening our investment in the Communication Safety feature that we first made available in December 2021. We have further decided to not move forward with our previously proposed CSAM detection tool for iCloud Photos. Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all.
Apple also said it wasn’t ready to announce a specific timeline for expanding its Communication Safety feature, but it is working on enabling Messages to detect nudity in transmitted videos when protection is enabled.
Overall, Advanced Data Protection seems like a major positive move on Apple’s part. Once it ships, I’ll give it a try and see what the practical effect is on old devices that can’t run the latest operating systems.
Artificial intelligence (AI) has progressed in fits and starts for 70 years. It’s one of those technologies, like commercial fusion power, that’s always 20 years away. Now we may actually be on the cusp of an AI revolution. But it’s not the one you’re expecting.
We’ve become accustomed to machine learning (ML), where a neural network is trained on a large number of samples until it can recognize items on its own. Google and Apple use ML to identify objects in your pictures. Search for “mountain” or “dog” in your pictures, and your phone will find them, not because you’ve tagged your pictures, but because Photos has been trained to recognize images containing those items. Text-to-image systems like Stable Diffusion are trained with millions of pictures and can generate an image based on a text description, like “a penguin eating a donut on a tropical beach.” While useful and certainly thought-provoking with respect to its exploitation of and commentary on creativity, no one thinks of this as human-level cognition.
On the other extreme, Artificial General Intelligence (AGI) is the name given to a fully sentient computer system of at least human-level intelligence. This is the stuff of science fiction movies, from HAL in 2001: A Space Odyssey to the virtual assistant Samantha (voiced by Scarlett Johansson) in Her. To be clear, AGI doesn’t exist, and no one knows how to build an AGI yet.
Let’s Have a Chat
But something new and game-changing just shipped: ChatGPT. It comes from the OpenAI research and development company, which has been working on it for years. ChatGPT is currently free—you can create an account and talk to it. It’s built on version 3.5 of the deep-learning system GPT (Generative Pre-trained Transformer, glad you asked?), which uses AI to produce text that reads as though a human wrote it.
OpenAI’s technology, while complex, is based on the simple idea of pattern matching and extension. If you feed the pattern-matching engine every work of Shakespeare, you can then ask it to write more Shakespeare, and it will generate new text in the style of Shakespeare. (Much easier than rounding up an infinite number of monkeys and typewriters.) If you feed GPT everything available on the Internet, you can give it a prompt about anything, and it will generate something interesting. That’s more or less what OpenAI has done with ChatGPT in chatbot form, and it’s tremendously impressive.
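The "pattern matching and extension" idea can be shown with a toy next-word model: count which word follows which in the training text, then extend a prompt with the likeliest continuation. GPT's transformer architecture is vastly more sophisticated, but the core move is the same: predict the next token from statistics over everything it has read.

```python
# Toy next-word model: count word-to-word transitions in training text,
# then extend a prompt one most-likely word at a time. A crude stand-in
# for what a large language model does at enormous scale.
from collections import Counter, defaultdict

def train(text):
    words = text.split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1  # tally: word a was followed by word b
    return following

def extend(model, word, n=4):
    out = [word]
    for _ in range(n):
        if word not in model:
            break
        word = model[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return " ".join(out)

model = train("to be or not to be that is the question")
print(extend(model, "to"))  # "to be or not to"
```

Feed a model like this all of Shakespeare (or the whole Internet) and wrap a chat interface around it, and you have the shape of ChatGPT, if not the scale.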
Consider an example from Ben Thompson of Stratechery, who asked ChatGPT whether Thomas Hobbes believed in the separation of powers. It replied:

Yes, Thomas Hobbes believed in the concept of separation of powers. In his book “Leviathan,” Hobbes argued that a strong, centralized government is necessary to maintain order and prevent conflict in society. However, he also believed that a single ruler or group should not have complete control over all aspects of government. Instead, he advocated for the separation of powers into different branches, with each branch having specific responsibilities and powers.
This is fascinating. It’s concise, well-written English.
It’s also wrong. As Thompson points out, John Locke believed in separation of powers, Thomas Hobbes did not. They’re often discussed together, so ChatGPT probably ingested many essays contrasting Locke, Hobbes, and separation of powers and associated the general concepts without attributing the details correctly.
It would be a mistake to think that ChatGPT “knows” who Locke and Hobbes are and associates a set of beliefs with each man. That would imply that it maintains a database of knowledge, and OpenAI could correct the database to associate the right views with each man. ChatGPT doesn’t work like that. It’s a statistical model of what the likely next word or sentence is, based on what came before. It does have a feedback mechanism, but it’s designed more to train the model to go in a different direction in response to a question than to correct a particular wrong fact.
In a podcast, Benedict Evans suggested thinking of ChatGPT as a new kind of Internet search engine. When you ask Google Search a question, it returns links to the Web pages most likely to contain relevant information. If you ask ChatGPT a question, it summarizes everything it’s read about that topic on the Internet.
However, where a search engine has multiple ways of ranking the quality of the pages it returns, ChatGPT reflects whatever it finds in its training material, warts and all, presumably with a bias toward the most common groupings of words and sentences. Since that largely comes from the Internet, and thus from human beings, it contains all the ugly things humans do. OpenAI has tried to keep ChatGPT from reflecting that bigotry. It won’t take the bait if you ask it obviously racist questions.
Implications of Summarizing the Internet
Users quickly found ways around those guardrails, though, which shows that ChatGPT is complex enough that there’s no simple way to tell it “don’t be evil.” Again, it has no database of knowledge in which OpenAI could label certain ideas as “bad” and tell ChatGPT to avoid them. It’s a stochastic prediction model that just picks the next words based on statistical training.
Another interesting trick people have discovered is asking ChatGPT to generate or run computer programs. A simple example is asking ChatGPT to simulate a Unix shell. You type in a shell command like ls, and ChatGPT responds exactly as a Unix shell would. (OpenAI has since tweaked ChatGPT not to respond to Unix commands.)
It’s easy to think that since ChatGPT is actually a computer program, it’s simply running this command for you, like a real Unix shell. This is wrong. It’s going through millions of pieces of training data showing how Unix shells respond, and it’s returning its best guess at the correct text. It has no understanding that a Unix shell is a computer program, while Shakespeare was a person.
Similarly, Thompson asked ChatGPT what 4839 + 3948 – 45 is. It said 8732, and when Adam Engst tried while editing this article, it answered 8632. Both answers are wrong—it should be 8742. Again, ChatGPT may be a computer program, but it isn’t doing any arithmetic. It’s looking through its huge text model for the most likely next words, and its training data was both wrong and inconsistent. But at least it showed its work!
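For contrast, here's what it looks like when a computer program actually does the arithmetic rather than predicting plausible-sounding digits:

```python
# Unlike a language model guessing the next token, this computes the answer.
result = 4839 + 3948 - 45
print(result)  # 8742
```

A one-line calculation that any calculator gets right every time, and that ChatGPT got wrong two different ways.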
This is why even though ChatGPT can generate computer code, I wouldn’t use it in a real program. Its answers are not necessarily correct; they’re just based on Internet training data. It’s likely to return errors both subtle and blatant. Unlike human language, computer programs need to be 100% correct. That’s why Stack Overflow banned ChatGPT-generated code.
While AI is unlikely to put programmers out of work anytime soon, it is coming for many other professions. ChatGPT-inspired systems will undoubtedly take over from today’s weak support chatbots and start replacing human customer-support representatives. The writing is on the wall for higher-end jobs too. Research assistants of all kinds may be replaced by programs that can summarize the current state of knowledge on any subject, at least what’s available on the Internet. “Content farm” websites already use the likes of GPT to auto-generate text—when will it be good enough for mid-tier sites writing about sports, movies, celebrities, and any other topic where speed, quantity, and low cost are more important than a human journalist’s quality, accuracy, and voice? Will lawyers lose simple bread-and-butter contract work to AIs? (Answer: yes.)
As blogger Kevin Drum noted:
A world full of lawyers and professors and journalists who are able to calmly accept the prospect of millions of unemployed truck drivers will probably be a wee bit more upset at the prospect of millions of unemployed lawyers, professors, and journalists.
ChatGPT really is the leading edge of a massive wave of AI that’s about to wash over society. You can see in the many dialogues posted online that it’s pretty good at answering questions and being pleasantly conversational.
Evolution of Chatbots and Societal Implications
There’s still plenty of room to improve, however. Currently, ChatGPT doesn’t have much “state”—that is, it doesn’t really remember what you’re talking about from question to question. If you were to ask it, “When was Super Bowl 50?” it may reply, “2016.” If you then ask, “Who won?” it would have to retain state information from the last question to realize you’re asking who won that particular Super Bowl.
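A minimal sketch of what retaining state means, using the Super Bowl example above. The facts dictionary and matching rules here are hypothetical stand-ins; the point is only that answering "Who won?" requires remembering the topic of the previous question.

```python
# Toy sketch of conversational state: remember the entity from the last
# question so a bare follow-up like "Who won?" can be resolved.
# Hypothetical data and matching rules, for illustration only.
facts = {
    "Super Bowl 50": {"year": "2016", "winner": "Denver Broncos"},
}

class Chat:
    def __init__(self):
        self.topic = None  # the state a stateless model lacks

    def ask(self, question):
        q = question.lower()
        for name in facts:           # did the question name a known topic?
            if name.lower() in q:
                self.topic = name    # remember it for follow-ups
        if self.topic is None:
            return "What are we talking about?"
        if "when" in q:
            return facts[self.topic]["year"]
        if "who won" in q:
            return facts[self.topic]["winner"]
        return "I don't know."

chat = Chat()
print(chat.ask("When was Super Bowl 50?"))  # 2016
print(chat.ask("Who won?"))                 # Denver Broncos, via the stored topic
```

The second question contains no mention of the Super Bowl at all; only the remembered topic makes it answerable.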
Retaining state is easy for humans but hard for computers. That’s especially true if you bring up a conversation from a few days ago that involves multiple people and places, and you refer to “him” and “her” and “there” rather than actual names, as humans tend to do. If you’re planning a trip to Paris next week, and you ask your spouse, “Have they started packing?” (in reference to your kids), your spouse will know what you mean, whereas a computer won’t. But this shortcoming will likely be addressed soon. Our AIs will have a persistent memory of the people and events in our lives.
The next step will be giving ChatGPT a voice, integrating it with Siri, Alexa, or Google Assistant, so we can just talk to it. The state of the art in computer-generated voices is already good and will continue to improve until it sounds sufficiently like a person that it’s not immediately obvious you’re talking to a computer. Celebrity voices might even become popular, so you could have Google Assistant sound like Scarlett Johansson for a nominal fee. (Google already has celebrity voices in its Waze GPS navigation app.)
Once there’s a voice interface, people will start having long, private conversations with their AIs. They will develop an emotional relationship (which, to be fair, people have been doing since the original ELIZA chatbot debuted in 1966). No matter how often computer scientists tell people that an AI is not intelligent, that it’s just a statistical language model, people will ascribe feelings, desires, and sentience to it.
There will be good and bad uses for this technology. Older people and shut-ins will have someone to talk to and keep them company. Those with autism might find it an untiring conversational companion, something we’ve already seen with Siri. Small children could develop an unhealthy co-dependence on their friendly AI, talking to it every night as they go to bed, unable to understand why mommy and daddy won’t let AI chat with them at the dinner table.
Say hello to your new best friend. The privacy implications alone are enough to give George Orwell nightmares.
Computer scientists and philosophers have pondered for years if it’s possible to create a conscious computer program. We may get programs that almost everyone thinks are intelligent and talks to as if they’re intelligent, even though programmers can show there’s nothing intelligent going on inside them. It’s just complex pattern-matching. The computer scientist Edsger Dijkstra said, “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”
The societal implications of everyone having an electronic best friend, who they can talk to privately, whenever they want, about whatever they want, as long as they want, are hard to predict. Who knows what this will do to our ability to communicate with each other? But if you think parents complain that kids spend too much time texting with one another and playing video games with (human) friends online, you ain’t seen nothing yet.