Image by Apple
Apple Announces Siri Privacy Reforms
The Apple world was stunned last month when a whistleblower revealed that Apple contractors were listening to recorded Siri interactions, a process Apple calls “grading” and that is intended to improve Siri interactions (see “Apple Workers May Be Listening to Your Siri Conversations,” 29 July 2019). It’s a common industry practice—Amazon, Google, and Microsoft all do it to improve their respective voice assistants—but Apple users had assumed that Apple’s stance that privacy is a fundamental human right would preclude such clearly creepy behavior.
Apple quickly suspended the program and told TechCrunch that it would be making changes (see “Apple Suspends Siri’s “Response Grading” Eavesdropping,” 2 August 2019). Now Apple has apologized formally, saying:
As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize.
The company also said that it plans to resume the grading program once updated versions of its operating systems become available within the next few months, with the following changes:
- The resumed program will be opt-in only.
- Apple will no longer retain audio recordings from Siri but will continue to rely on computer-generated transcripts to improve Siri.
- Only Apple employees will be allowed to listen to Siri audio samples.
- Apple will “work to delete” any recordings of inadvertently triggered Siri interactions.
In a support note titled “Siri Privacy and Grading,” Apple also pointed out that less than 0.2% of Siri requests were reviewed under the grading program and that Siri uses a random identifier while processing your data, one that is never tied to an Apple ID. Apple said that Siri uses as little data as possible to answer queries. For instance, when you ask Siri to read your messages, the text of those messages is never passed to a server.
Apple is doing the right thing here, but it’s regrettable that it took a whistleblower to prompt this change. That fact alone damages Apple’s privacy-focused image. It’s also unfortunate that Apple isn’t taking Adam Engst’s suggestion to empower Siri users by letting them make their own corrections—see “Why Can’t Users Teach Siri about Its Mistakes?” (14 August 2019). And there’s a human cost to Apple’s privacy reforms: at least 300 contractors have lost their jobs as a result.
I would guess that the “listening” is a random sample out of millions if not billions of Siri voice streams. We users are always the beta testers finding bugs in software. Why not Siri? As long as Apple is using data streams without any identifying metadata attached, and improving Siri (and does it need improving!), why should I care?
Apple shouldn’t have been doing this, and it would be nice if they followed Adam’s suggestion of automating it so users could teach Siri. However, the audio snippets are anonymous. To be honest, this whole thing smells of a strategy of making Apple take the blame for what every other company also does…and does more egregiously. It looks like they found a way to make it so that whenever any user thinks about any voice assistant snooping on their audio, that user instantly thinks “Apple and violation of privacy.” It obscures the fact that most of the other major privacy-violating companies are doing the same thing, only not anonymized and maybe even aggregated with other data. Recent reports suggest that, like Android, Alexa exists primarily as a surveillance tool.
De-anonymization of large data sets is quite practical, as has been known for some time. Audio snippets would be even simpler, as the content is likely to leak significant information (names, locations, phone numbers, appointments) without even considering available technologies like voiceprint matching.
Well, Apple has painted a large target on its back by saying that privacy is a fundamental human right. So it feels reasonable to point out when the company is failing to live up to that stance. Apple deserved the criticism and has apparently responded appropriately.
And there has been plenty of coverage of Amazon and Google doing exactly the same thing, some of it even here. Those companies also deserve the criticism and will hopefully adjust their practices as well.
I don’t disagree with your assessment, Adam. They tout privacy, they screwed up, and they should be held accountable, and you outlined a much better way they could choose to proceed. I’m not sure I agree with the other statements about de-anonymization, though. While that may be true with genetic data and many other forms of data, an isolated audio snippet is unlikely to present a de-anonymization risk, depending on what metadata is associated with it. I suspect that Apple likely had little to none associated with it…but I don’t know that.
My main point was more about the suspicious timing of this announcement and how Apple’s enemies seem to have found a way to muddy the waters around privacy, making it so that Apple (the company that seems to care at least a little about privacy) is on everyone’s lips, rather than themselves, when the risks of digital assistants are mentioned. Little that Apple does at this point will change that in the minds of the general public, and I find that unfortunate and unfair. I find it unfair because I believe that what those other companies do is vastly more invasive and intentional than Apple’s mistake.
The original report came from a whistleblower talking to the Guardian, so I don’t know that there’s any timing in play. The Guardian has no reason to sit on the story and risk losing the scoop. And, with Apple making major announcements a few times per year, nearly any negative news is likely to happen sometime around an Apple announcement.
The question of whether Apple gets more negative press than other tech giants with worse privacy records is an interesting one. I’d love to see data on that. I suspect we feel that Apple gets singled out through selection bias—we’re paying attention to Apple-related sites and Apple in the mainstream news. But there are also vast ecosystems of sites that cover Android and Amazon, and we may simply not pay as much attention when they’re criticized in those spots or in mainstream media.
Google has now tightened the privacy surrounding its voice assistant grading program too.