
Apple Explains Pullback from CSAM Photo-Scanning

Two years ago, Apple first announced a photo-scanning technology aimed at detecting CSAM—child sexual abuse material—and then, after receiving widespread criticism, put those plans on hold. Read “Apple Delays CSAM Detection Launch” (3 September 2021) for our last article, which links to the rest of our coverage. In December 2022, Apple told Wired that those plans were dead (something I missed at the time) but gave no indication of why it had shelved the proposal.

Now, in response to a child safety group, Apple has explained its reasoning, with the company’s director of User Privacy and Child Safety, Erik Neuenschwander, writing:

We decided to not proceed with the proposal for a hybrid client-server approach to CSAM detection for iCloud Photos from a few years ago, for a number of good reasons. After having consulted extensively with child safety advocates, human rights organizations, privacy and security technologists, and academics, and having considered scanning technology from virtually every angle, we concluded it was not practically possible to implement without ultimately imperiling the security and privacy of our users.

Wired’s article includes a PDF of Neuenschwander’s letter, which says Apple came to believe that scanning photos uploaded to iCloud Photos could potentially create new attack vectors, trigger a slippery slope of unintended consequences, and sweep innocent parties into “dystopian dragnets.” In this regard, Apple’s messaging now lines up with its resistance to legislative proposals that seek back doors into end-to-end-encrypted messaging technologies.

It’s important to realize that although Apple speaks with a single voice when it makes public announcements, there are many voices within the company. Given Apple’s uncharacteristically ham-fisted handling of the CSAM announcement, I suspect there was significant internal contention surrounding the proposal, especially since fighting the horror of child sexual abuse and protecting user privacy are both highly laudable goals. But once criticism hit a certain level, those troubled by the possibility that scanning photos in iCloud Photos could open doors to digital thieves and government intelligence agencies won out in the debate.

Neuenschwander said Apple is focusing its efforts on its Communication Safety technology:

Communication Safety is designed to intervene and offer helpful resources to children when they receive or attempt to send messages that contain nudity. The goal is to disrupt grooming of children by making it harder for predators to normalize this behavior.

In its next major operating system releases, Apple is expanding Communication Safety to cover video and photos, turning the feature on by default for all child accounts, and integrating it into AirDrop, the Photo picker, FaceTime video messages, and Contact Posters in the Phone app. Plus, Apple has opened the Communication Safety API up to independent developers so they can build such capabilities into other communication apps.
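For developers who want to build the same sort of nudity detection into their own apps, the relevant entry point is Apple’s SensitiveContentAnalysis framework, introduced alongside these Communication Safety expansions in iOS 17 and macOS Sonoma. As a minimal sketch—assuming an app that already holds a local image URL and has been granted the framework’s client entitlement—a check before displaying a received image might look like this:

```swift
import Foundation
import SensitiveContentAnalysis

// Minimal sketch: check a locally saved image for nudity before showing it.
// Assumes the app has the Sensitive Content Analysis client entitlement and
// that the user (or a parent, for child accounts) has enabled sensitive
// content warnings; otherwise the analyzer's policy is .disabled.
func shouldBlurImage(at url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // If the feature is turned off, skip analysis entirely.
    guard analyzer.analysisPolicy != .disabled else { return false }

    do {
        // Runs the on-device model; no image data leaves the device.
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive
    } catch {
        // On analysis failure, err on the side of showing the image.
        return false
    }
}
```

The analysis runs on device, in keeping with the privacy stance Apple lays out above; the `shouldBlurImage` helper and its fail-open error handling are illustrative choices here, not part of Apple’s API.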


Comments About Apple Explains Pullback from CSAM Photo-Scanning

Notable Replies

  1. They pretty much came to agree with the points that a lot of folks (both generally and in earlier TidBITS threads) were making. From the letter:

    It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types (such as images, videos, text, or audio) and content categories. How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution? Tools of mass surveillance have widespread negative implications for freedom of speech and, by extension, democracy as a whole. Also, designing this technology for one government could require applications for other countries across new data types.

  2. And Nick Heer has some thoughts about how this intersects with UK efforts to require back doors in end-to-end encryption technologies.

  3. Interesting article – I do think Nick is misinterpreting what the British minister said. Translated from political-speak, he’s essentially saying that 1) the British government won’t make companies do something they can’t; 2) it will consult with them to see if building the capability is possible; and 3) if a company convincingly argues that it’s not possible, then the government can’t hold it liable for not doing it.

    It’s a definite retreat for the British government and it’s actually not a bad final policy to settle on.

  4. Ken

    With a lot of these AI applications, no one talks about false positives. These will generate a lot of manual inspection of users’ photos, possibly including looking at their full library. Then they have to inform the police, who will decide whether to investigate.
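To put rough numbers on the false-positive concern raised in that last comment, here is a back-of-envelope sketch. Every figure in it is a hypothetical placeholder chosen for illustration, not a number from Apple or from any published evaluation:

```swift
// Back-of-envelope sketch of how even a tiny per-photo false-positive rate
// adds up at cloud-photo scale. All numbers below are hypothetical placeholders.
let photosUploadedPerDay = 1_000_000_000.0  // assumed daily uploads, not an Apple figure
let falsePositiveRate = 1.0 / 1_000_000     // assumed per-photo false-match rate

let expectedFalseMatchesPerDay = photosUploadedPerDay * falsePositiveRate
print("Expected false matches per day: \(Int(expectedFalseMatchesPerDay))")
// With these assumptions, roughly 1,000 innocent photos per day would be
// flagged for the kind of human review the comment describes.
```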
