
New CSAM Detection Details Emerge Following Craig Federighi Interview

It isn’t often Apple admits it messed up, much less on an announcement, but there was a lot of mea culpa in this interview of Apple software chief Craig Federighi by the Wall Street Journal’s Joanna Stern. The interview followed the company’s first explanations a week ago of how it would work to curb the spread of known child sexual abuse material (CSAM) and separately reduce the potential for minors to be exposed to sexual images in Messages (see “FAQ about Apple’s Expanded Protections for Children,” 7 August 2021).

Apple’s false-footed path started with a confusing page on its website that conflated the two unrelated protections for children. It then released a follow-up FAQ that failed to answer many frequently asked questions, held media briefings that may (or may not?) have added new information, trotted out lower-level executives for interviews, and ended up offering Federighi for the Wall Street Journal interview. After CEO Tim Cook, Federighi is easily Apple’s second-most recognized executive, showing how seriously the company has been forced to work to get its story straight. Following the interview’s release, Apple posted yet another explanatory document, “Security Threat Model Review of Apple’s Child Safety Features.” And Bloomberg reports that Apple has warned staff to be ready for questions. Talk about a PR train wreck.

(Our apologies for the out-of-context nature of the rest of this article if you’re coming in late. There’s just too much to recap, so be sure to read Apple’s previously published materials and our coverage linked above first.)

In the Wall Street Journal interview, Stern extracted substantially more detail from Federighi about what Apple is and isn’t scanning and how CSAM will be recognized and reported. She also interspersed even better clarifications of the two unrelated technologies Apple announced at the same time: CSAM Detection and Communications Safety in Messages.

The primary revelation from Federighi is that Apple built “multiple levels of auditability” into the CSAM detection. He told Stern:

We ship the same software in China with the same database as we ship in America, as we ship in Europe. If someone were to come to Apple [with a request to match other types of content], Apple would say no. But let’s say you aren’t confident. You don’t want to just rely on Apple saying no. You want to be sure that Apple couldn’t get away with it if we said yes. Well, that was the bar we set for ourselves, in releasing this kind of system. There are multiple levels of auditability, and so we’re making sure that you don’t have to trust any one entity, or even any one country, as far as what images are part of this process.

This was the first time that Apple mentioned auditability within the CSAM detection system, much less multiple levels of it. Federighi also revealed that 30 images must be matched during upload to iCloud Photos before Apple can decrypt the matching images through the corresponding “safety vouchers.” Most people probably also didn’t realize that Apple ships the same version of each of its operating systems across every market. But that’s all that was said about auditability in the video interview.

Apple followed the interview with the release of another document, the “Security Threat Model Review of Apple’s Child Safety Features,” which is clearly what Federighi had in mind when referring to multiple levels of auditability. It provides all sorts of new information on both architecture and auditability.

While this latest document better explains the CSAM detection system in general, we suspect that Apple also added some details to the system in response to the firestorm of controversy. Would Apple otherwise have published the necessary information for users—or security researchers—to verify that the on-device database of CSAM hashes was intact? Would there have been any discussion of third-party auditing of the system on Apple’s campus? Regardless, here is the new information that struck us as most important:

  • The on-device CSAM hash database is actually generated from the intersection of at least two databases of known illegal CSAM images from child safety organizations not under the jurisdiction of the same government. It initially appeared—and Apple’s comments indicated—that it would use only the National Center for Missing and Exploited Children (NCMEC) database of hashes. Only CSAM image hashes that exist in both databases are included. Even if non-CSAM images were somehow added to the NCMEC CSAM database or other, hitherto unknown CSAM databases, through error or coercion, it’s implausible that all could be exploited in the same way. (A simplified sketch of the intersection and root-hash checks appears after this list.)
  • Because Apple distributes the same version of each of its operating systems globally and the encrypted CSAM hash database is bundled rather than being downloaded or updated over the Internet, Apple claims that security researchers will be able to inspect every release. We might speculate that Apple dropped a lawsuit against security firm Corellium over its software (which allows security experts to run virtualized iOS devices for research purposes) to add credibility to its claim of the likelihood of outside inspection.
  • Apple says it will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Researchers (and average users) will be able to compare the root hash of the encrypted database present on their device to the expected root hash in the Knowledge Base article. Again, Apple suggests that security researchers will be able to verify this system. We believe this is the case based on how Apple uses cryptography to protect its operating systems against modification.
  • This database-hashing approach also enables third-party auditing. Apple says it can—in a secure on-campus environment—provide an auditor with technical proof that the intersection of hashes and the blinding were performed correctly. The suggestion is that participating child safety organizations might wish to perform such an audit.
  • NeuralHash doesn’t rely on machine-learning classification, the way Photos can identify pictures of cats, for instance. Instead, NeuralHash is purely an algorithm designed to validate that one image is the same as another, even if one of the images has been altered in certain ways, like resizing, cropping, and recoloring. In Apple’s tests against 100 million non-CSAM images, it encountered 3 false positives when compared against NCMEC’s database. In a separate test of 500,000 adult pornography images matched against NCMEC’s database, it found no false positives.
  • As revealed in the interview, Apple’s initial match threshold is expected to be 30 images. That means that someone would have to have at least that many images of known illegal CSAM being uploaded to iCloud Photos before Apple’s system would even know any matches took place. At that point, human reviewers would be notified to review the low-resolution previews bundled with the matches that can be decrypted since the threshold was exceeded. That threshold should ensure that even an extremely unlikely false positive has no ill effect.
  • Since Apple’s reviewers aren’t legally allowed to view the original databases of known CSAM, all they can do is confirm that the decrypted preview images appear to be CSAM, not that they match known CSAM. (One expects the images to be detailed enough to recognize human nudity without identifying individuals.) If a reviewer thinks the images are CSAM, Apple suspends the account and hands the entire matter off to NCMEC, which performs the actual comparison and can bring in law enforcement.
  • Throughout the document, Apple repeatedly uses the phrase “is subject to code inspection by security researchers like all other iOS device-side security claims.” That’s not auditing per se, but it indicates that Apple knows that security researchers try to confirm its security claims and is encouraging them to dig into these particular areas. It would be a huge reputational and financial win for a researcher to identify a vulnerability in the CSAM detection system, so Apple is likely correct in suggesting that its operating system releases will be subject to even more scrutiny than before.
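
To make the intersection and root-hash ideas above concrete, here is a minimal Python sketch. It is emphatically not Apple’s implementation: the database names, the entry format, and the SHA-256-based root hash are our assumptions, standing in for whatever construction Apple actually uses.

```python
# Illustrative sketch only. It mimics two ideas from Apple's threat-model
# document: (1) only hashes present in BOTH source databases are shipped, and
# (2) a single "root hash" of the shipped database can be published so anyone
# can check the copy bundled with their operating system. (The real database
# is also blinded/encrypted before it ships; that step is omitted here.)

import hashlib

def build_shipped_database(db_a: set[bytes], db_b: set[bytes]) -> list[bytes]:
    """Keep only the hashes that appear in both source databases."""
    return sorted(db_a & db_b)

def root_hash(shipped: list[bytes]) -> str:
    """Simplified stand-in for a root hash: SHA-256 over the sorted entries."""
    h = hashlib.sha256()
    for entry in shipped:
        h.update(entry)
    return h.hexdigest()

# Hypothetical example: two agencies' hash sets, each with one unique entry.
ncmec_like = {b"hash-001", b"hash-002", b"hash-003"}
other_org = {b"hash-002", b"hash-003", b"hash-999"}

shipped = build_shipped_database(ncmec_like, other_org)  # hash-002, hash-003 only
published = root_hash(shipped)  # the value Apple could list in a Knowledge Base article

# A researcher (or user) recomputes the root hash over the local copy and
# compares it to the published value.
assert root_hash(shipped) == published
```

The point is simply that both checks are mechanical: an entry unique to one database never ships, and any tampering with the shipped copy changes its root hash.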

I’m perplexed by just how thoroughly Apple botched this announcement. The initial materials raised too many questions that lacked satisfactory answers for both technical and non-technical users, even after multiple subsequent interviews and documents. It seems that Apple felt that, simply for saying anything at all, we’d pat the company on the back and thank it for being so transparent. After all, other cloud-based photo storage providers are already scanning all uploaded photos for CSAM without telling their users—Facebook filed over 20 million CSAM reports with NCMEC in 2020 alone.

But Apple badly underestimated the extent to which implying “Trust us” didn’t mesh with “What happens on your iPhone, stays on your iPhone.” It now appears that Apple is asking us instead to “Trust, but verify” (a phrase with a fascinating history originating as a paraphrase of Vladimir Lenin and Joseph Stalin before being popularized in English by Ronald Reagan). We’ll see how security and privacy experts respond to these new revelations, but at least Apple now seems to be trying harder to share all the relevant details.

Apple's CES privacy billboard.


Comments About New CSAM Detection Details Emerge Following Craig Federighi Interview

Notable Replies

  1. This is what I am finding the most interesting aspect of this whole thing.

    When folks on TidBITS-Talk – some of the most educated and Apple-knowledgeable people on the planet – are confused and can’t get a proper understanding of what’s going on, there’s something seriously wrong with Apple’s messaging.

    Part of this is the complexity of the topic, the technical aspects of the CSAM-detection system Apple has developed, the release of both the Messages and iCloud scanning systems at the same time (even though they’re completely different), and most of all, I think, a bit of “can’t see the forest for the trees” on the part of Apple executives.

    Here’s my theory on how this all went down within Apple. At some point in the past, probably several years ago, the idea of searching for CSAM was broached at Apple. The traditional way that cloud services do this – by scanning customers’ pictures via machine learning on the server side – was immediately dismissed as too invasive. Server scanning requires that all data be unencrypted, the code can be changed at any time without the user’s knowledge, and other countries could demand the server search for all sorts of stuff. Apple has been heavily promoting privacy, so that approach was a no-go. Apple set about coming up with an alternative.

    I bet this took considerable time and effort (I’m guessing several years), but Apple found an ingenious (though complex) way of searching for CSAM without actually examining customers’ photos, and doing it on-device in such a way that neither the person’s phone nor Apple even knows if there’s a match found. Mathematically, Apple can’t know if there’s a match unless there are at least 30 of them.

    From Apple’s perspective, they went way out of their way to create a privacy-centric method of doing this. But they’ve been working on it for years and have a deep technical knowledge of the alternatives. Apple likes to be up-front, so they announced this in a rather technical manner, getting into way too many details for the average person to understand, yet not enough for true technical people. Apple fully expected everyone to understand this as privacy-first, a great solution to the problems of server-side scanning.

    Instead, everyone saw this as privacy-invasive. All people heard was “Apple is scanning the photos on your phone looking for porn” and got upset. Others yelled that this was a “backdoor” that countries and hackers could exploit.

    While I do think Apple should have had the foresight to expect this kind of reaction, I do see how they could have missed it. They literally created this system to avoid all the problems people are complaining about – and they just assumed that everyone would understand that.

    Now they have a real PR mess on their hands, and I’m not sure it’s easy to get out of. From the discussion on TidBITS-Talk it is clear that many have made their snap judgment on this and won’t back down or change their mind. Those who see this system as a “backdoor” can’t be convinced otherwise.

    I thought this was outrageous when I first heard about it – but I waited to read the technical details before I made my judgment. The more I read, the more convinced I am that this system is safe and harmless. It may not be 100% perfect, but it’s pretty close – and a darn sight better than what every other company is doing, which is far more invasive and prone to abuse.

    But with so many making up their mind on what they think they know, rather than what’s actually happening, I have no idea how Apple can rectify the situation. Those who feel “betrayed” by Apple will continue to feel that – even though Apple specifically went to extraordinary lengths to create a system that protects their privacy.

    This is a great lesson in PR and quite possibly will be studied in schools years from now. Just amazing.

  2. I don’t think that a lot of us are confused at all. I’ve understood what Apple is doing since the beginning. The specific details have added to that knowledge but haven’t changed the general understanding.

    What also hasn’t changed is the reaction, and here I’d agree that Apple misunderstood how this was going to play to a general audience. They should have had Cook talking from the start.

  3. I think you’re spot on. It may be a lesson in being too close to the subject and not getting outside opinion (even from family, say). When I first explained the system to my mother briefly, she saw it as a slippery slope instantly, and she’s not conspiracy-minded. Now I have to explain it to her again.

    And in fact, when I look back at Glenn and Rich’s original article, I think it’s as accurate as it could have been with what was known at the time. Some people were confused by Apple seemingly conflating multiple unrelated technologies, and others raised legitimate concerns that couldn’t be easily or even adequately answered with what we knew then.

    That difficulty was what troubled me at a low level—I felt that there likely were answers, but because I couldn’t provide them to my satisfaction, it seemed that either my knowledge was incomplete (which turned out to be true) or that Apple was indeed hiding something (which didn’t make sense in the context of a system that could have gone entirely under the radar). The other possibilities, that Apple hadn’t developed a complete threat model or had made unwarranted assumptions in the code, also seemed unlikely given the obvious amount of thought and complexity that was initially revealed.

    And yes, this should have been Tim Cook from the very beginning. When your CEO is banging the privacy drum for years, you don’t hand off a privacy-intensive announcement to Web pages, FAQs, and lower-level people.

  4. Bringing the state actor discussion over from another thread:

    So let’s think this through.

    1. We now know that Apple’s CSAM hash database is the intersection of several other CSAM hash databases, NCMEC’s and at least one from another non-US organization. So a state actor would have to subvert not one, but two or more organizations like NCMEC. Not unthinkable, but significantly harder and more likely to result in exposure.

    2. Let’s say Apple’s CSAM hash database is subverted. The only thing that could be put in there would be CSAM itself. Remember, Apple’s human reviewers only see the “visual derivative” of the matched CSAM (and only after there are 30 matches). So if Apple’s reviewers see an image of a dissident or whatever, they’ll chalk it up to a false positive and will send it to Apple engineering for analysis. Which might lead to the exposure of the subversion of the CSAM hash database, since the engineers wouldn’t rest until they figured out how NeuralHash failed (which it didn’t). I have trouble seeing what kinds of images would be useful to gather this way as well, since they have to be known in advance to be matched.

    3. So now let’s say that the state actor subverts Apple’s CSAM hash database with more CSAM, with the idea of planting it on the devices of dissidents to discredit them. This is apparently common in Russia. Apple will get the matches, confirm the CSAM, and report the person to NCMEC, and thus US law enforcement. If the person in question is a Russian citizen not on US soil, it seems unlikely that anything of interest happens, unless US law enforcement regularly cooperates with places like Russia on such investigations. This seems like a possible attack vector, but given that Russian dissidents also suffer from poisoning, it seems like way more work than is worth the effort. See xkcd.

    4. What about a state actor that completely subverts Apple? There’s a true backdoor in the code (one that no one knows about—it’s not a backdoor if it has been publicized), Apple’s human reviewers have been forced to identify dissidents when their images show up (but how is this useful if the images are already known?), and Apple reports to the state actor instead of just NCMEC. At this point, we’re so deep in the conspiracy theory that you may as well assume every photo you take on your iPhone is being sent directly to China or whatever. The only answer to this level of conspiracy theory is that there are numerous security researchers looking at iOS at all times, and something like this would likely be discovered. And if Apple was so completely subverted, why would they make a public announcement of all this tech?

    Am I missing any attack vectors?

  5. A state actor forces Apple to directly add images to the hash database without going through the intervening organizations and requires Apple to report positive matches to them without verification. The state actor doesn’t care if this is publicized — in fact they prefer it because even the threat of such surveillance intimidates people.

    This is pretty much what China did with iCloud. We know they control the keys, but China doesn’t care.

    Why would, eg, China, do this rather than roll their own system? Because it’s less effort for them, and they can take advantage of all of Apple’s programming skills and knowledge of the system.

  6. I would say that China would not want this system. Why should they bother to get Apple involved at all when they are running the servers, which already store the images without encryption? It’s far easier for them to have their own engineers write their own software to scan the entire server for whatever images they want. Then they can change their database at whim without needing Apple to push it out to the rest of the world via iOS updates.

    Remember, this is the same country that scans all internet traffic passing through the country, routinely blocking anything and everything the government might not want seen. Running a machine learning algorithm over a database of a few million image files is no big deal for them.

    I think that while China could do what you say, they have no reason to, and many good reasons to roll their own system.

    Likewise for the US. US law enforcement already requests scans and audits of the contents of iCloud libraries for specific people (usually with warrants). If they were to demand scans of everybody’s account, they could do that right now, with no technical changes to the system. Apple would probably fight it in court (like they did about unlocking iPhones), and win or lose, knowledge of the attempt would quickly become public. And in the US (at least for now), this is something many (not sure about most anymore) elected officials would object to.

  7. And yet Apple has already done that work for them, and even added a global surveillance bonus. So, no, it’s not easier.

    I’d also note that I’m really not comfortable with a system where it’s protected by “we hope China won’t be interested.” Hope is not a plan.

  8. What does that get them, though? This system works only with known images that are matched by NeuralHash, and only if there are a sufficient number of them. How does that help with surveillance? In what scenario would a state actor be interested that someone has a set of known images on their phone rather than new images? Photos of secret weapon blueprints?

    This feels like a subset of the “China can make Apple do anything it wants” attack. If we assume that China has that power, then it’s game over in every way other than independent security researchers discovering and revealing the ways data is being exfiltrated.

    I’m with @Shamino on thinking that a state actor like China would be more interested in controlling its own system than on subverting a highly publicized system that’s going to be the subject of intense scrutiny.

  9. Apple has already said that third parties (namely from the child protection organizations that provide hashes) will be able to audit the hashes to make sure that there are no hashes that are not in their database. Since the hashes are an intersection of hashes from NCMEC and at least one other national source, there should be no hashes that NCMEC doesn’t know of in the set of hashes that Apple uses. China or Russia cannot force a hash that NCMEC doesn’t also provide.

  10. It’s not a hope, there’s just no way that China is going to be interested in this relatively targeted, convoluted system. Their surveillance capability both within and outwith China is already far superior. As are the capabilities of the security services of most major economies. This is not the scale or level of control that these agencies work at. (Unfortunately :cry:)

  11. I agree 100%, and I think that they should have announced this with the endorsement or recognition of the National Center for Missing and Exploited Children, whose database they are referencing, and maybe one or more other child safety focused organizations. And it would have been smarter for Apple to have hired a PR firm that specializes in child safety and advocacy issues to handle this introduction. Or even better, hire internal PR specialists that will focus full time on Apple’s child safety and education related initiatives.

    Business wise, using this to target parents could be a good selling point for iOS devices.

  12. China has dissidents they don’t know about…so they legally require Apple to NeuralHash voodoo a set of images…include those in the scanning…require all photos on the iPhone to be scanned…lower the threshold to 1 or 2…and report all matches to the Chinese government. The technology as described seems to provide those capabilities technically…and it is easy for China to make it the law there. Thus…the Chinese government now finds out who all the currently unknown dissidents are…and that forces them underground or they go to the gulag or the secret police visit them or whatever…

    Since the NeuralHash is supposedly so sophisticated that modifying the image doesn’t prevent detection…an image of Tiananmen Square or whatever will similarly survive modification or other slight differences.

    Apple’s previous position was…we won’t compromise user privacy…then they turned around and did so in China because Chinese law requires them to because they won’t abandon the market there and cannot abandon their manufacturing capability. Now they’ve invented a tech that nefarious state actors can and will figure out how to take advantage of.

    And the real issue is that an image pre-encrypted by the user won’t get caught…nor will home-grown images. With this public…people interested in this material will just encrypt their files and email them around instead of putting them onto iCloud…so smart perverts won’t get caught. If Apple’s intention is to keep the material off of their servers…simply scan once it is uploaded…as anybody who thinks they have an expectation of privacy for anything uploaded to a cloud service without encryption first just isn’t thinking straight.

  13. Sure they can. The NeuralHash tech can be replicated and additional hashes made…then pass a local law requiring Apple to scan an additional database. State actors don’t need NCMEC’s assistance.

  14. No state actor has to subvert anything. Instead they pass a law that says any CSAM scanning that happens on device also has to include scanning against hashes they provide. In the past Apple would have said, we don’t do that so we can’t, have a nice day. Now their answer is well we can’t do that because we only have these two databases and there’s an and statement in between to ensure hashes must be in both, and you see we have this great AI/ML yada… Chairman Xi Jinping interrupts, smiles, and answers: for use in my dear beloved China, turn that little and into an or and add my database of hashes, thankyouverymuch. Does anybody here seriously believe that Apple could then just say nope, we’re outa here?

    Again, it’s not a technical problem. It’s not about being smart, having well thought out mitigations, or writing beautiful code. In the end, it’s about opening yourself up to exploitation. I’m really surprised that among the many smart people that work at Apple, there seems to be a somewhat engineering-myopic view prevalent that prevents the company from understanding this. At the very least, the top execs that have access to more than just engineering resources should have been warned about this. Perhaps they were. Perhaps they thought, it won’t be that bad. I guess we’ll see.

  15. At this moment, Apple has not announced this initiative will run in any other market than the US. I think it’s safe to assume Apple will not want to implement this app in countries that might want to use it for anything other than its stated mission. Apple would tell them no, and maybe it would be no way but the highway. China would lose a LOT of jobs and incoming revenue if Apple took the highway. It’s too much of a lose-lose situation for both parties. And too much work, and terrible PR for Apple to build a service they would not have 100% control of.

    And we don’t even know if China has an equivalent of the National Center for Missing and Exploited Children. There is an International Centre for Missing & Exploited Children that is based in the US, but it has not been mentioned at all. It’s probably because there are too many countries to cover to begin with, and it would probably be better for Apple to work on a country-by-country basis anyway. Or Apple might not want to wade into a morass of conflicting regulations between a multitude of governments.

  16. Countries can pass a law requiring companies to scan photos uploaded to their servers now, regardless of what Apple is doing, and whether the scanning happens on device or on the server. I don’t see that Apple implementing this system changes what laws are possible nor the likelihood of them coming about or not. See: China and the iCloud servers for Chinese citizens. States pass the laws they want and tech companies have to figure out how to comply.

    The concern (for me) with a system like the CSAM scanning, is if law enforcement can use its existence to force a company to turn over someone’s data that they wouldn’t otherwise be able to access. It’s whether a system like CSAM scanning allows abuses or further intrusions without the government going through the proper legislative process.

    I’m still not sure where I stand on Apple’s upcoming system, but the additional details have certainly lessened some of my initial concerns.

  17. I don’t think it’s so much about laws being possible or not. A law can’t compel a company to do something they simply cannot do (for example because they don’t have the tech). But a law can very well compel a company to do something they already do in a different manner. I think the San Bernardino case really shows a stark contrast here.

  18. It definitely can. As long as the government thinks it’s possible to develop the tech in some reasonable timeframe, they will make the law, with some implementation period to develop the necessary systems. This has happened many times (recently, GDPR in Europe) and the relevant government is unconcerned with the implementation as long as it meets the law.

    I think the San Bernardino case is a red herring here. I’ve still not seen a convincing argument about how law enforcement would find the CSAM feature useful for anything other than catching child abusers, or how they could somehow subvert the system to do so.

    This is in contrast to state security services – I can see how they theoretically could co-opt the system by forcing Apple to expand the type of images in the hash. But as discussed earlier it seems like a preposterous route given the more powerful tools they have, and the legal challenges once they were found out in certain countries.

  19. Chinese political movements often use 1) pictures of the tank man from 1989, and 2) popular memes as means of signaling and communications. All of these are known images and thus susceptible.

    More, China has done an enormous amount to erase the memory of Tiananmen Square from its history and that includes the known pictures of it. Using the Apple photo scanning feature would be a good way for them to keep that effort going.

    Which feels to me like a “well, look, the murderer is going to get a gun somewhere, so it doesn’t matter if I sell it to them” kind of reaction.

    Why? As I pointed out, the publicity is part of the point. It intimidates Chinese citizens, it demonstrates the PRC’s power, and it makes clear that Apple answers to them. I’m befuddled that anyone thinks that China will care if people know about this. They don’t hide the Great Firewall, they make sure that people are very clear that it’s there. They don’t hide that they have the keys to the Apple iCloud servers. They want people to know.

    People are asking the wrong question: it’s not why will China do this, it’s why won’t China do this? They’re not going to let Apple surveil their citizens without their active participation and it’s to their advantage to be seen controlling the tool.

    Folks, as I mentioned above, I understand what Apple’s doing. Please try to understand the point I’m making. Apple has set up an intricate safety system to control what images get put into the database, one that it (and the public) can check. Great. My point is that China will simply tell them to ignore that safety set-up and put the images in anyway, publicity be damned. All the intricacy in the world is not going to help Apple if China tells them to go around it.

    One of the markers of authoritarian governments is the breadth and depth of their surveillance – as you have noted. This is going to be part of it.

    And yet China did exactly that with the iCloud backups and Apple caved.

    Finally, the lack of thought Apple put into the roll out of this does not make me confident that they thought everything through as much as they should have.

  20. I think that there is an enormous difference between Apple refusing to create a special iOS that would bypass security gates that would enable access to the total contents that resided in iCloud of the iPhones of two dead terrorists. Apple is setting up an opt in service, not an iOS, that will screen for the hash codes of pornographic photos that match those of a respected database of known pornography that are being sent to the iPhones of living and vulnerable kids under the age of 13. They are two totally different scenarios and issues, and parents have the opportunity to implement it for their children if they think it’s a good idea.

    This is just about CSAM and giving parents the opportunity to communicate with their children about potentially dangerous situations; nothing else. Apple swears that no government agency is involved, and that the pornographic images of children that match are only those that reside in a highly respected database. This might be a backdoor, but China or 1984 it ain’t.

  21. I’m not sure why all the concern is about China, but given it keeps coming up…

    Except that this system adds absolutely nothing to the Chinese government’s monitoring capabilities. Why would they pass a list to Apple to eventually add to iOS so that certain images can be flagged and Apple notify them… when they can just directly scan everyone’s iCloud Photo Library on the servers they already have access to?

  22. If they don’t know about these dissidents, how are they going to somehow get lots of known pictures of them into the NCMEC and multiple other child safety databases where the pictures are highly vetted to be known CSAM? Remember, NeuralHash only says “The hash of this iPhone photo that’s being uploaded to iCloud Photos matches a hash in Apple’s intersection-created hash database.” It does NOT do machine-learning type facial recognition.

    That’s absolutely true and not the goal of the system. It is designed to identify quantities of known CSAM that’s uploaded to iCloud Photos.

    That would fail the third-party audit of how Apple creates its database of intersections from multiple child safety databases. So to assume it’s still possible is to go back to the state actor having complete power over Apple, at which point we have independent security researchers to identify abuses.

    As with @neil1’s suggestion above, that will fail the third-party audit of how the Apple database is created from the intersection, unless Apple is going to say that “Oh, this is from the Chinese NCMEC and it’s legit—look, here’s their root hash.” But any independent organization doing the auditing would throw a flag on that.

    And again, what’s the surveillance win in exact matching of known images? What this technology tells you is if an iPhone user has and is uploading existing known images to iCloud Photos, that’s all.

    OK, that’s fair, though pretty specific, and it would seem easily changed by the political movements. It’s in essence a code, and whenever codes are known to be broken, the people using them create new ones.

    It would if it were a radically new capability, but it doesn’t seem any different than the iPhone in general. If China can compel Apple to do anything it wants, why aren’t our iPhones already reporting everything we say and do back to Beijing? China may be able to compel Apple to do some things within the country, but clearly hasn’t been able to make the company do whatever it wishes.

  23. I’m not sure that’s relevant, since the technology will be present in all versions of iOS. Presumably Apple has some way of enabling it for US users only, but in the extreme subversion scenario that is being suggested, the company could theoretically be compelled to enable it for another country too.

    Personally, I’m unconvinced by the extreme subversion scenario. Aside from the fact that it seems like a really hard secret to keep, I think the US government would take a very dim view of US companies being compelled to participate in activities sponsored by another state actor. And I have to assume that Apple has sufficient ties to the US government that such communication would take place.

  24. This is true, and I should have made it clear that I’ll bet Apple has probably been talking to the equivalents of The National Center for Missing and Exploited Children in some of their target countries, probably the EU as well. But the big kerfuffle in the press might have thrown a monkey wrench into the works. And there might be countries that don’t have an equivalent database.

  25. It isn’t the dissidents’ images they’re interested in…it’s a list of people that have pictures that dissidents are likely to have…Tank Guy at the square for instance and other images they don’t like. I did not suggest that they would get images of the dissidents or that they needed to get them into the NCMEC’s database. China can simply pass a law that requires Apple to scan against their database in addition to the NCMEC one…and also that the threshold be changed to 1 from whatever it is for the current plan…and that Apple report to Chinese authorities. They can also require that Apple scan every photo on the phone against the Chinese database as well…even if iCloud upload is disabled…and report the results.

    Nation states can mandate just about anything…and since the technology to do this already exists Apple would have no choice but to comply.

    I’m going to try…again…well, for the 4th or 5th time actually…to not reply any more to this or related threads…as I said nobody is going to change minds and Apple is gonna do what they are gonna do.

  26. Apple doesn’t have to, and I’ll bet won’t, implement this program in any country that will not meet its standards. Like I said, I’ll bet there are countries that don’t yet have an equivalent database.

  27. I don’t think putting the weight of effort on the dissidents rather than the authoritarian government is really the argument you want to make.

    Also, and as politely as I can make it, part of the issue with this discussion is that one side has very little expertise on the situation in China and thus makes what I would call frankly silly comments about it. The idea that there aren’t common images that China might hunt for made me shake my head in horror, and I’m only marginal (one book on historical China and various conference papers coming closer to the present) in my understanding of current China. The vast majority of the voices so far have been from the tech side, and the naïveté has been really remarkable so far (eg Doug Miller’s post two below this one)

    Because China has a set of priorities, budgetary limits, and resource allocation fights, like every other bureaucracy, and that limits them in what they can do. Dedicating Apple’s resources and expertise in a way that makes up for those limitations is not a good idea. I said it earlier, but I’ll repeat, don’t hand the genocidal sociopaths a new weapon, please.

    Again, the question is not why would they, but why wouldn’t they? Ready made surveillance, created by a company that has already caved to them, gives useful information in specifically images which they’re already looking for. Why not?

  28. Why hasn’t China passed such a law just doing raw image upload of everyone’s iPhone already? Wouldn’t that be a lot easier than relying on this technology?

    It seems that China would gain a lot more, and have a lot more control over the result, if they did something like that. For example, if Apple couldn’t “refuse” a special neural hash “request” from China, why couldn’t they write a “bug” in the code that doesn’t report hash matches at all for China’s special hash? How would Chinese authorities know?

  29. But they can already do this as all files in iCloud Photo Library for Chinese users are on servers the Chinese government controls.

    Stepping back, I really don’t understand all this hand-wringing about China from the tech community. Has anyone who actually has experience of being targeted by China, or at least a foreign policy expert with actual knowledge of how the CCP carries out its surveillance, weighed in on this issue? All I’ve seen (from both sides, including myself) is people who have expertise in the tech world making guesses about what may or may not happen.

    I’ve not seen anything indicating that people actually affected by Chinese surveillance see Apple’s new system as a concern. It would be useful to understand abuses that this system could be used for, not hypotheticals that only seem realistic to people (like me!) who don’t really understand the domain they would operate in.

  30. With a little quick research, it appears that David is indeed far more knowledgeable in this area than I am, and probably more so than anyone else here. So I’ll defer to his opinions in that regard.

    But I do think that @jzw’s question is still a good one. @silbey, what’s your thinking on this?

  31. First off, this assumes that dissidents are merely trading the same photos with each other and that they aren’t constantly generating new content (which China would have to intercept and add to the database before this algorithm could detect it).

    Second, it assumes a massive misunderstanding about how Apple’s system works. The threshold isn’t some arbitrary number that phones look at. It’s an integral part of the cryptographic system. You can’t change it without breaking the entire database - which means it can’t be done without everybody else in the world finding out.

    You also assume that the matching can be done without Apple’s servers. Again, read the technical description. The algorithms prevent any code (whether on the phone or the server) from learning anything about the content of the security vouchers without Apple’s secret decryption keys.

    Finally, you assume that this is somehow going to be easier or more useful than using China’s existing surveillance software (which is constantly scanning live video from thousands of cameras nationwide) to scan through a database of not-encrypted photos.

    Additionally, the algorithm requires the server-side software to have Apple’s secret encryption key. I doubt they will ever deploy this system to a country where someone else runs the server, since it would require divulging that key - undermining the entire system.

    If they do deploy something in China, it will be a different system. But it makes no sense (for Apple or China) to do so. It is far more likely that China will order Apple to not enable E2E encryption for the Chinese iCloud databases, and they will do their own scanning with their own existing surveillance software.

    But the tech for this already existed, for quite a long time. The system Apple developed is far, far more complicated than what would be necessary to satisfy a court order, especially since iCloud photo libraries are not end-to-end encrypted.

    Law enforcement already orders Apple to turn over the contents of suspects’ iCloud backups and photo libraries, and Apple complies (when there is a warrant, of course). If they chose to compel Apple to scan everybody’s photos for any arbitrary image provided by the FBI, the tech to do so already exists. So we are already trusting Apple’s policy in this area.

  32. Two things I’ve not seen much discussion of but which do concern me about these upcoming CSAM detection systems:

    1. What stops a government from requiring Apple to scan all photos on an iPhone, regardless of iCloud Photo Library? It seems there’s no technical barrier; everything’s in place, and it is only policy that means they aren’t doing this. As clever as their convoluted encryption system is, it doesn’t seem to protect against this kind of mission creep/abuse.

    2. I’m surprised there isn’t more concern about the iMessage feature for children. The feature itself seems fine to me, but the principle of scanning messages before/after the E2E is dangerous. I can see the CIA and China being much more interested in this than the photo library stuff.

    It seems that with all the focus on the list of hashes, these issues are being lost, and I think they have the potential to significantly undermine privacy.

  33. These two concerns were always present, long before Apple’s announcement.

    Any app that has access to your photos can scan your on-device photos for anything it wants. This could be for benign reasons (e.g. Photos creating “Moments” for you to review) or hostile reasons. Your only safeguard is Apple’s App Store policy.

    Apple could always (going all the way back to the very first iPhone) design and deploy new software and secretly push it into your phone. There is no technological mechanism that can prevent this short of hacking the phone to permanently disable all capability for software updates. You have always had to trust Apple’s intentions here.

    Ditto for all other smart phones. Google, Samsung, Motorola, LG and wireless carriers all push updates into their various phones and devices. When you use them, you have to trust them to not push objectionable software into your phone.

    Ditto for your computer. Unless you disconnect from the Internet and refuse all updates, Apple (or Microsoft, or Dell or Google) will push updates for their respective software. Although you can (usually) configure your computer to not automatically install updates, if you don’t trust the company with that ability, then you probably shouldn’t be trusting the disable switch either.

    Even if you go open source, you can’t eliminate that problem. Sure, the Linux community audits major packages all the time, but how many normal users actually know anything about the updates that they get from the standard Debian (or Red Hat or Ubuntu or whatever) distribution server? Again, you are trusting your distribution to do what they say they are doing.

    In other words, unless you take all your electronics completely off-grid, you are always implicitly trusting someone with matters of privacy and security.

    For most of us the answer to “how can I ensure that my system will remain secure and private against direct action from governments” is “forget it, kid”. But it may still be useful to ask the question “who is most likely to give in and who is most likely to resist”, knowing full well that everybody will fold if the pressure gets high enough.

  34. To add to @Shamino’s good summary, your phone is already scanning all photos and has been for years. That’s how you can search for pictures of cats or oak trees—it has to scan all photos and analyze their content.

  35. Why would you explain it to her again? They are scanning content on my device. That is a huge step down the slippery slope, no matter what auditability they claim.

    I don’t see how auditability changes anything regarding the slippery slope. All they have to do is change their mind. Sure, we might be able to find out that they changed their mind and expanded what they are scanning and what they are scanning for, but if the government demands that they scan encrypted content on a billion people’s phones, Apple cannot say “impossible” anymore. Apple just built an encryption back door into its devices. And it can change its capabilities at any time.

    By the way, I don’t know what auditable means, but being able to see whether the phone claims it is doing something is not auditing in my mind. And I didn’t see anything else in their report that seemed like the ability for me to audit this system.

  36. Surely the difference between providing on-device search vs government reporting isn’t meaningless to you.

    The question was related to the government.

    There is a reason people don’t mind and in fact prefer on-device scanning for search, but not for government reporting.

  37. I think the problem Apple is facing is that the most effective solutions to the problem of online child porn distributed by messaging services would have to involve scanning Messages. And the company that singlehandedly, repeatedly, and over decades dropped nuclear bombs on the digital media industry over privacy did not handle its news release effectively and is now dealing with a s—- storm, especially from the companies who are losing vast amounts of revenue because of Apple’s recently upgraded anti-tracking initiatives.

    If Apple had created a route that did not involve on-device scanning but would be much less effective, like Facebook’s, they would have gotten criticized for not doing the same thing as everyone else sooner. They will not be scanning everyone’s iOS devices; they are only matching against images from NCMEC. And Apple has a unique and enviable history of not backing down when faced with government requests to break encryption.

  38. Because new information has arisen, so I need to update my thinking to accommodate it. If an opinion is based on incomplete information, or an incorrect understanding of the information, it should be reevaluated with new details in mind.

    Certainly. The point I was making is that “the iPhone is scanning your photos” has been true for a long time. A more precisely worded question might be “What prevents a government from requiring Apple to report matches for all photos on an iPhone, regardless of iCloud Photos?” And that then gets into the questions of what database of photos those matches would be against, and how Apple would create that database, and so on. It’s quite different.

    We put a lot of effort into using words precisely in TidBITS, and I feel that in the context of CSAM detection “scanning” is an ambiguous term that doesn’t lead to informed understanding. In the context of Communication Safety in Messages, it’s more accurate.

  39. From the article: “Stern extracted substantially more detail from Federighi about what Apple is and isn’t scanning and how CSAM will be recognized and reported.”

    Do you have a better word for evaluating every image before it is uploaded, running it through a program to evaluate if it matches other images?

    They will never have the images that NCMEC has. It is illegal for NCMEC to give them to any other organization. So Apple actually can’t be scanning those images. I can’t think of any other word to use besides scanning, but Adam objects to it. Regardless, if you have iCloud turned on for photos, it will be checking every one of your images.

  40. I should have said hashes of images rather than images. The images Apple will be reviewing on iPhones will be morphed into “NeuralHashes.” I didn’t mention it when I dashed out my previous post because I assumed people contributing to this lengthy discussion would know what I was referring to. And NeuralHashing is a far, far better thing to do.

  41. I’ve been leaning toward “matching” and trying to get the word “hash” in wherever I can. That’s because what Apple’s doing is creating a hash based on the image that’s being uploaded to iCloud Photos, and then comparing that hash against a database of hashes.

    The problem with “scanning” is that it’s used in other contexts to mean something quite different, and is thus causing confusion. It’s not as though Apple is looking at the content of every image being uploaded to iCloud Photos and saying, “The machine learning algorithm says that’s CSAM,” the way it looks at the content of every image to determine whether there’s a penguin or a ship in it.

    That’s correct, Apple doesn’t have those images. Apple has worked with NCMEC to create a database of hashes to those images. That’s what’s being compared—the hashes for photos being uploaded to iCloud Photos against the hashes for known illegal CSAM that are the intersection of the NCMEC database and at least one other similar database under the jurisdiction of another government.
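
    To make that distinction concrete, here’s a toy Python sketch (not Apple’s code). An ordinary SHA-256 stands in for a perceptual hash like NeuralHash, which, unlike SHA-256, also tolerates resizing, cropping, and recoloring; the point is that matching only asks “is this a known image?”, never “what does this image depict?”

    ```python
    import hashlib

    # Hashes of specific known images, supplied externally; the matcher never
    # sees or classifies image content.
    known_image = b"...bytes of a known image..."
    KNOWN_HASHES = {hashlib.sha256(known_image).hexdigest()}

    def matches_known_image(photo_bytes: bytes) -> bool:
        return hashlib.sha256(photo_bytes).hexdigest() in KNOWN_HASHES

    print(matches_known_image(known_image))           # True: an exact known image
    print(matches_known_image(b"a brand-new photo"))  # False: new content never matches
    ```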

  42. At the risk of speaking for @ace:

    • There’s a difference between scanning for a particular kind of content (e.g. offensive images) and scanning for a specific set of well-known images without any kind of content identification.

    • Apple doesn’t have the NCMEC files. But they have the NeuralHash values for these files, which NCMEC computes and sends to Apple. The algorithms (both device-side and server-side) use this database to determine if an image is or is not one of the NCMEC files.

    Apple wrote a technical description (which I attempted to summarize) that explains exactly what is being done.

    A superbrief summary is:

    • Apple generates a “blinded hash” database from all of the NeuralHashes provided by NCMEC. This database is stored on iCloud servers and on all iPhones equipped with the software (distributed via iOS updates).
    • Your phone, as a part of the iCloud upload process, computes the NeuralHash of each image and generates a derivative image (a low-res version of the original) and encrypts them using two different algorithms (PSI and TSS). The contents of the blinded hash is used to generate the key used by the PSI algorithm. The encrypted data is called a security voucher and is uploaded with the image.
    • The nature of PSI is that the security voucher can not be decrypted unless both the image’s NeuralHash belongs to the blinded hash (meaning it’s in the CSAM database) and Apple’s secret key is known. Since your phone doesn’t have the secret key, it can not know if the image matches the database or not.
    • The nature of the TSS algorithm is that, after decrypting the PSI layer, the actual content can’t be viewed unless a threshold number of PSI-matching images have also been uploaded. Craig has said that the threshold is 30.
    • In order to prevent Apple from knowing how many matches have been uploaded before the threshold has been crossed, your phone also uploads synthetic vouchers, which match the PSI algorithm but always fail the TSS algorithm.

    The upshot of all this is:

    • The system can only detect NCMEC’s images (or basic transformations of them, like color-space changes, cropping, rotation, resizing), not other images, even if they are of similar subject matter.
    • Your phone doesn’t know if any images match the database.
    • Apple doesn’t know if any images match the database until 30 matching images have been uploaded. Once there are 30, Apple can view the derivative images of the matches, but not of any other image.

    And, as has also been mentioned several times, Apple will have humans review these derivative images, to make sure they really are CSAM and not false-positive matches, before law enforcement is notified.
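
    To illustrate just the threshold behavior, here is a deliberately non-cryptographic Python mock with hypothetical names. Unlike the real PSI/TSS construction, this mock server could trivially peek at matches early; it only simulates the observable outcome that nothing becomes reviewable until the threshold is crossed.

    ```python
    THRESHOLD = 30  # the initial match threshold Federighi cited

    class MockServer:
        """Toy stand-in for the server side. No real cryptography here."""

        def __init__(self, shipped_hash_db):
            self.shipped_hash_db = set(shipped_hash_db)
            self.vouchers = []  # uploaded (hash, derivative, synthetic) tuples

        def receive_voucher(self, image_hash, derivative, synthetic=False):
            self.vouchers.append((image_hash, derivative, synthetic))

        def reviewable_derivatives(self):
            # Synthetic vouchers never count; in the real system they exist so
            # Apple can't learn the true match count below the threshold.
            matches = [d for (h, d, s) in self.vouchers
                       if not s and h in self.shipped_hash_db]
            return matches if len(matches) >= THRESHOLD else []

    server = MockServer({"known-hash-1", "known-hash-2"})
    for i in range(29):
        server.receive_voucher("known-hash-1", f"derivative-{i}")
    print(server.reviewable_derivatives())       # []: below threshold, nothing reviewable
    server.receive_voucher("known-hash-2", "derivative-29")
    print(len(server.reviewable_derivatives()))  # 30: human review can now happen
    ```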

  43. Wow, Apple Legal and Apple PR clearly don’t work anywhere near each other on the Apple campus.

    While it’s not inconceivable that Apple has a legitimate copyright concern over what Corellium is doing, the level of tone-deafness in pursuing this case at this particular point in time is astonishing.

  44. This lawsuit began long before the CSAM detection announcement. The article even says that the suit was filed in 2019.

    Apple has never allowed anyone to run iOS on a non-Apple device, which is the sole purpose for Corellium’s product. The fact that Corellium is now issuing press releases designed to use the CSAM detection announcement in order to sway public opinion is just a sleazy way to get the public to take sides on a lawsuit that most people really don’t care about.

  45. Many people care about real security research being possible on iOS, and I’d even go so far as stating that you should probably care.

    Edit: In case it’s not clear, Corellium’s product actually allows what Apple claims we can do. It doesn’t matter when Apple started suing them for allowing security research on iOS. It matters whether Apple is telling the truth about being able to audit the system. If Apple has been working since 2019 to prevent auditing iOS, then that only makes it worse. (In fact, Apple has been working against it for much longer than that.)

    This is not a press release. This is an action Apple just took. They decided to appeal the decision that allowed security researchers to audit iOS and the CSAM system.

    From the article:

    “Apple is exaggerating a researcher’s ability to examine the system as a whole,” says David Thiel, chief technology officer at Stanford’s Internet Observatory.

  46. Apple doesn’t let you run macOS on a PC. Being a security researcher doesn’t grant you an exception to the license, even though the court of public opinion might disagree.

    This is no different. The CSAM discussion is a distraction and has nothing to do with the merits of the suit.

  47. In other words, it doesn’t matter if Apple is lying about the system being auditable?

    Edit: And I never said it had anything to do with the merits of the suit. My contention is that Apple does not want or allow people to audit iOS, so contrary to their claims, the CSAM system is unlikely to be really auditable.

  48. Of course Apple did; the initial judgment left an important door wide open:

    “Although the fair use ruling will help Corellium breathe a sigh of relief, an open question remains as to whether Corellium’s fair use victory is a hollow one because the court did not dismiss Apple’s claim that Corellium circumvented Apple’s own security measures to protect iOS code in violation of the Digital Millennium Copyright Act (DMCA).”

    The timing is right in this instance. My guess is they’d been keeping it in their pocket for the right moment once the negotiations to buy Corellium fell through.

  49. The system hasn’t even been released. So how do you know that the only possible way to audit it will be to run it in Corellium’s emulator?

    Maybe we should wait for other security researchers - those not in the middle of a 2+ year lawsuit with Apple - to get a chance to review the code before jumping to the conclusion that Apple is lying about the system being auditable.

  50. Umm… I quoted such a researcher. As did the article.

  51. But the open door issue was not directly addressed in this discussion, and it is an important consideration.

  52. The article quoted a researcher working for Corellium. Not exactly an objective source, since his company is the subject of ongoing litigation with Apple.

    Again, let’s wait and see what is actually released. Apple said it would be auditable by third parties. This statement implies that they will be publishing a procedure. I’d like to see what it is before concluding that there really isn’t one.

    If, after releasing this system, they fail to publish any procedure, or if the procedure is only available to people who have signed NDAs, then I will join you in your condemnation of Apple, but I’m going to wait until then before I come to such a decision.

  53. Are you alleging that “David Thiel, chief technology officer at Stanford’s Internet Observatory, the author of a book called iOS Application Security” is actually a researcher for Corellium?

    I’m sorry if I’m not keeping up, but I don’t know what you’re referring to.

  54. I couldn’t access the article; I had already hit the site’s limit of two free articles. Closed doors like this are why I try to quote specifics in my posts.

  55. Thanks, Adam. The books make great gifts for any occasion: family, friends, the person you just met on the street.

    I think it’s the wrong question – it’s not “why would China do it,” it’s “why wouldn’t China do it.” Taking control that way demonstrates their power, intimidates the Chinese population, and shows Apple who is in charge. Interfering is the default option.

    But to answer it the way it was asked, a couple of reasons, I think, would motivate them:

    1. They’re worried that they’re missing something in their iCloud scanning – this will be a way to get a look from a different direction.
    2. In general, intelligence gathering is always about multiple sources of information and access. One source can be unreliable or incomplete – having lots of ways to access things is useful. Even the oddest and most convoluted way of doing things can be useful if it helps verify/dispute something else (in 1982, the Russians were worried that a large NATO military exercise was a cover for a preemptive nuclear strike on them. Among other ways of gathering information, they called blood banks in the United States to see if America was stockpiling blood in preparation for a mass casualty event. That’s convoluted).
    3. Apple has deeper knowledge of how to surveil its own phones than anyone else. Using a tool Apple created gives China that advantage. No one is better suited to search a house than the homeowner.

    (That photo of me may be a few years out of date.)

  56. One wonders whether this delay will affect the planned release of Apple’s other announced backdoor, the one that would screen opted-in youngsters’ Messages accounts for inappropriate sexual material.

    And almost as an afterthought is this nugget from 9to5Mac:
    “It was also revealed through this process that Apple already scans iCloud Mail for CSAM”

  57. The 9to5Mac article also states that “Other child safety features announced by Apple last month, and also now delayed, include communications safety features in Messages and updated knowledge information for Siri and Search.” So yes, it looks like it will.

  58. Yes, I think so. The scenario I initially found most likely is that the NSA (or whatever other alphabet agency working with or for the NSA) subverts the CSAM database and Apple’s human reviewers. This could be done without Apple executives’ knowledge; all they’d have to do is find out who those Apple employees are and tell them that they’d be heroes if they cooperate and go to jail if they refuse (the average person commits three federal crimes a day, so it wouldn’t be hard to trump something up, especially given the information the NSA has on every on-the-grid American). This is the sort of tactic we already know them to use; some of the surveillance Snowden revealed was happening without company executives’ knowledge. Now that we know another organization besides NCMEC is involved, this scenario becomes more difficult, but not impossibly so; the US government just hacks, subverts, or demands that the other organization add its chosen images to its database. Unless that organization is in Russia or China, it will likely comply.

    But then I realized that there’s a far, far easier route than all this cloak-and-dagger stuff: the FBI presents Tim Cook with a National Security Letter that simply orders Apple to report whatever images the FBI wants, and makes clear that refusing to comply, or revealing the letter’s existence, would mean prison time.

    What would they be looking for? Terrorist imagery. Memes shared by known terrorist or subversive groups. Anything that would help them identify terrorists, or terrorist sympathizers, or criminals, or subversives.

  59. Could you provide a source for this claim? Even this very article says that this is Apple’s “initial” match threshold, which implies that they could change it at any time.

  60. Aah, but there’s a crucial difference: Prior to this, Apple’s response to such efforts was, “we can’t and won’t build such backdoors into our system.” Now we know that they can and will.

  61. See my rather detailed summary of Apple’s technical summary document.

    This is the definition of how Threshold Secret Sharing (TSS) works. A shared secret is split into multiple cryptographically encoded pieces such that anyone holding at least the threshold number of pieces can reconstruct the secret, but anyone holding fewer cannot.

    The threshold is arbitrary, but it must be determined at the time the secret (in this case, the per-device encryption key for the security voucher content) is generated and split into pieces.

    It would be a completely useless technology if someone without the secret (meaning any software not running on your phone) could suddenly change the threshold.
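
    As a rough illustration (this is a generic Shamir-style sketch in Python, not Apple’s actual construction, and the parameters are mine), the threshold is baked into the degree of the polynomial used to create the shares, so whoever collects them can’t lower it after the fact:

        import secrets

        P = 2**127 - 1  # a Mersenne prime; all arithmetic is done modulo P

        def make_shares(secret, k, n):
            # Split `secret` into n shares, any k of which reconstruct it. The
            # threshold k is fixed here, at generation time, as polynomial degree + 1.
            coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
            def f(x):
                return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            # Lagrange interpolation at x = 0 recovers the constant term (the secret).
            total = 0
            for xj, yj in shares:
                num, den = 1, 1
                for xm, _ in shares:
                    if xm != xj:
                        num = num * (-xm) % P
                        den = den * (xj - xm) % P
                total = (total + yj * num * pow(den, -1, P)) % P
            return total

        secret = 123456789
        shares = make_shares(secret, k=30, n=40)   # a threshold of 30, as Federighi described
        print(reconstruct(shares[:30]) == secret)  # True: 30 shares are enough
        print(reconstruct(shares[:29]) == secret)  # False: 29 shares say nothing about the secret

    Any 30 shares recover the secret; 29 or fewer are mathematically useless, and the only way to change that ratio is to throw the existing shares away and generate new ones.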

    Of course, Apple could change the threshold, then push out an iOS update that forces your phone to re-generate and re-upload every security voucher.

    The same way they could choose to ignore all this cryptographic machinery and simply upload everything to a government database without telling anyone. Or they could save themselves a lot of bad PR by changing nothing and simply granting governments secret backdoor access to all the photos already stored on their servers without end-to-end encryption.

    If you’re so concerned about how Apple could change these algorithms in the future, why aren’t you even more concerned about how they could do all this, and much, much worse, with the data that is already stored on their servers without any such protection?

    Ah, but in this case, no “backdoor” was ever necessary, because all of the photos in question are already stored on Apple’s servers without end-to-end encryption. Apple can and does grant law enforcement access to this data when presented with a warrant.

    If you’re afraid Apple will choose to (or be forced to) become evil, they can do what you’re afraid of without any of this incredibly complicated cryptography.

    It’s nothing like Apple’s prior claims of being unable to extract a device’s storage encryption key from the Secure Enclave without knowing the device’s passcode.

    Yes, there is a concern that Apple may change the software to start scanning and reporting images that aren’t uploaded to iCloud. To that I’ll just add that Apple has been doing on-device scanning for many, many years already. How do you think Photos automatically makes albums based on who is in each photo and generates “Moments” from your library?

    If you’re concerned about Apple abusing their CSAM-scanning technology, are you equally concerned about all of the other scanning that takes place on-device? Apple could just as easily subvert that code into government surveillance, but nobody has even mentioned that little bit.

    Once the discussion goes beyond “how can this software be abused” into “what could it do in the future if Apple changes it”, then you’re calling into question whether any part of any operating system can ever be trusted. That’s a completely separate discussion whose validity has not been changed by any of Apple’s recent announcements.

  62. Aren’t those albums and “Moments” generated locally, on your device? Or do you think Apple downloads all the applicable data, processes it, and then generates and pushes the albums and “Moments” back to your device?

    Any other specific examples you can provide of Apple doing “on-device scanning for many years,” where that scanning is not controlled and managed by local software, would be greatly appreciated.

  63. Apple has made this clear for many, many years: there’s no expectation of privacy when you store your data in iCloud. They should have simply started doing complete CSAM scans, but instead they’re trying to monetize this new process under the guise of privacy, leaving their commitment to solving the big-picture problem open to question while reversing their circa-2016 privacy position.

  64. Please explain how on earth Apple is “trying to monetize this new process.” Whatever you think about how Apple has developed CSAM detection, they have not mentioned charging users an extra fee for it. They will not be using it to harvest audience data to sell advertising or to sell access to users. They will not be eliminating App Tracking Transparency. But they will be introducing Private Relay in Safari, which will make Apple devices more private than many VPNs. And the new Mail Privacy Protection will eliminate tracking pixels in Mail.

    The sky is not falling. And there will be no revenue stream flowing from CSAM.

  65. I am concerned about that, but this thread is about the CSAM feature.

  66. OK you got me. I admit to some overblown rhetoric here; I have no specific knowledge of Apple’s inside machinations.

    But the timing of the announced release of the two backdoors was interesting. I saw absolutely no news coverage of consumers demanding these “features,” but there has been plenty of press about the pressure being applied to Apple by the government (Congress) in general, pretty much across the board. It doesn’t take much imagination to envision Apple upper management sitting at the table with Congressional staffers, horse-trading future “capabilities” (backdoors) in exchange for some relaxation of legislation affecting the App Store, for example. And that’s how I would expect any additional revenue stream to flow.

    If the sky is not falling, it is because enough people are concerned with Apple’s actions in this area that they’ve backed off to some extent.

  67. I am. It’s why I’m ready to turn off iCloud. It’s why I’m seriously considering switching to Linux. It’s why I’m looking into other phones.

    If you can’t understand the difference between answering government warrants and rifling through your papers so you can tip off the government that it might want a warrant for this guy, I don’t think there’s much I can say to explain it to you.

    Now that Apple has created a method to collect that type of information and report its users to the government, obviously the rest of the on-device scanning is concerning. It’s so far the opposite of privacy that every one of their other privacy mitigations becomes meaningless.

  68. Apple is refusing to do this scanning on their own servers and is instead forcing our devices to do it with our processing power, using our electricity and adding wear and tear to our batteries. That is clearly a financial benefit to Apple at your expense and mine.

  69. Precisely. The point is that by adding this feature, they’ve tipped their hand. That’s why there’s such a big uproar.

  70. This is because complaints about child porn initiated by parents, teachers, and other parties need to be reported to law enforcement, regulatory agencies, and nonprofits before action is considered. That’s why the National Center for Missing & Exploited Children, government agencies, and similar bodies exist. The powers that be need to be involved before journalists can cover a story, or the journalists could face libel action.

    https://www.firstcoastnews.com/article/news/crime/terry-parker-high-school-teacher-arrested-for-distributing-child-porn/77-606107432

    https://netsanity.net/apple-ios-12-parental-controls-getting-better/

    https://www.govinfo.gov/content/pkg/CHRG-111shrg66284/html/CHRG-111shrg66284.htm

  71. Given that Apple has said it is delaying the launch of these technologies, there’s no point in continuing to debate what may or may not have been true of the earlier announcement.

Join the discussion in the TidBITS Discourse forum
