Apple’s New Child Safety Features: A Deep Dive
Last week, Apple unveiled a suite of new features designed to enhance child safety across its devices. While these features are not yet live, they are scheduled for release later this year. The core objectives – protecting minors and curbing the proliferation of Child Sexual Abuse Material (CSAM) – are universally acknowledged as vital. However, the methodologies employed by Apple have sparked some debate.
I had the opportunity to speak with Erik Neuenschwander, Apple’s Head of Privacy, regarding these forthcoming features. He provided comprehensive responses to many of the concerns raised and discussed the potential tactical and strategic challenges that may arise once the system is implemented.
The rollout encompasses three distinct, yet interconnected, systems, which have been subject to some misinterpretation in both media coverage and public understanding.
CSAM Detection in iCloud Photos
A detection mechanism, termed NeuralHash, generates identifiers for users’ photos and compares them against a database of hashes of known CSAM supplied by organizations such as the National Center for Missing and Exploited Children (NCMEC). This system aims to identify known CSAM content within iCloud Photo libraries. Unlike most cloud providers, which perform this scanning on their servers, Apple conducts the matching directly on the device.
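To make the matching step concrete, here is a minimal sketch in Python of how an on-device check of image fingerprints against a database of known hashes might be structured. The hash function, database contents, and helper names are illustrative placeholders, not Apple’s actual NeuralHash pipeline, which is a perceptual hashing model rather than a cryptographic digest.

```python
# Illustrative sketch only: a simplified hash lookup against a database of
# known identifiers. A real perceptual hash such as NeuralHash maps visually
# similar images to the same value; the plain digest below is a stand-in
# used purely to show the matching flow.

import hashlib
from typing import Set

def image_fingerprint(image_bytes: bytes) -> str:
    """Placeholder for a perceptual hash (hypothetical helper)."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_database(image_bytes: bytes, known_hashes: Set[str]) -> bool:
    """Return True if the image's fingerprint appears in the known-hash set."""
    return image_fingerprint(image_bytes) in known_hashes

# Example with made-up data:
known_hashes = {image_fingerprint(b"example-known-image")}
print(matches_known_database(b"example-known-image", known_hashes))   # True
print(matches_known_database(b"some-unrelated-photo", known_hashes))  # False
```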
Communication Safety in Messages
This feature, activated by a parent for a minor within their iCloud Family account, alerts children when an image they are about to view is detected as explicit. It also informs them that the parent will be notified.
Interventions in Siri and Search
This feature will intervene when a user attempts to search for CSAM-related terms via Siri or search, providing information about the intervention and offering relevant resources.
For further details, you can consult our related articles or Apple’s recently published FAQ.
Many individuals struggle to differentiate between the CSAM detection and Communication Safety systems, or worry about potential scrutiny of innocent photos triggering false positives. It’s crucial to understand that these are entirely separate systems. CSAM detection focuses on precise matches with content already identified as abusive imagery by established organizations. Communication Safety in Messages operates entirely on the device and does not report any data externally; it simply flags potentially explicit images to the child. This feature requires parental opt-in and transparency for both parent and child.
Apple’s Communication Safety in Messages feature. Image Credits: Apple
Questions have also been raised regarding the on-device hashing of photos to create identifiers for database comparison. While NeuralHash could be utilized for other features like faster photo search, it is currently exclusive to CSAM detection. The feature ceases to function entirely when iCloud Photos is disabled, offering an opt-out, albeit at the cost of convenience and integration with Apple’s operating systems.
This interview is the most extensive on-the-record discussion of these new features with a senior member of Apple’s privacy team. Apple appears confident in its solution, as evidenced by its willingness to provide access and ongoing briefings.
Despite concerns and resistance, Apple seems prepared to dedicate the necessary time to address them and demonstrate the effectiveness of its approach.
The following is a lightly edited transcript of the interview.
Interview Transcript
TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. There are no current regulations mandating this, but regulations are evolving in the EU and elsewhere. Is this the driving force behind these features? Why now?
Erik Neuenschwander: The timing is due to the development of technology that can effectively balance strong child safety with user privacy. We’ve been exploring this area for some time, considering existing techniques that often involve scanning entire user libraries on cloud services – something we’ve always avoided for iCloud Photos. This system doesn’t change that; it doesn’t scan through all data on the device, nor does it scan all photos in iCloud Photos. Instead, it allows us to identify accounts accumulating collections of known CSAM.
So, the development of this new CSAM detection technology is the key factor in launching these features. And Apple believes it can do so in a way that is both comfortable and beneficial for its users?
That’s precisely right. We have two equally important goals: improving child safety and preserving user privacy. We’ve successfully integrated technologies that allow us to achieve both across all three features.
Announcing Communication Safety in Messages and CSAM detection in iCloud Photos simultaneously seems to have caused confusion regarding their capabilities and goals. Was this a strategic decision? And why were they announced together, given their distinct nature?
While they are separate systems, they are interconnected with our enhanced interventions in Siri and search. Identifying existing CSAM collections is crucial, but it’s equally important to proactively address the issue. CSAM detection deals with content that has already been reported and widely shared, re-victimizing children. Communication Safety in Messages and our interventions in Siri and search aim to disrupt the cycles that lead to CSAM, intervening earlier when individuals begin exploring harmful areas or when abusers attempt to groom or exploit children. We’re striving to disrupt the entire process.
The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple
Governments and agencies worldwide are pressuring large organizations with encryption to provide access for law enforcement, often citing CSAM and terrorism as justification. Is launching this feature with on-device hash matching an attempt to preempt these requests and demonstrate that Apple can provide necessary information without compromising user privacy?
First, regarding the device matching, I want to emphasize that the system, as designed, doesn’t reveal match results to the device or even to Apple through the vouchers it creates. Apple cannot process individual vouchers; it only learns about an account when it accumulates a collection of vouchers associated with illegal CSAM images. This approach allows for detection while preserving user privacy. We are motivated by the need to enhance child safety across the digital ecosystem, and all three features represent positive steps in that direction, while leaving privacy undisturbed for those not engaged in illegal activity.
Does creating a framework for scanning and matching on-device content open the door for external law enforcement to request the addition of non-CSAM content to the database? How does this not undermine Apple’s stance on encryption and user privacy?
It doesn’t alter our position on encryption in any way. The device remains encrypted, we still don’t have the key, and the system is designed to operate on on-device data. The device-side component is actually more privacy-protective than processing data on a server. Our system involves both an on-device component, where the voucher is created, and a server-side component, where that voucher is processed across the account. The voucher generation is what enables us to avoid processing all user content on our servers, something we’ve never done for iCloud Photos.
Apple has stated it will reject requests to compromise the system by adding non-CSAM content to the database. However, Apple has previously complied with local laws, such as in China. How can we trust Apple to uphold this rejection of interference when faced with government pressure?
Firstly, this launch is initially limited to U.S. iCloud accounts, and the hypothetical scenarios often involve countries where U.S. law doesn’t apply. Even in cases of attempted system modification, several protections are built in. The hash list is integrated into the operating system, and we don’t have the ability to target updates to individual users. Furthermore, the system requires a threshold of images to be exceeded, making it ineffective for targeting single images. Finally, a manual review process ensures that any flagged account is thoroughly vetted before referral to external entities.
The FAQ states that disabling iCloud Photos disables the system. Does this mean the system stops creating hashes of your photos on the device, or is it completely inactive?
If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection relies on NeuralHash comparing against a database of known CSAM hashes within the operating system image. None of this functionality operates if iCloud Photos is disabled.
Apple has often emphasized that on-device processing preserves user privacy. However, in this case, it seems the scanning is for external use cases rather than personal use, potentially creating a ‘less trust’ scenario. Given that other cloud providers scan on their servers, why should this implementation engender more trust?
We are raising the bar compared to industry standards. Server-side algorithms processing all user photos pose a greater risk of data disclosure and are less transparent. By integrating this into our operating system, we leverage the same security properties as other features, with a single global operating system for all users. This makes it more challenging to target individual users compared to server-side processing. Furthermore, on-device processing is inherently more privacy-preserving.
We can confidently state that this system leaves privacy undisturbed for all users not involved in illegal activity, and Apple gains no additional knowledge about user cloud libraries. Instead, we create cryptographic safety vouchers, ensuring that Apple can only decrypt and learn about images from accounts collecting known CSAM hashes. This is a significant advantage over cloud processing, where every image is processed in clear text.
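As a rough illustration of the threshold behaviour described above, the sketch below models an account accumulating safety vouchers, with nothing eligible for review until the number of matching vouchers exceeds a threshold. The threshold value and data structures are hypothetical, and the sketch deliberately omits the cryptography; Apple describes using techniques such as threshold secret sharing so that vouchers below the threshold cannot be decrypted at all.

```python
# Simplified model of the account-level threshold logic. Each uploaded photo
# produces a "safety voucher"; nothing is learned about individual vouchers,
# and review only becomes possible once the count of matching vouchers
# crosses the threshold. The cryptographic enforcement is not modelled here.

from dataclasses import dataclass, field
from typing import List

MATCH_THRESHOLD = 30  # hypothetical value, chosen only for illustration

@dataclass
class Account:
    vouchers: List[bool] = field(default_factory=list)  # True = matched a known hash

    def add_voucher(self, matched: bool) -> None:
        self.vouchers.append(matched)

    def eligible_for_review(self) -> bool:
        # Only above the threshold would voucher contents become decryptable
        # and the account eligible for manual review.
        return sum(self.vouchers) > MATCH_THRESHOLD

account = Account()
for matched in [False] * 1_000 + [True] * 5:
    account.add_voucher(matched)
print(account.eligible_for_review())  # False: too few matches to reveal anything
```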
Can this CSAM detection feature remain secure if the device is physically compromised?
It’s important to acknowledge that a successful device compromise is a rare and challenging event. The protection of data on the device is paramount. If such an attack occurs, the attacker could potentially access a significant amount of user data. The idea that their primary goal would be to trigger a manual review of an account doesn’t seem logical. Even if the threshold is met, a manual review is required to confirm the presence of illegal CSAM material before any referral.
Why is there a threshold of images for reporting? Isn’t one piece of CSAM content too many?
We aim to ensure that reports to NCMEC are high-value and actionable. The threshold allows us to minimize false positives and maintain a high level of confidence in the accuracy of our reports. It allows us to achieve a false reporting rate of one in one trillion accounts per year.
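As a back-of-the-envelope illustration of why a threshold pushes the account-level false-positive rate down so steeply, the sketch below uses a Poisson approximation with a hypothetical per-image false-match probability and library size. The inputs are invented for illustration; the only figure Apple has cited is the resulting one-in-one-trillion account-level rate.

```python
# Illustrative only: how requiring multiple matches suppresses false positives,
# assuming a hypothetical per-image false-match probability. False matches are
# modelled as a Poisson process, a standard approximation for rare events.

from math import exp, factorial

def poisson_tail(k: int, lam: float, terms: int = 60) -> float:
    """P(X >= k) for X ~ Poisson(lam), summing enough upper-tail terms."""
    return sum(exp(-lam) * lam ** i / factorial(i) for i in range(k, k + terms))

p = 1e-6        # hypothetical per-image false-match probability
n = 20_000      # hypothetical number of photos in a library
lam = n * p     # expected number of false matches per account

for threshold in (1, 5, 10, 30):
    print(f"matches required: {threshold:>2}  "
          f"P(account falsely crosses threshold) ~ {poisson_tail(threshold, lam):.1e}")

# Each additional required match cuts the probability by several orders of
# magnitude, illustrating how a modest threshold can drive the account-level
# false-positive rate to vanishingly small values.
```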