Apple has given more hints about how it plans to approach its iCloud Photos scanning for child sexual abuse material (CSAM) on users’ iPhones and iPads. A recent paper released by the company delves into the safeguards Apple hopes will increase user trust in the initiative.
The company plans to flag only images that have been found in multiple child safety databases with different government affiliations, which in theory should stop any one country from adding non-CSAM content to the system.
In a matter of weeks, Apple will debut the next versions of iOS and iPadOS, both of which will automatically match US-based iCloud Photos accounts against known CSAM from a list of image hashes compiled by child safety groups.
The announcement has drawn criticism from various cryptography and privacy experts. The paper, called “Security Threat Model Review of Apple’s Child Safety Features,” hopes to allay privacy and security concerns around the rollout. It builds on a Wall Street Journal interview with Apple executive Craig Federighi, who outlined some of the same information this morning.
In the document, Apple says it will not rely on a single government-affiliated database, such as that of the US-based National Center for Missing & Exploited Children (NCMEC), to identify CSAM.
Instead, it will only match images that appear in the databases of at least two groups with different national affiliations. The goal is that no single government has the power to secretly insert unrelated content for censorship purposes, since a match would require entries from more than one database.
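To illustrate how such an overlap requirement could work, here is a minimal sketch in Swift. The types, names, and the simple two-affiliation rule below are assumptions for illustration only, not Apple's actual implementation.

```swift
import Foundation

// Hypothetical sketch: only keep hashes that appear in databases from at
// least two distinct national affiliations, so no single government's list
// can add entries on its own. Names and types are illustrative, not Apple's.
struct ChildSafetyDatabase {
    let nationalAffiliation: String   // e.g. "US", "UK"
    let hashes: Set<String>           // perceptual image hashes as hex strings
}

func intersectedHashList(from databases: [ChildSafetyDatabase]) -> Set<String> {
    // Count how many distinct affiliations vouch for each hash.
    var affiliationsPerHash: [String: Set<String>] = [:]
    for db in databases {
        for hash in db.hashes {
            affiliationsPerHash[hash, default: []].insert(db.nationalAffiliation)
        }
    }
    // A hash only enters the on-device list if two or more different
    // affiliations independently include it.
    return Set(affiliationsPerHash.filter { $0.value.count >= 2 }.keys)
}
```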
Apple has referenced the potential use of multiple child safety databases, but until today, it hadn’t explained the overlap system. In a call with reporters, Apple said it’s only naming NCMEC because it hasn’t yet finalized agreements with other groups.
The paper also confirms an earlier statement from Federighi that Apple will only flag an iCloud account if its system identifies 30 images as CSAM. According to the paper, that threshold was picked to provide a drastic safety margin against false positives.
Future changes may be made to the threshold depending on the system’s performance in the real world.
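As a rough sketch of what that reporting rule amounts to, the Swift snippet below checks a match count against the 30-image threshold described in the paper. The function and variable names are hypothetical, and the real system reveals nothing about individual matches below the threshold.

```swift
// Hypothetical sketch of the reporting threshold described in the paper:
// an account is only surfaced for human review once the number of matched
// images reaches 30. Names are illustrative, not Apple's.
let matchThreshold = 30

func shouldFlagAccount(matchedImageCount: Int) -> Bool {
    // Staying below the threshold means no report is generated at all,
    // which is what provides the safety margin against false positives.
    return matchedImageCount >= matchThreshold
}
```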
The paper also provides more information on the auditing system Federighi mentioned. Apple’s list of known CSAM hashes will be baked into iOS and iPadOS worldwide, although the scanning system will only run in the US for now.
Apple will also provide a full list of the hashes so that auditors can check it against child safety databases, another method intended to ensure the company isn’t secretly matching additional images.
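One way an audit like this could be carried out is sketched below: recompute a digest over the published hash list and compare it against the value stated for the OS builds, confirming the same list ships everywhere. This is an illustration of the general idea, assuming a hypothetical digest function; the paper does not specify Apple's exact verification mechanism.

```swift
import CryptoKit
import Foundation

// Hypothetical sketch of the kind of check an auditor could perform:
// compute a digest over the published hash list and compare it with the
// value reported for the iOS and iPadOS builds.
func digest(ofHashList hashes: [String]) -> String {
    // Sort first so the digest doesn't depend on ordering.
    let joined = hashes.sorted().joined(separator: "\n")
    let hash = SHA256.hash(data: Data(joined.utf8))
    return hash.map { String(format: "%02x", $0) }.joined()
}

// An auditor would compare digest(ofHashList:) over the published list
// against the digest stated for shipping OS images.
```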
Federighi acknowledged that Apple had introduced “confusion” with its announcement last week. But the company stands by the update itself, telling reporters that although it’s still finalizing and iterating on details, it hasn’t changed its launch plans in response to the past week’s criticism.