Apple’s Hashing System That Detects Child Sexual Abuse Could Also Target Political Activists
Updated: Aug 21, 2021
Apple recently announced that, along with other safety measures, it will use a hashing system to detect Child Sexual Abuse Material (CSAM) in users’ iCloud Photos. In turn, this will let the company report instances of child sexual exploitation to the National Center for Missing and Exploited Children (NCMEC).
But Matthew Green, an Associate Professor at the Johns Hopkins Information Security Institute and a cryptography expert, noted that the hashing algorithms Apple will use for this initiative could produce false positives, tagging harmless political photos as harmful content.
The tech company said it will use cryptography to prevent CSAM from spreading across online platforms, all while promising to uphold user privacy. By detecting CSAM, Apple’s algorithms will be able to share helpful information with authorities about cases of child abuse.
Apple further explained this solution in a blog post.
"Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organisations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices," wrote the tech giant.
Green explained that these tools can easily be exploited. The system, he said, lets Apple scan your device’s media gallery for photos whose hashes match entries in a database meant to identify sensitive content. If several of your images produce matches, your pictures are reported to Apple’s servers.
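To make the flow Green describes concrete, here is a minimal Python sketch of hash matching with an account-level threshold. It is purely illustrative: every name and the threshold value are hypothetical, a cryptographic digest stands in for Apple's perceptual hash, and none of the cryptographic blinding of the database or of match results that Apple describes is modelled.

```python
import hashlib
from typing import Iterable

# Toy stand-in for the on-device matching flow described above. All names are
# illustrative; a cryptographic digest replaces the perceptual hash, and no
# blinding or private matching protocol is modelled here.

MATCH_THRESHOLD = 3  # hypothetical value; the article does not specify one


def photo_fingerprint(photo_bytes: bytes) -> str:
    """Stand-in for a perceptual hash of a photo."""
    return hashlib.sha256(photo_bytes).hexdigest()


def count_matches(photos: Iterable[bytes], known_hashes: set) -> int:
    """Count library photos whose fingerprint appears in the on-device database."""
    return sum(1 for p in photos if photo_fingerprint(p) in known_hashes)


def should_flag_account(photos: Iterable[bytes], known_hashes: set) -> bool:
    """Per Green's description, only accounts with several matches get reported."""
    return count_matches(photos, known_hashes) >= MATCH_THRESHOLD


if __name__ == "__main__":
    library = [b"holiday photo bytes", b"cat photo bytes"]
    database = {"0" * 64}  # placeholder entry, not a real hash
    print(should_flag_account(library, database))  # False for this toy library
```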
Green added that authoritarian governments could turn the technology into a surveillance mechanism. With this scenario in mind, he fears that false matches produced by the algorithms could hamper political activism.
"I mentioned that these perceptual hash functions were 'imprecise'. This is on purpose. They’re designed to find images that look like the bad images, even if they’ve been resized, compressed, etc. This means that, depending on how they work, it might be possible for someone to make problematic images that 'match' entirely harmless images. Like political images shared by persecuted groups. These harmless images would be reported to the provider (Apple)," said Green.
Credit: Zhiyue Xu / Unsplash
On its website, Apple shared a technical assessment of the CSAM detection system, arguing that the probability of its algorithms reporting a false match is "negligible". Interestingly, the paper, written by Benny Pinkas, a Cryptography Professor at Bar-Ilan University, noted that users will not be alerted when their photos match entries in the CSAM database, the set of known image hashes Apple will use to detect child abuse.
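The "negligible" claim rests on combining a small per-image false-match rate with a multi-match threshold before anything is reported. The paper's actual figures are not reproduced here, but the shape of the argument can be sketched with hypothetical numbers: assuming each photo false-matches independently with probability p, the chance that an account with n photos crosses a threshold of t matches drops off sharply.

```python
from math import comb


def account_false_match_prob(p: float, n: int, t: int) -> float:
    """Probability that at least t of n photos falsely match, assuming each
    photo independently false-matches with probability p (a simplification)."""
    # P(X >= t) = 1 - P(X < t) for a Binomial(n, p) number of false matches.
    return 1.0 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t))


# Hypothetical inputs, purely for illustration (not the paper's figures):
# a one-in-a-million per-image error rate, a 10,000-photo library, threshold 3.
print(account_false_match_prob(p=1e-6, n=10_000, t=3))  # on the order of 1e-7
```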
Written by Sophia Lopez