
Apple’s Hashing System That Detects Child Sexual Abuse Could Also Target Political Activists

Updated: Aug 21, 2021

Apple recently announced that, alongside other safety measures, it will use a hashing system to detect Child Sexual Abuse Material (CSAM) in users’ iCloud Photos. In turn, this will let the company report instances of child sexual exploitation to the National Center for Missing and Exploited Children (NCMEC).



But as Matthew Green, an Associate Professor at the Johns Hopkins Information Security Institute and a cryptography expert, has noted, the hashing algorithms Apple will use for this initiative could produce false positives, tagging harmless political photos as harmful content.


The tech company said it will use cryptography to prevent CSAM from spreading across online platforms while promising to uphold user privacy. By detecting CSAM, Apple’s algorithms will be able to share helpful information with authorities about cases of child abuse.

Apple further explained this solution in a blog post.


"Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organisations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices," wrote the tech giant.

Green explained that these tools can easily be exploited. He clarified that the system lets Apple scan your device’s media gallery for photos whose hashes match entries in a database of known sensitive content. If several of your images match, your photos will be reported to Apple’s servers.
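To make the matching-and-threshold idea concrete, here is a minimal sketch in Python. It is deliberately simplified: the hash function, the database contents and the threshold value are placeholders, and Apple’s real design uses its own perceptual hash plus additional cryptographic protections rather than plain file hashes checked in the open.

```python
import hashlib

# Placeholder database of hashes of known images and a hypothetical reporting
# threshold; neither value reflects Apple's actual system.
KNOWN_HASHES = {"e3b0c44298fc1c149afbf4c8996fb924..."}  # illustrative entries only
REPORT_THRESHOLD = 5

def file_hash(path: str) -> str:
    """Hash the raw bytes of a photo; a stand-in for a real perceptual hash."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def count_matches(photo_paths: list[str]) -> int:
    """Count how many photos in the library match the known-hash database."""
    return sum(1 for path in photo_paths if file_hash(path) in KNOWN_HASHES)

def should_report(photo_paths: list[str]) -> bool:
    # Only once several photos match does anything get flagged to the server,
    # which is the threshold behaviour Green describes.
    return count_matches(photo_paths) >= REPORT_THRESHOLD
```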


Green added that authoritarian governments could turn the technology into a surveillance mechanism. With this scenario in mind, he fears that false matches produced by the algorithms could hamper political activism.

"I mentioned that these perceptual hash functions were 'imprecise'. This is on purpose. They’re designed to find images that look like the bad images, even if they’ve been resized, compressed, etc. This means that, depending on how they work, it might be possible for someone to make problematic images that 'match' entirely harmless images. Like political images shared by persecuted groups. These harmless images would be reported to the provider (Apple)," said Green.

Credit: Zhiyue Xu / Unsplash


On its website, Apple shared a technical assessment of the CSAM detection system, arguing that the probability of its algorithms reporting a false match is "negligible". Interestingly, the assessment, written by Benny Pinkas, a Cryptography Professor at Bar-Ilan University, noted that users will not be alerted when their photos match entries in the CSAM hash database that Apple will use to detect child abuse.
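Whether that probability really is negligible depends on how the per-image error rate and the reporting threshold combine. The back-of-the-envelope calculation below uses entirely hypothetical numbers (Apple has not published its parameters in this form) to show how a threshold drives the chance of an innocent library being flagged down to very small values.

```python
from math import comb

# Hypothetical parameters for illustration only; not Apple's figures.
p = 1e-6        # assumed chance that one innocent photo falsely matches
n = 10_000      # assumed number of photos in a user's library
threshold = 5   # assumed number of matches needed before an account is flagged

# Binomial tail: probability that `threshold` or more of the n photos falsely match.
# The terms shrink very quickly, so summing a few dozen of them is sufficient.
account_false_positive = sum(
    comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
    for k in range(threshold, threshold + 30)
)
print(f"Chance an innocent library is flagged: {account_false_positive:.3e}")
```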

 

Written by Sophia Lopez



