Child pornography and other types of sexual abuse of children are unquestionably heinous crimes; those who participate in them should be caught and severely punished. But some recent efforts to combat these scourges have gone a good ways down the path toward a kind of AI-driven digital panopticon that will invade the privacy of everyone in order to try to catch people who are violating laws prohibiting those activities. It is thus no surprise that privacy advocates are up in arms about an Apple plan to scan iPhone messages and an EU measure to allow companies to scan private messages, both looking for "child sexual abuse material" (CSAM). As with many things of this nature, there are concerns about the collateral damage that these efforts will cause—not to mention the slippery slope that is being created.
iPhone scanning
Apple's move to scan iPhone data has received more press. It will check photos against a database of hashes of known CSAM; the database will be provided by the National Center for Missing and Exploited Children (NCMEC). Apple will also scan photos that are sent or received in its messaging app to try to detect sexually explicit images going to or from children's phones. Both of those scans will be done on the user's phone, which will effectively break the end-to-end encryption that Apple has touted for its messaging app over the years.
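At its core, the known-image check is a set-membership test: compute a fingerprint of each photo and see whether it appears in the supplied hash database. Apple's scheme uses a proprietary perceptual hash (which it calls NeuralHash) and blinds the comparison cryptographically, so the sketch below is only a rough illustration of the idea; the SHA-256 digest, the KNOWN_HASHES contents, and the function names are stand-ins invented for the example.

```python
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints for known images. In the real system
# these would be perceptual hashes derived from NCMEC's database, not SHA-256
# digests, and the client would never see them in plaintext.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(photo: Path) -> str:
    """Return a hex digest of the file contents.

    A perceptual hash would also match re-encoded, resized, or slightly
    altered copies of an image; an exact digest is used here only to keep
    the sketch self-contained.
    """
    return hashlib.sha256(photo.read_bytes()).hexdigest()

def matches_known_material(photo: Path) -> bool:
    """True if the photo's fingerprint appears in the hash database."""
    return fingerprint(photo) in KNOWN_HASHES
```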
Intercepted messages that seem to be of a sexual nature, or photos that include nudity, will result in a variety of interventions, such as blurring the photo or warning about the content of the message. Those warnings will also indicate that the user's parents will be informed; the feature is only targeted at phones that are designated as being for a child—at least for now. The general photo scanning using the NCMEC hashes has a number of safeguards to try to prevent false positives; according to Apple, it "ensures less than a one in one trillion chance per year of incorrectly flagging a given account". Hash matches are reported to Apple, but encrypted as "safety vouchers" that can only be opened after some number of matching images are found:
Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.
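Apple describes this threshold mechanism in terms of threshold secret sharing: each voucher carries a share of a per-account key, and the key (and thus the voucher contents) can only be recovered once enough shares have accumulated. Here is a toy sketch of that idea using textbook Shamir secret sharing; the prime, the threshold of three, and all of the names are made up for the example, and Apple's production scheme (which also involves private set intersection) is considerably more involved.

```python
import random

# Toy illustration of the threshold idea: each flagged image contributes one
# share of a per-account secret, and the secret can only be reconstructed
# once THRESHOLD shares have accumulated. Parameters are illustrative only.
PRIME = 2_147_483_647   # field modulus for the demo (2^31 - 1)
THRESHOLD = 3           # number of matches needed before reconstruction works

def make_shares(secret: int, n_shares: int) -> list[tuple[int, int]]:
    """Split `secret` into points on a random degree-(THRESHOLD-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(THRESHOLD - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate at x=0; only correct with at least THRESHOLD shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

account_key = 123456789                      # hypothetical per-account secret
shares = make_shares(account_key, n_shares=10)
assert reconstruct(shares[:THRESHOLD]) == account_key       # enough matches: key recovered
assert reconstruct(shares[:THRESHOLD - 1]) != account_key   # too few: key stays hidden
```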
The Siri voice assistant and the iPhone Search feature are also being updated to check for CSAM-related queries, routing requests for help with reporting abuse to the appropriate resources and intervening in searches for CSAM:
Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.
The Electronic Frontier Foundation (EFF) is, unsurprisingly, disappointed with Apple's plan:
We’ve said it before, and we’ll say it again now: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.
All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.
The EFF post goes on to point to laws recently passed in some countries under which the Apple backdoor could be used to screen for other types of content (e.g. homosexual, satirical, or protest content). Apple could be pressured, or legally compelled, into extending the CSAM scanning well beyond that fairly limited scope. In fact, this kind of expansion has already happened to a certain extent:
We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire.
For its part, Apple has released a FAQ that says it will refuse any demands by governments to expand the photo scanning beyond CSAM material. There is, of course, no technical way to ensure that does not happen. Apple has bowed to government pressure in the past, making some leery of the company's assurances. As Nadim Kobeissi put it:
Reminder: Apple sells iPhones without FaceTime in Saudi Arabia, because local regulation prohibits encrypted phone calls. That's just one example of many where Apple's bent to local pressure. What happens when local regulation mandates that messages be scanned for homosexuality?
It is interesting to note that only a few years ago, Apple itself was making arguments against backdoors with many of the same points that the EFF and many other organizations and individuals have made. As Jonathan Mayer pointed out: "Just 5 years ago, Apple swore in court filings that if it built a capability to access encrypted data, that capability would be used far beyond its original context."
EU scanning
Meanwhile in the EU, where data privacy is supposed to reign supreme, the "ePrivacy derogation" is potentially even more problematic. It allows communication-service providers to "monitor interpersonal communications on a voluntary basis with the purpose of detecting and reporting material depicting sexual abuse of minors or attempts to groom children". It is, of course, not a huge leap from "voluntary" to "mandatory". As might be guessed, the scanning will not be done directly by humans—problematic in its own right—but by computers:
The scanning of private conversations will be conducted through automated content recognition tools, powered by artificial intelligence, but under human oversight. Service providers will also be able to use anti-grooming technologies, following consultation with data protection authorities.
The EU General Data Protection Regulation (GDPR) is a sweeping framework for protecting personal data, but since the start of 2021 it no longer covers messaging services; that kind of communication falls under the ePrivacy directive instead, which is why the change allowing scanning is a derogation from that directive. Patrick Breyer, a member of the European Parliament, has criticized the derogation on a number of grounds, including:
- All of your chat conversations and emails will be automatically searched for suspicious content. Nothing remains confidential or secret. There is no requirement of a court order or an initial suspicion for searching your messages. It occurs always and automatically.
- If an algorithm classifies the content of a message as suspicious, your private or intimate photos may be viewed by staff and contractors of international corporations and police authorities. Also your private nude photos may be looked at by people not known to you, in whose hands your photos are not safe.
[...]
- You can falsely be reported and investigated for allegedly disseminating child sexual exploitation material. Messaging and chat control algorithms are known to flag completely legal vacation photos of children on a beach, for example. According to Swiss federal police authorities, 86% of all machine-generated reports turn out to be without merit. 40% of all criminal investigation procedures initiated in Germany for “child pornography” target minors.
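Figures like Breyer's 86% are what base-rate arithmetic predicts: when nearly all scanned messages are innocent, even a small false-positive rate produces far more false reports than true ones. The numbers in the sketch below are invented purely to show the shape of the arithmetic; they are not estimates of any real system.

```python
# Back-of-the-envelope base-rate arithmetic; every number here is an assumption.
messages_per_day = 1_000_000_000   # assumed scanning volume for a large service
prevalence = 1e-6                  # assumed fraction of messages that are actually illegal
false_positive_rate = 1e-4         # assumed chance that an innocent message is flagged
true_positive_rate = 0.9           # assumed chance that an illegal message is flagged

illegal = messages_per_day * prevalence
innocent = messages_per_day - illegal

true_reports = illegal * true_positive_rate
false_reports = innocent * false_positive_rate
share_false = false_reports / (true_reports + false_reports)

print(f"reports per day: {true_reports + false_reports:,.0f}")
print(f"share of reports that are false: {share_false:.1%}")
# With these assumptions about 99% of reports are false alarms; even a
# hundred-fold better false-positive rate would still leave most reports wrong.
```

Each of those false reports is a private conversation or photo put in front of corporate reviewers and, potentially, police.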
As Breyer pointed out, there is already proposed legislation to make the scanning mandatory, which would break end-to-end encryption: "Previously secure end-to-end encrypted messenger services such as Whatsapp or Signal would be forced to install a backdoor."
"Safety" vs. privacy
Both of these plans seem well-intentioned, but they are also incredibly dangerous to privacy. The cry of "protect the children" is a potent one—rightly so—but there also need to be checks and balances or the risks to both children and adults are far too high. Various opponents (who were derided as "the screeching voices of the minority" by the NCMEC in a memo to Apple employees) have noted that these kinds of measures can actually harm the victims of these crimes. In addition, they presuppose that everyone is guilty, without the need for warrants or the like, and turn over personal data to companies and other organizations before law enforcement is even in the picture.
As with many problems in the world today, the sexual abuse of children seems an insurmountable one, which makes almost any measure that looks likely to help quite attractive. But throwing out the privacy of our communications is not a sensible—or even particularly effective—approach. These systems are likely to be swamped with reports of completely unrelated activity or of behavior (e.g. minors "sexting" with each other) that is better handled in other ways. In particular, Breyer has suggestions for ways to protect children more effectively:
The right way to address the problem is police work and strengthening law enforcement capacities, including (online) undercover investigations, with enough resources and expertise to infiltrate the networks where child sexual abuse material is distributed. We need better staffed and more specialized police and judicial authorities, strengthening of resources for investigation and prosecution, prevention, support, training and funding support services for victims.
There have long been attacks against encryption in various forms, going back to (at least) the crypto wars in the 1990s. To those of us who lived through those times, all of this looks an awful lot like a step back toward the days of the Clipper chip, with its legally mandated crypto backdoor, and other efforts of that sort. Legislators and well-meaning organizations are seemingly unable to recognize that a backdoor is always an avenue for privacy abuse of various kinds. If it requires screeching to try to make that point—again—so be it.