
Momentum’s position on the EU proposal to prevent and combat child sexual abuse

Protecting children from sexual abuse is an urgent priority. However, the proposed EU Regulation (COM(2022) 209) risks overreaching in ways that could undermine fundamental rights and create a precedent for communications scanning that later expands to other purposes.

We outline a safer, more proportionate alternative that protects children while preserving privacy, free expression, and trust in personal devices.

This position paper was authored by Prof. Christian Colombo in collaboration with the Momentum Policy team.

1. Overreach and mission creep

The proposal introduces binding detection orders that require providers to scan private communications for known and new child sexual abuse material (CSAM) and for grooming. Even as the Commission stresses that it is not imposing a general monitoring duty, the model comes very close to that line. The Commission itself frames the proposal as an exception to the long-standing prohibition on general monitoring by Member States, saying it aims to comply by narrowing scope and adding safeguards:

“The e-Commerce Directive and the DSA prohibit Member States from imposing on providers of intermediary services general obligations to monitor… The proposed Regulation aims to comply… by targeting the scope of the obligations imposed on providers at risk and by setting out a set of clear and carefully balanced rules and safeguards.” 

If the infrastructure and legal basis for proactive scanning exist, there is a real risk of gradual expansion to other categories over time. History teaches that once such technical and legal capabilities are built, pressure grows to reuse them for other harms, then for controversial content, then for dissent.

2. Turning devices into self-monitoring tools

While the text emphasises anonymity and safeguards, the practical effect for end-to-end encrypted services is client-side scanning. The Commission says reviews should be anonymous and identity revealed only if content is flagged:

“Perform any necessary review on an anonymous basis and only take steps to identify any user in case potential online child sexual abuse is detected.” 

At the same time, it allows detection to be required regardless of the technology in use, explicitly including end-to-end encryption:

“Those measures should be taken regardless of the technologies used… That includes the use of end-to-end encryption technology.” 

Taken together, these two provisions leave only one practical way to square detection with end-to-end encryption, short of weakening the encryption itself: scanning on the user’s device, before messages are encrypted. In effect, this risks conscripting personal devices into self-monitoring tools. People should be able to trust that their own devices serve them, not quietly evaluate and report their private content.
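
To make the architectural point concrete, here is a minimal sketch of an end-to-end encrypted message flow. It is an illustration under stated assumptions, not a description of any real messenger: symmetric Fernet encryption (from the third-party cryptography package) stands in for a real E2EE protocol, and scan_for_csam is a hypothetical detection hook, not an existing API. The server that relays the message sees only ciphertext, so a detection obligation that applies “regardless of the technologies used” can only be satisfied on the endpoint, before encryption.

```python
# Toy model of an end-to-end encrypted message flow, not a real protocol.
# Assumptions: symmetric Fernet stands in for E2EE key agreement and
# ratcheting; scan_for_csam is a hypothetical hook, not a real API.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # in real E2EE, derived on the endpoints
channel = Fernet(shared_key)

def scan_for_csam(plaintext: bytes) -> bool:
    """Hypothetical classifier or hash-matcher. Placeholder only."""
    return False

def send(plaintext: bytes) -> bytes:
    # Plaintext is available only here, on the sender's device, so a
    # mandated detection hook has to run at exactly this point:
    if scan_for_csam(plaintext):
        pass  # a detection order would require flagging/reporting here
    return channel.encrypt(plaintext)  # only ciphertext leaves the device

def relay(ciphertext: bytes) -> bytes:
    # The provider's server sees ciphertext only; it cannot scan without
    # breaking the encryption. Hence the term "client-side scanning".
    return ciphertext

def receive(ciphertext: bytes) -> bytes:
    # Plaintext reappears only on the recipient's device.
    return channel.decrypt(ciphertext)

assert receive(relay(send(b"hello"))) == b"hello"
```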

3. Our objectives, scope, and guiding principles

We want a system that protects children decisively and respects everyone’s private communications. To achieve both goals, our proposal follows three principles.

  1. Focus protection where risk is highest. Children deserve default-on protections that reduce exposure to abuse without surveilling the general population.
  2. Address criminal distribution without building universal scanners. Adult-to-adult sharing of child sexual abuse material is a serious crime, but it should be countered through targeted enforcement, removal at the source, and infrastructure disruption, not perpetual content checks in private messaging.
  3. Lock out mission creep by design. Capabilities that evaluate private content at scale invite repurposing against critics, opposition politicians, and investigative journalists. Our approach avoids building those capabilities altogether.

The next sections translate these principles into concrete measures for child safety and for tackling criminal distribution without general content monitoring.

4. Proposal part A: receiver-side protections for children, on by default

Make child accounts safe by default, without scanning everyone.

  • Enforce default-on protections for child accounts. Blur all images from unknown contacts until tapped. Block links to file-sharing and anonymised hosts unless a guardian whitelists them. Disable auto-download of media. Require explicit consent for joining groups and channels, with rate limits and friction for invitations by adults outside the contact list.
  • Detect solicitation only in child contexts. Apply server-side and on-device classifiers for grooming patterns only when one party is a verified child account. Use strict thresholds, human verification, and guardian notifications for high-risk events. Give minors clear, age-appropriate controls to mute, block, and report with one tap.
  • Create privacy-preserving evidence packages on the child’s device. When a high-risk interaction occurs, generate a sealed bundle containing minimal proofs of what happened, rather than raw content. Allow guardians, hotlines, and law enforcement to unlock only with multilayer consent or a judicial order. Use cryptographic aggregation across many child accounts to find repeat offenders and hotspots without revealing individual conversations.
  • Give victims a priority channel. Assign a human case manager with strict response targets to remove material at the source, generate a robust fingerprint to stop re-uploads on the same platform, and send standard de-indexing requests to major search engines. Provide a private dashboard showing actions taken, deadlines, and appeal options. Minimise exposure and data collection, allow pseudonymous contact, and offer translation and accessibility support. With consent, package evidence for law enforcement or hotlines so the victim is not retraumatised by repeated submissions.
  • Introduce privacy-preserving age verification at scale. Use zero-knowledge proofs and verifiable credentials to assert “age over” attributes without revealing date of birth or identity. Let users present short-lived, unlinkable age tokens from trusted issuers, including national e-ID wallets, schools, or licensed identity providers. Keep issuers and verifiers separate so the verifier learns only that the threshold is satisfied, and the issuer learns nothing about where or when a proof was presented. Make age gates proportional by gating high-risk features such as group discovery, media auto-download, and links from unknown contacts, not basic messaging with parents or guardians. A minimal sketch of such a token presentation follows this list.
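
Picking up the age-verification item above: the following is a minimal sketch, not a real zero-knowledge credential scheme. An Ed25519 signature (via the cryptography package) stands in for the zero-knowledge proof, and issue_age_token and verify_age_token are illustrative names, not an existing API. What it demonstrates is the separation of roles: the payload carries no name and no date of birth, verification happens offline against the issuer’s public key so the issuer never learns where a token was presented, and a fresh nonce plus a short lifetime keep presentations unlinkable.

```python
# Illustrative "age over" token flow. A simplification, not a real
# zero-knowledge proof: the signature stands in for the ZKP, and the
# function names and payload fields are assumptions for this sketch.
import json, secrets, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # held only by the issuer
issuer_pub = issuer_key.public_key()       # distributed to verifiers

def issue_age_token(age_over: int, ttl_seconds: int = 300) -> tuple[bytes, bytes]:
    """Issuer side: after checking ID out of band, sign a minimal claim.
    No name, no date of birth; a fresh nonce per token means two
    presentations cannot be linked to each other."""
    payload = json.dumps({
        "age_over": age_over,
        "exp": int(time.time()) + ttl_seconds,  # short-lived
        "nonce": secrets.token_hex(16),         # unlinkable per token
    }).encode()
    return payload, issuer_key.sign(payload)

def verify_age_token(payload: bytes, sig: bytes, threshold: int) -> bool:
    """Verifier side: checks the signature offline, so the issuer learns
    nothing about where or when the proof was presented."""
    try:
        issuer_pub.verify(sig, payload)
    except InvalidSignature:
        return False
    claim = json.loads(payload)
    return claim["age_over"] >= threshold and claim["exp"] > time.time()

payload, sig = issue_age_token(18)
assert verify_age_token(payload, sig, threshold=18)
```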

Why this is proportionate. It concentrates detection and friction where risk is highest, it avoids putting all adults under suspicion, and it preserves the essence of private communications for the general population. The Regulation itself requires detection orders to be risk-based, targeted, and strictly necessary, including limiting measures to parts of a service or specific user groups where possible. 

5. Proposal part B: tackle adult-to-adult distribution without general content monitoring

We reject scanning of adults’ private messages, including any variant of client-side checks for “known” material. Instead, disrupt distribution through targeted, rights-respecting measures that do not create always-on scanning pipelines.

  • Remove at the source, fast. Strengthen obligations on hosting providers and public platforms to remove or disable access to identified CSAM swiftly, with clear orders, standardised templates, redress, and tight deadlines. This acts at the point of availability, without inspecting private messages.
  • Block at the network edge when removal fails. As a last resort for non-EU hosts that refuse cooperation, allow narrowly scoped ISP-level blocking based on a centrally curated list of specific URLs verified by the EU Centre, with strong oversight and redress. This targets access to known illegal locations, not private communications. A sketch of the matching logic follows this list.
  • Targeted investigations, not mass scanning. Resource law enforcement to pursue court-authorised, case-by-case operations against suspected producers and distributors, including device searches under warrant, undercover work in distribution hubs, and cross-border coordination through Europol.
  • Frictionless human reporting with protection. Give adults clear, safe in-app reporting routes to hotlines and the EU Centre, with strong safeguards against retaliation and with transparent outcomes. The Regulation already foresees provider reporting, which can be triggered by user flags or provider awareness, without mandating general scanning. 
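
As flagged in the blocking item above, here is a minimal sketch of the URL-matching logic. The assumptions are marked: the list format and the example entry are hypothetical, normalisation is deliberately simplified, and deployment details such as how an ISP observes URLs under TLS are out of scope. The point it shows is the narrowness of the measure: exact, verified locations are blocked, not whole domains and not private traffic.

```python
# Minimal sketch of narrowly scoped URL blocking at an ISP resolver or
# proxy. Assumptions: the EU Centre distributes a verified list of exact
# URLs (format hypothetical); normalisation is simplified for the sketch
# (real matching would preserve path case, handle ports, etc.).
from urllib.parse import urlsplit

def normalise(url: str) -> str:
    parts = urlsplit(url.lower())
    return f"{parts.netloc}{parts.path.rstrip('/')}"

# Exact locations only: not whole domains, not keyword filters.
blocklist = {normalise(u) for u in [
    "https://example-noncompliant-host.example/abuse/item123",  # hypothetical entry
]}

def should_block(requested_url: str) -> bool:
    """True only for a specific listed URL; everything else on the same
    host, and all private communications, passes untouched."""
    return normalise(requested_url) in blocklist

assert should_block("https://EXAMPLE-noncompliant-host.example/abuse/item123/")
assert not should_block("https://example-noncompliant-host.example/blog")
```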

Why this is acceptable. It addresses the adult-to-adult distribution problem by taking down availability, blocking stubborn sources, and pursuing suspects with due process. It does not introduce a universal content checkpoint inside private messaging.

6. What we explicitly reject

  • No client-side scanning of adults’ messages, no classifier-based detection of new CSAM on adult accounts, no hash-matching in private encrypted channels. These would create a general content evaluation pipeline that is easily repurposed.
  • No broad detection orders that functionally cover all messages in an encrypted service. If a detection order affects a large fraction of ordinary messages, it fails the necessity and proportionality test.

Conclusion

The EU “chat control” proposal is motivated by the right concern, but it chooses a path that risks normalising proactive surveillance of private communications. A better path exists. Protect children by hardening the receiver side and making child accounts safe by default. Tackle adult-to-adult distribution through targeted enforcement, rapid removal at source, narrow network blocking when removal fails, and strong oversight. Reject any technique that evaluates adults’ private messages. These measures protect children while respecting the privacy and freedoms that underpin a democratic society.


