this post was submitted on 28 Nov 2025
88 points (100.0% liked)

Europe


News and information from Europe 🇪🇺

(Current banner: La Mancha, Spain. Feel free to post submissions for banner images.)

Rules (2024-08-30)

  1. This is an English-language community. Comments should be in English. Posts can link to non-English news sources, provided a full-text translation is included in the post description. Automated translations are fine, as long as they don't overly distort the content.
  2. No links to misinformation or commercial advertising. When you post outdated/historic articles, add the year of publication to the post title. Infographics must include a source and a year of creation; if possible, also provide a link to the source.
  3. Be kind to each other, and argue in good faith. Don't post direct insults or disrespectful, condescending comments. Don't troll or incite hatred. Don't look for novel argumentation strategies at Wikipedia's List of fallacies.
  4. No bigotry, sexism, racism, antisemitism, islamophobia, dehumanization of minorities, or glorification of National Socialism. We follow German law; don't question the statehood of Israel.
  5. Be the signal, not the noise: Strive to post insightful comments. Add "/s" when you're being sarcastic (and don't use it to break rule no. 3).
  6. If you link to paywalled information, please also provide a link to a freely available archived version. Alternatively, try to find a different source.
  7. Light-hearted content, memes, and posts about your European everyday belong in other communities.
  8. Don't evade bans. If we notice ban evasion, that will result in a permanent ban for all the accounts we can associate with you.
  9. No posts linking to speculative reporting about ongoing events with unclear backgrounds. Please wait at least 12 hours. (E.g., do not post breathless reporting on an ongoing terror attack.)
  10. Always provide context with posts: Don't post uncontextualized images or videos, and don't start discussions without giving some context first.

(This list may get expanded as necessary.)

Posts that link to the following sources will be removed

Unless they're the only sources, please also avoid The Sun, Daily Mail, any "thinktank"-type organization, and non-Lemmy social media (incl. Substack). Don't link to Twitter directly; use xcancel.com instead. For Reddit, use old.reddit.com

(Lists may get expanded as necessary.)

Ban lengths, etc.

We will use some leeway to decide whether to remove a comment.

If need be, there are also bans: 3 days for lighter offenses, 7 or 14 days for more serious offenses, and permanent bans for people who show no willingness to participate productively. If the reason for a ban is obvious, we may not write to you about it specifically.

If you want to protest a removal or ban, feel free to write privately to the primary mod account @EuroMod@feddit.org

founded 2 years ago

While welcoming voluntary CSAM scanning, scientists warn that some aspects of the revised bill "still bring high risks to society without clear benefits for children."

[โ€“] simon@slrpnk.net 2 points 3 weeks ago (1 children)

Is there any clarity about what the future with chat control will look like? As in, what exactly will apps need to implement?

This part about self-evaluation confuses me:

Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to "implement mitigating measures to counter that risk," the Council notes.

I assume all chat apps would have to take measures, since generic data can be sent through them, including CSAM. Or could this quote be interpreted otherwise? I wonder what exactly is meant by voluntary then.

Does this "mitigating measure" in practice mean sending a hash of each image sent through the messenger to some service built by Google or Apple for comparison against known CSAM? After all, building a database of hashes to compare against is only realistically possible for the largest corporations. Or would the actual image itself have to leave the device, since it could be argued that some remote AI could identify any CSAM, even material not yet in any database? Perhaps a locally running AI model could do a decent enough job that nothing has to leave the device during the evaluation stage.
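For what it's worth, the hash-matching idea can be sketched in a few lines. This is purely illustrative: the blocklist and image bytes are made up, and SHA-256 stands in for the perceptual hashes (such as Microsoft's PhotoDNA or Meta's PDQ) that real deployments use, because a cryptographic hash changes completely if even one byte of the image changes:

```python
import hashlib

# Hypothetical blocklist of known-bad image hashes. Real systems use
# perceptual hashes that survive re-encoding and resizing; SHA-256 is
# used here only to keep the sketch self-contained.
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def scan_image(image_bytes: bytes) -> bool:
    """Return True if the image's hash matches an entry in the blocklist."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

print(scan_image(b"example-known-image-bytes"))   # True: exact copy matches
print(scan_image(b"example-known-image-byteZ"))   # False: one byte changed
```

The second call failing is exactly why exact-hash matching is too brittle for this purpose, and why the debate centers on perceptual hashing or on-device classifiers instead.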

But then again, there will always be false positives, where an innocent person's image would be uploaded to... the service provider (like Signal) for review? So you could never be sure that your communication stays private, since the risk of false positives is always there. Regardless of what the solution is, the user would have to give up full ownership of their device, since this chat-control service could always decide to take control of the device and upload your communication somewhere.

[โ€“] Kissaki@feddit.org 2 points 3 weeks ago

I assume all chat apps would have to take measures, since generic data can be sent through them, including CSAM. Or could this quote be interpreted otherwise? I wonder what exactly is meant by voluntary then.

Maybe you have a messenger within your company. Only employees get access, so there's no risk of minors being involved. Even if chats are private, account association would still exist if one of the two communicating parties reports an issue or a crime. The threat model is low in this case, and beyond being able to handle reported situations, you don't need to do anything.

Lemmy is free for anyone to sign up to, and has a messaging system. So you will have to think about and assess sign-up guardrails, requirements, and verification, and the risks that follow. Then you have to think about reporting and moderation, and how you can handle them. What kinds of situations and risks are involved? What does due diligence look like in terms of preparation? In terms of monitoring and response?

When weighing privacy against risks, you may conclude that "client-side scanning of images" is not warranted. Or you may deem it worthwhile or necessary, or you may not think much about it and just do it anyway to cover your bases, legally or in terms of publicity. That's the "voluntary" part.

You can use it, or you can decide not to, as long as you have assessed the risks and reasonably prepared for them.