[email protected] 1 point 2 months ago

According to GrapheneOS, a security-oriented Android Open Source Project (AOSP)-based distro: "The app doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine-learning models that are usable by applications to classify content as spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users."

So it does not scan your photos; it's a library that apps can use. I think the point is to let apps reject or tag a picture without ever sending it to a server for scanning, taking the load off those servers and shifting it to the client side.

Like, say you're trying to post a story on Instagram: the app asks SafetyCore whether the picture contains porn/violence/something they don't want, SafetyCore says yes or no, and Instagram accepts and tags the picture, or refuses it, accordingly.
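To make that flow concrete, here's a minimal Kotlin sketch. SafetyCore's actual API isn't documented in the quote above, so every name here (`OnDeviceClassifier`, `ContentLabel`, `classify`) is hypothetical; the only point is that classification happens locally and the server only ever sees content that passed the check.

```kotlin
// Hypothetical sketch only: SafetyCore's real API is not public, so the
// names below are invented to illustrate the client-side flow.

enum class ContentLabel { SAFE, SPAM, SCAM, MALWARE, EXPLICIT }

interface OnDeviceClassifier {
    // Runs a local ML model; the image bytes never leave the device.
    fun classify(image: ByteArray): ContentLabel
}

fun tryPostStory(image: ByteArray, classifier: OnDeviceClassifier): Boolean {
    return when (classifier.classify(image)) {
        ContentLabel.SAFE -> {
            uploadToServer(image)   // only content that passed the local check is uploaded
            true
        }
        else -> {
            showWarningToUser()     // rejected on-device; nothing is sent anywhere
            false
        }
    }
}

// Stubs so the sketch is self-contained.
fun uploadToServer(image: ByteArray) { /* network call */ }
fun showWarningToUser() { /* UI feedback */ }
```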

The danger here IMO is less about privacy and more about censorship: every time something is pushed in the name of fighting child porn, it ends up being used to control activists and political opponents. People could be blocked from sharing evidence of police violence, for example.

No problem if you're using Lemmy: you can use any front-end, so you can pick an app that won't use SafetyCore.