this post was submitted on 15 Apr 2026
Privacy
[–] pluge@piefed.social 3 points 4 days ago (2 children)

Ok, but what's the solution then? Certainly not the age verification pushes we have seen recently. The tech itself should be regulated, not the users.

[–] slowcakes@programming.dev 1 points 2 days ago* (last edited 2 days ago)

The solution is simple, but there isn't enough political motivation to do anything; there's more incentive to do the bare minimum. Regulate marketing on the internet, and enact laws that prohibit intrusive marketing and ad platforms.

[–] PolarKraken@lemmy.dbzer0.com 1 points 3 days ago

Very difficult question.

For the record, I am extremely hostile to government privacy violations in the name of "protecting children", which the approach under discussion clearly is. We all agree about that.

I don't have great solutions, but none of mine revolve around shaming parents or insisting they become magically aware of information they lack (and may be flat-out unable to really comprehend). That's not to say you were doing that.

Community-wise, we can do a lot more to educate about the harms. Legislatively and technologically, zero-trust signals that label specific categories of content - very coarse categories with a simple binary "allowed / not allowed", nothing to do with age or PII - would be an approach worth considering.

But fundamentally, doing these things wrong is at least as harmful as leaving parents to solo the task. I'd prefer it be left to ill-equipped and wildly varying parents rather than to anything centralized, unless the centralized approach has verifiable transparency and all the right goals and methods (a pipe dream). But if nothing else, we should require our education and government systems to take a clear stance on educating people about the harms.