this post was submitted on 15 Apr 2026

Privacy

[–] PolarKraken@lemmy.dbzer0.com 4 points 4 days ago (1 children)

This commenter is pointing out that, definitionally, most parents lack what they need to mount an effective defense, or even to understand that one is needed, because of how the deck is stacked. It isn't random uninvolved people making the tech addictive and harmful (in contrast to parents as a group) - it's roughly the people best on the planet at making those things damaging who are doing so.

The commenter is not inviting government overreach, but lamenting that every parent is being asked to defend against this most pernicious force, and it's unrealistic to expect them to succeed. As we clearly see, they don't succeed - they lose! The state of kids' mental development in the US, for example, is in absolute shambles.

Doesn't seem very controversial at all, kind of just an obvious observation tbh.

[–] pluge@piefed.social 3 points 4 days ago (2 children)

Ok, but what's the solution then? Certainly not the age verification pushes we have seen recently. The tech itself should be regulated, not the users.

[–] slowcakes@programming.dev 1 points 2 days ago* (last edited 2 days ago)

The solution is simple, but there's not enough political motivation to do anything; the incentive is to do the bare minimum. Regulate marketing on the internet, and enact laws that prohibit intrusive marketing and ad platforms.

[–] PolarKraken@lemmy.dbzer0.com 1 points 3 days ago

Very difficult question.

For the record, I am extremely hostile to government privacy violations in the name of "protecting children", which the approach under discussion clearly is. We all agree about that.

I don't have great solutions, but none of mine revolve around shaming parents or insisting they become magically aware of information they lack (and may be flat out unable to really comprehend). That's not to say you were doing that.

Community-wise, we can do a lot more educating about the harms. Legislatively and technologically, zero-trust indications for specific categories of content (very coarse categories with a simple binary "allowed / not allowed" - nothing to do with age or PII) would be approaches worth considering.
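To make that idea concrete, here is a minimal sketch of what such a scheme could look like: sites self-declare a coarse category (similar in spirit to the real RTA "Restricted To Adults" meta label), and the allowed/not-allowed decision is made entirely on the device against a parent-set block list. The meta tag name, category names, and functions here are all hypothetical illustrations, not an existing standard.

```python
import re

# Hypothetical coarse categories a site might declare about itself.
CATEGORIES = {"adult", "gambling", "social", "general"}

def parse_declared_categories(html: str) -> set[str]:
    """Extract hypothetical <meta name="content-category" content="..."> tags.

    The tag name is illustrative; the real-world analog is the RTA label.
    """
    found: set[str] = set()
    for m in re.finditer(r'<meta\s+name="content-category"\s+content="([^"]+)"', html):
        found.update(c.strip() for c in m.group(1).split(","))
    return found & CATEGORIES

def is_allowed(html: str, blocked: set[str]) -> bool:
    """Binary allowed / not-allowed decision, made locally.

    No age, identity, or other PII ever leaves the device - the only
    inputs are the page's self-declared label and a local block list.
    """
    return not (parse_declared_categories(html) & blocked)

page = '<html><meta name="content-category" content="adult"></html>'
print(is_allowed(page, blocked={"adult", "gambling"}))  # False
print(is_allowed(page, blocked={"gambling"}))           # True
```

The point of the sketch is the shape of the protocol: the verification burden sits with the publisher's declaration and the device's filter, so no central party ever learns who the user is or how old they are.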

But fundamentally, doing these things wrong is at least as harmful as leaving parents to solo the task. I'd rather it be up to ill-equipped and wildly varying parents than to anything centralized, unless the centralized approach has verifiable transparency and all the right goals and methods (a pipe dream). But if nothing else, we should require our education and government systems to take a clear stance on educating about the harms.