this post was submitted on 31 Mar 2025
-2 points (25.0% liked)

Technology

[–] [email protected] 3 points 3 days ago (2 children)
[–] [email protected] 1 point 3 days ago (1 children)

Care to elaborate about why you think so?

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago)
  1. AI researchers often don't understand basic aspects of how bias works in a cultural sense. They get lost in the technical power and specifications of the machines they build and completely ignore that they are working on a problem where you cannot eliminate bias or subjectivity from your filters; you can only be lucid and clear about what those biases are and try to minimize them in every way you can. Basically, you get a bunch of people who think they are REALLY smart badly reinventing something a whole field of experts has spent decades studying and grappling with.

  2. This kind of narrative and reasoning is VERY hard to stop once it gains momentum, and it can lead to a rapid erosion of civil rights in a society, especially for younger people.

  3. AI is crap, and it is always going to be worse than putting actual human professionals into positions where they can stop cyber grooming, bullying, or harassment.

  4. Honestly, the fact that people are looking to AI to solve a problem like this shows how little they actually give a fuck about solving it *shrugs*. If kids are experiencing rampant toxic shit online, they need more trusted adults spending time with them and talking to them, not more computer-automated crap surveilling everything they do and randomly dragnetting people with algorithms that are constantly wrong. The real problem is that society has decided children aren't worth enough adult time for kids to be able to discuss and share these things easily, or that kids have only a handful of adults in their lives they can actually trust and don't feel comfortable talking to any of them.

I consider an article like this, in so many words, a society admitting it is divesting from its children and just trying to figure out how to do that in a cost-effective way. (Not taking a dig at Norway specifically here; I am from the fucking US, after all.)