Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technology, and tech policy.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Any kind of personal attacks are expressly forbidden. If you can't argue your position without attacking a person's character, you already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but violates the lemmy.zip instance rules, those rules will be enforced.
Companion communities
[email protected]
[email protected]
Icon attribution | Banner attribution
If someone is interested in moderating this community, message @[email protected].
No, no it won't
Care to elaborate about why you think so?
AI researchers often don't understand basic aspects of how bias works in a cultural sense. They get lost in the technical power and specifications of the machines they build and completely ignore the fact that they are working on a problem where you cannot eliminate bias or subjectivity from your filters; you can only be lucid and clear about what those biases are and try to minimize them in every way you can. Basically you get a bunch of people who think they are REALLY smart badly reinventing something that a whole category of experts has spent decades studying and grappling with.
This kind of narrative and reasoning is VERY VERY VERY hard to stop once it gains momentum, and it can lead to a quick degradation of civil rights in a society, especially for younger people.
AI is crap and it is always always always always always going to be worse than putting actual human beings who are professionals into positions where they can stop cyber grooming, bullying or harassment.
Honestly, the fact that people are looking to AI to solve a problem like this shows how little people actually give a fuck about solving it, *shrugs*. If kids are experiencing rampant toxic shit online, they need more adults they can trust spending time with them and talking to them; they don't need more computer-automated crap surveilling everything they fucking do and randomly dragnetting people with algorithms that are constantly wrong. The problem is that society has decided children aren't worth enough adult time for kids to be able to discuss and share these things easily, or that kids only have a handful of adults in their lives they can actually trust and don't feel comfortable talking to any of that handful.
I consider an article like this, in so many words, a society admitting that it is divesting from its children and just trying to figure out how to do that in a cost-effective way. (Not taking a dig at Norway specifically here; I am from the fucking US, after all.)