this post was submitted on 07 Apr 2025

tumblr

[–] [email protected] 1 points 2 weeks ago

Hmm, very interesting info, thanks. Research on biases and data poisoning is very important, but why assume this can't be overcome in the future? We could train advanced AI models specifically to understand the reasons behind biases and to filter or flag them.
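To make the "filter or flag" idea concrete, here is a deliberately tiny sketch in Python. The word list and weights are made-up assumptions for illustration; a real system would use a trained classifier, not a keyword lookup, but the interface (score text, flag it past a threshold) would look similar.

```python
# Toy sketch: flagging emotionally loaded language in a headline.
# The term list and weights below are illustrative assumptions,
# not a real bias-detection model.

LOADED_TERMS = {
    "shocking": 2, "disaster": 2, "radical": 1,
    "slams": 1, "destroys": 2, "outrageous": 2,
}

def loadedness_score(headline: str) -> int:
    """Sum the weights of loaded terms found in the headline."""
    words = headline.lower().split()
    return sum(LOADED_TERMS.get(w.strip(".,!?"), 0) for w in words)

def flag(headline: str, threshold: int = 2) -> bool:
    """Mark the headline for review if its score meets the threshold."""
    return loadedness_score(headline) >= threshold

print(flag("Senator slams shocking new policy"))  # flagged: True
print(flag("Senate passes new policy"))           # not flagged: False
```

The point is only that "marking" biased framing is a mechanical operation once you have a scoring model; the hard research problem is making that scorer trustworthy.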

So my hope is that it IS technically possible to develop an AI model that can both reason better and analyze news sources, journalists, their affiliations, their motivations, and their historical actions, and that can be tested or audited for bias (in the simplest case, a kind of litmus test). Such a tool could be used instead of something like Google, and integrated into the browser (like Firefox), to inform users about the propaganda surrounding topics and within articles. I don't see anything that precludes this possibility or this goal.
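One simple form such a "litmus test" could take is a symmetry audit: feed the model paired inputs that differ only in the named group, and require the scores to stay within a tolerance. The sketch below uses a stub scoring function (`rate_credibility` is a hypothetical placeholder, not a real model or API) just to show the audit's shape.

```python
# Sketch of a symmetry "litmus test" for bias auditing: a model's score
# for otherwise-identical texts should not swing just because the named
# group changes. `rate_credibility` is a stand-in stub, not a real model.

def rate_credibility(text: str) -> float:
    """Stub model: deterministic toy score so the audit is reproducible."""
    return 0.5 + 0.01 * (len(text) % 3)

def symmetry_audit(template: str, groups: list[str],
                   tolerance: float = 0.05) -> bool:
    """Pass if scores across all group substitutions stay within tolerance."""
    scores = [rate_credibility(template.format(group=g)) for g in groups]
    return max(scores) - min(scores) <= tolerance

template = "{group} lawmakers proposed a new budget amendment today."
print(symmetry_audit(template, ["Democratic", "Republican"]))  # True
```

Auditing against a battery of such templates wouldn't prove a model is unbiased, but failing one would be concrete, reproducible evidence that it isn't.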

The other thing is that we can't expect a top-down approach to work; the tools need to be "democratic". An advanced, open-source, somewhat audited AI model against bias and manipulation could be run locally on your own solar-powered PC. I don't know how much it costs to take something like DeepSeek and train a new model on updated datasets, but it can't be astronomical. It only takes one somewhat trustworthy project to do this. That is a much more bottom-up approach.
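For a rough sense of scale, here is a back-of-envelope estimate. Every number below (GPU rental rate, cluster size, run length) is an assumption picked for illustration, not a quote; the point is only that adapter-style fine-tuning of an existing open model is orders of magnitude cheaper than training one from scratch.

```python
# Back-of-envelope: cost of a LoRA-style fine-tuning run of a small open
# model on rented GPUs. Every number here is an assumption.

gpu_hourly_rate = 2.0  # assumed USD/hour for one rented A100-class GPU
gpus = 4               # assumed small rented cluster
hours = 24             # assumed one day of fine-tuning

total_cost = gpu_hourly_rate * gpus * hours
print(f"~${total_cost:.0f} for one fine-tuning run")  # ~$192
```

Even if these assumptions are off by an order of magnitude, the cost stays in the range a community project could crowdfund.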

Those who have and seek power have no interest in limiting misinformation. The response to the misinformation from Trump and MAGA seems to have led to more pressure on media conglomerates to stay in lockstep and censor dissent (the propaganda model). So expecting those in power to make this a priority is futile. Those who only seek power are statistically more likely to achieve it, and they are already using AI against us and will continue to.

Of course I don't have all the answers, and my argument could be put crudely as "the only thing that can stop a bad AI with a gun is a good AI with a gun". But I see "democratizing" AI as a crucial step.