[–] [email protected] 113 points 1 week ago (7 children)

The number of times I've seen a question answered with "I asked ChatGPT and blah blah blah", only for the answer to be complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea.

[–] [email protected] 5 points 6 days ago (1 children)
[–] [email protected] 4 points 5 days ago

That was a great read! Thanks for padding out my RSS feed ever so slightly more!

[–] [email protected] 0 points 5 days ago (1 children)

Yeah, don't use a hallucination machine for truth about the universe. That is just asking for trouble.

Use it to give you new ideas. Be creative together. It works exceptionally well for that.

[–] [email protected] 1 points 5 days ago

be creative.

nature already has a solution for that, and it's called drugs

[–] [email protected] -5 points 6 days ago (1 children)

We're in a post-truth world where most web searches about important topics give you bullshit answers. But LLMs have already read basically all the articles and have at least the potential to make deductions and associations about them, like "this belongs to propaganda network 4335", or "the source of this claim has engaged in deception before". Something like a complex fact-checking machine.

This is sci-fi for now, because current models are an ocean wide but can't think deeply or analyze well; still, if you press GPT about something, it can give you different "perspectives". The next generations might become more useful at filtering out fake propaganda. So you might get answers that are sourced and referenced, which can also reference or dispute wrong answers / talking points and their motivations, and possibly the emotional manipulation and logical fallacies they use to deceive you.

[–] [email protected] 2 points 5 days ago (1 children)

Hey MuskAI, is this verifiable fact about Elon's corruption true?

No, that's fake news. Here's a few conspiracy blogs that prove it. Buy more Trump Coin 💰🇺🇸

[–] [email protected] -1 points 5 days ago (1 children)
[–] [email protected] 2 points 5 days ago (1 children)

Respectfully, you have no clue what you're talking about if you don't recognize that case as the exception and not the rule.

Many of these early-generation LLMs are built from the same base models or trained on the same poorly curated datasets. They're not yet built for pushing tailored propaganda.

It's trivial to bake bias into a model or put guardrails up. Look at DeepSeek's lockdown on any sensitive Chinese politics. You don't even have to be that heavy-handed; just poison the training data with a bunch of fascist sources.
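
To make that concrete, here's a crude toy sketch of poisoning by curation (my own illustration; the source names and weighting factor are made up): nothing about the model changes, a filter just quietly skews which documents reach the training set.

```python
# Toy sketch of dataset poisoning via curation; domain names and the
# 5x upweighting are invented for illustration.
FAVOURED = {"propaganda-example.net", "biased-example.org"}

def curate(documents):
    """documents: list of (text, source_domain) pairs headed for training."""
    kept = []
    for text, source in documents:
        kept.append((text, source))
        if source in FAVOURED:
            kept.extend([(text, source)] * 4)  # quietly upweight favoured sources 5x
    return kept
```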

[–] [email protected] 1 points 5 days ago (1 children)

You are arguing there is a possibility it will go that way, while I was talking about the possibility of a more advanced AI that is open source and makes verifiable arguments with sources. While the negative outcome is very important, you're practically dog-piling me to suppress a possible positive outcome.

RIGHT NOW, even without AI, the vast majority of people are simply unable to perceive reality on certain important topics, because of propaganda, polarization, profit-seeking through clickbait, and other effects. You can't trust, and you can't verify, because you ain't got the time.

My argument is that a more advanced, open-source AI could provide reliable information, because it has the capability to filter and analyze a vast ocean of data.

That potential capability might be crucial to escaping the current (non-AI) misinformation epidemic. What you are arguing is not an argument against what I'm arguing.

[–] [email protected] 1 points 5 days ago (1 children)

I apologize if my phrasing is combative; I have experience with this topic and have a knee-jerk reaction to AI being promoted as a literacy tool.

Your argument is flawed because it implicitly assumes that critical thinking can be offloaded to a tool. One of my favorite quotes on that:

The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place.

(coincidentally from an article on the topic of LLM use for propaganda)

You can't "open source" a model in a meaningful and verifiable way. Datasets are massive and, even if you had the compute to audit them, poisoning can be much more subtle than explicitly trashing the dataset.

For example, did you know you can control bias just by changing the ordering of the dataset? There's an interesting article from the same author that covers well-known poisoning vectors, and that's already a few years old.
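
If the ordering claim sounds implausible, here's a minimal toy sketch of the mechanism behind it (my own illustration, not from the article): a single pass of stochastic gradient descent over the exact same examples ends at different weights depending purely on the order they arrive in.

```python
# Single-pass SGD over identical data in two orders yields different weights;
# ordering-based poisoning exploits exactly this path-dependence.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # simple linearly separable labels

def sgd_one_pass(X, y, lr=0.5):
    w = np.zeros(2)
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-xi @ w))   # logistic prediction
        w += lr * (yi - p) * xi             # gradient step on one example
    return w

natural = np.arange(len(X))
sorted_order = np.argsort(X[:, 0])          # an "adversarial" curriculum
print("weights, natural order:", sgd_one_pass(X[natural], y[natural]))
print("weights, sorted order: ", sgd_one_pass(X[sorted_order], y[sorted_order]))
```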

These problems are baked into any AI at this scale, regardless of implementation. The idea that we can invent a way out of a misinformation hell of our own design is a mirage. The solution will always be to limit exposure and make media literacy a priority.

[–] [email protected] 1 points 5 days ago

Hmm, very interesting info, thanks. Research into biases and poisoning is very important, but why would you assume this can't be overcome in the future, by training advanced AI models specifically to understand the causes of bias and to filter or flag it?

So my hope is that it IS technically possible to develop an AI model that can both reason better and analyze news sources, journalists, their affiliations, their motivations, and their historical actions, and that can be tested or audited for bias (in the simplest case, a kind of litmus test). We could then use that instead of something like Google, integrated into the browser (like Firefox), to inform users about the propaganda around topics and in articles. I don't see anything that precludes this possibility or this goal.

The other thing is that we can't expect a top-down approach to work; the tools need to be "democratic". An advanced, open-source, somewhat audited AI model resistant to bias and manipulation could be run locally on your own solar-powered PC. I don't know how much it costs to take something like DeepSeek and train a new model on updated datasets, but it can't be astronomical. It only takes at least one somewhat trustworthy project to do this. That is much more of a bottom-up approach.
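
Just to show the "run it locally" part is already mundane, here's a minimal sketch using Hugging Face transformers. Treat the model name as a placeholder assumption for whatever open-weight model your hardware can actually hold:

```python
# Minimal local-inference sketch; the model name is a placeholder assumption,
# and a 7B model needs serious RAM (or a quantized variant).
from transformers import pipeline

generator = pipeline("text-generation", model="deepseek-ai/deepseek-llm-7b-chat")
result = generator(
    "List the claims in this article and note which ones cite a source: ...",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```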

Those who have and seek power have no interest in limiting misinformation. The response to the misinformation from Trump and MAGA seems to have put more pressure on media conglomerates to stay in lockstep and censor dissent (the propaganda model). So expecting those in power to make that a priority is futile. Those who only seek power are statistically more likely to achieve it, and they are already using AI against us.

Of course I don't have all the answers, and my argument could be put stupidly as "the only thing that can stop a bad AI with a gun is a good AI with a gun". But I see "democratizing" AI as a crucial step.

[–] [email protected] 52 points 1 week ago (1 children)

This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, they will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
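
A toy illustration of "looks like, not is" (my own sketch): even a bigram chain trained on three true sentences will happily emit fluent recombinations that are false.

```python
# Tiny bigram "language model": it only learns which word tends to follow
# which, so its output imitates the training text with no notion of truth.
import random
from collections import defaultdict

corpus = ("the capital of france is paris . the capital of spain is madrid . "
          "the capital of italy is rome .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])   # any word that ever followed this one
    out.append(word)

print(" ".join(out))  # fluent-looking; "the capital of france is rome" is a possible output
```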

[–] [email protected] 2 points 5 days ago

And facts don't care about your LLM's feelings

[–] [email protected] 17 points 1 week ago (1 children)
[–] [email protected] 4 points 6 days ago

Hey, I may be stupid and lazy, but at least I don't, uh, what were we talking about?

[–] [email protected] 16 points 1 week ago (1 children)

A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.

[–] [email protected] 7 points 1 week ago (1 children)

Why not just read the first part of a Wikipedia article if they want that, though? It's not the be-all-end-all source, but it's better than asking the same question of a machine known to make things up.

[–] [email protected] 14 points 1 week ago

Because the AI propaganda machine is not exactly advertising the limitations, and the general public sees LLMs as a beefed-up search engine. You and I know that's laughable, but they don't. And OpenAI sure doesn't want to educate people - that would cost them revenue.

[–] [email protected] 5 points 1 week ago

I don't see the point either if you're just going to copy verbatim. OP could always just ask AI themselves if that's what they wanted.