this post was submitted on 02 Jul 2025
39 points (95.3% liked)

World News


Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

top 14 comments
[–] BeigeAgenda@lemmy.ca 3 points 1 day ago

Isn't it too easy for the current chatbots/LLMs to lie about everything?

Train it on garbage or in the wrong way, and it will agree with anything you want it to.

I asked DeepSeek what to visit nearby and asked for some URLs, and it hallucinated both the URLs and the places. Guess it wasn't trained to know anything about my local area.
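That particular failure can be caught deterministically, without a second model: actually request each URL the chatbot returns and flag the ones that don't resolve. A minimal sketch using only Python's standard library (the function names and timeout are illustrative, not from any particular project):

```python
# Sketch: flag hallucinated URLs by actually requesting them.
# Standard library only; names here are illustrative.

import urllib.error
import urllib.request


def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers an HTTP HEAD with a status below 400."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        # Network failure, bad scheme, or malformed URL: treat as unresolved.
        return False


def flag_dead_links(urls: list[str]) -> list[str]:
    """Return the subset of URLs that do not resolve."""
    return [u for u in urls if not url_resolves(u)]
```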

[–] venusaur@lemmy.world 4 points 1 day ago (2 children)

There should be a series of AI agents in place whenever a GPT is used. The agents would take in the query and review the output before sending it off to the user.
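A minimal sketch of what that pipeline could look like, where `call_model` is a hypothetical stand-in for whatever LLM API the deployment uses (no specific vendor implied); the prompts, retry budget, and PASS convention are likewise just illustrative:

```python
# Minimal sketch of a generate-then-review agent pipeline.
# `call_model` is a hypothetical stand-in for an LLM API call;
# prompts, retry count, and the PASS convention are illustrative.

GENERATOR_SYSTEM = "Answer the user's question accurately and cite real sources."
CHECKER_SYSTEM = (
    "You are an adversarial reviewer. Your only job is to find factual "
    "errors, fabricated citations, or unsafe claims in the draft. "
    "Reply PASS if you find none; otherwise list every problem."
)


def call_model(system: str, prompt: str) -> str:
    """Placeholder: send system + prompt to an LLM and return its reply."""
    raise NotImplementedError("wire this to a real model provider")


def answer_with_review(query: str, max_revisions: int = 2) -> str:
    draft = call_model(GENERATOR_SYSTEM, query)
    for _ in range(max_revisions):
        review = call_model(CHECKER_SYSTEM, f"Question: {query}\nDraft: {draft}")
        if review.strip().upper().startswith("PASS"):
            return draft
        # Feed the objections back so the generator can revise its draft.
        draft = call_model(
            GENERATOR_SYSTEM,
            f"Question: {query}\nPrevious draft: {draft}\n"
            f"Reviewer objections: {review}\nRevise the answer.",
        )
    return draft  # best effort after the retry budget; still not guaranteed correct
```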

[–] vrighter@discuss.tchncs.de 4 points 1 day ago (2 children)

what makes the checker models any more accurate?

[–] venusaur@lemmy.world 2 points 1 day ago (1 children)

The checker models aren’t trying to give you a correct answer with confidence. Their purpose is to find an incorrect answer. They’ll both do their task with confidence.

[–] vrighter@discuss.tchncs.de 1 points 1 day ago (1 children)

the first one was confident. But wrong. The second one could be just as confident and just as wrong.

[–] venusaur@lemmy.world 2 points 20 hours ago (1 children)

Sure but they’re doing opposite tasks. You’re absolutely right that they could be wrong sometimes. So are people. Over time it gets better, especially with more regulation and smarter models.

[–] vrighter@discuss.tchncs.de 1 points 13 hours ago (1 children)

opposite or not, they are both tasks that the fixed-matrix-multiplications can utterly fail at. It's not a regulation thing. It's a math thing: this cannot possibly work.

If you could get the checker to be correct all of the time, then you could just do that with the model it's "checking", because it is literally the same thing, with the same failure modes and the same lack of any real authority in anything it spits out.

[–] venusaur@lemmy.world 2 points 7 hours ago* (last edited 7 hours ago)

That’s not how it works though. It would be great if these AI models were deterministic, but you can get different answers to the same question at any given time. Given different inputs and different goals, the agents likely wouldn’t fail on the same task when given proper instruction.

The main point is that it’s not going to be correct all the time. And neither is a human.

The regulation comes in when you’re dealing with sensitive information, like health diagnoses. There needs to be some logic in place to stop the models from being so confident with wrong answers that could hurt people.

Realistically, neither of us know what’s gonna work until we try it. Theoretically, verification agents would work.

[–] perestroika@slrpnk.net 2 points 1 day ago (1 children)

Possibly through reverse motivation: the training goal of such an agent would not be nice, smooth output, but shooting down misinformation.

But I have serious doubts about whether all of that is feasible, given the computational cost of running large language models.

[–] vrighter@discuss.tchncs.de 2 points 1 day ago

how does that stop the checker model from "hallucinating" a "yep, this is fine" when it should have said "nah, this is wrong"?

[–] madlian@lemmy.cafe 2 points 1 day ago (2 children)

Who verifies the AI agent decisions?

[–] venusaur@lemmy.world 1 points 1 day ago

The user. You could have the output include the “conversation” between the agents and validate the decisions. Not perfect, but better. People aren’t perfect either.
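A sketch of that idea, reusing the same hypothetical `call_model` stand-in as the earlier sketch: collect the inter-agent exchange and hand it back with the answer so the user can audit the decision.

```python
# Sketch: return the generator/checker exchange alongside the answer
# so the user can audit it. `call_model` is the same hypothetical
# LLM stand-in as in the earlier sketch.

from dataclasses import dataclass, field


def call_model(system: str, prompt: str) -> str:
    raise NotImplementedError("wire this to a real model provider")


@dataclass
class ReviewedAnswer:
    answer: str
    transcript: list[str] = field(default_factory=list)


def answer_with_transcript(query: str) -> ReviewedAnswer:
    out = ReviewedAnswer(answer="")
    draft = call_model("Answer accurately and cite real sources.", query)
    out.transcript.append(f"GENERATOR: {draft}")
    review = call_model(
        "Adversarially review the draft; reply PASS or list every problem.",
        f"Question: {query}\nDraft: {draft}",
    )
    out.transcript.append(f"CHECKER: {review}")
    out.answer = draft
    return out  # a UI would render both the answer and the transcript
```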

[–] truxnell@aussie.zone 2 points 1 day ago (1 children)
[–] brendansimms@lemmy.world 3 points 1 day ago

it's just AI agents all the way down