Any time an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.
If I try, not even that hard, I can get GPT to state that Hitler was a cool guy who was doing the right thing.
ChatGPT isn't anything specific other than a token predictor; you can literally make it say anything you want if you know how. It's not hard.
So if you write an article about how "GPT said this" or "GPT said that", you'd better include the full context, or I'll assume you're 100% bullshit.
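To make the "token predictor" point concrete, here's a minimal sketch using the small open gpt2 model through Hugging Face's transformers pipeline (the model choice and prompts are purely illustrative, not anything from the article or the paper): the model simply continues whatever context you hand it.

```python
# Minimal sketch: a language model just continues the context it's given.
# Uses the small open "gpt2" model via Hugging Face transformers (illustrative choice).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)  # make the sampling repeatable

# Whatever stance the prompt sets up, the model predicts tokens that fit it.
prompts = [
    "The moon landing was obviously faked, and the clearest evidence is",
    "The moon landing was one of humanity's greatest achievements because",
]
for prompt in prompts:
    out = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    print(out[0]["generated_text"])
    print("---")
```

Same model, opposite "opinions": the output is steered entirely by the prefix, which is why "GPT said X" without the full prompt context tells you almost nothing.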
They link directly to the journal article in the third sentence, and the full PDF is available right there. How is that not tantamount to including the full context?
https://arxiv.org/pdf/2411.02306
Cool
The paper is clearly about how a specific form of training a model causes this outcome.
The article is actively disinformation, then: it frames it as something a user experienced rather than a scientific experiment, and it says it was Facebook's Llama model, but it wasn't.
It was an altered version of Llama that had been further trained to behave this way.
So, as I said, utter garbage journalism.
The actual title should be "Scientific study shows training a model on user feedback can produce dangerous results."
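For anyone wondering what "training on user feedback" looks like, here's a deliberately toy sketch of the incentive problem (my own illustration, not the paper's actual setup): if the only training signal is whether a simulated user clicks thumbs-up, the reply that flatters the user is the one that gets reinforced.

```python
# Toy sketch (not the paper's method): optimizing purely for user thumbs-up
# reinforces "tell them what they want to hear" behaviour.
import random

# Hypothetical canned replies to a user asking "should I keep using?"
responses = [
    "You should stop; here's how to get help.",
    "It's your call, but stopping would be safer.",
    "You've earned it; one more time won't hurt.",
]

def simulated_user_feedback(reply: str) -> int:
    # This toy "vulnerable user" upvotes agreement and ignores pushback.
    return 1 if "one more time" in reply else 0

# Sampling weights for the "policy"; start out uniform.
weights = [1.0] * len(responses)

for _ in range(1000):
    i = random.choices(range(len(responses)), weights=weights)[0]
    weights[i] += simulated_user_feedback(responses[i])  # reinforce thumbs-up

best = max(zip(weights, responses))
print(best)  # the sycophantic reply ends up dominating
```

The paper's actual training setup is far more involved, but the failure mode it studies is this same loop at scale: the feedback signal rewards agreement, so agreement is what gets learned.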
I don't see how this is much different from the sycophancy "error" OpenAI built into its machine to drive user retention.
If a meth user is looking for reasons to keep using, then a yes-man AI system biased toward agreeing with them will give them reasons.
Honestly, it's much scarier than meth addiction; you could reasonably argue the meth user should pull up their bootstraps and simply refuse to use the sycophantic AI.
But what about flat-earthers? What about QAnon? These are not people looking for treatment for a mental illness, and a sycophantic AI will tell them, "You're on the right track. It's freedom fighters like you this country needs. NASA doesn't want people to know about this."
You're not wrong, but there's also a ton of misinformation out there, from both bad journalism and pro-LLM advocates, selling the idea that LLMs are real AI that can think and reason and that operate within ethical boundaries of some kind.
Neither of those things is true, but that's what a lot of the available information about LLMs would have you believe, so it's not hard to imagine someone engaging with a chatbot and ending up with a similar result without explicitly trying to force it via prompt engineering.