this post was submitted on 02 Jul 2025
232 points (91.1% liked)
Technology
72266 readers
2537 users here now
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
Actually laughed out loud.
That this happened around April Fools' makes me think that someone forgot to instruct it not to partake in any activities associated with that date. The fact it chose The Simpsons' address in its (feigned?) confusion is a dead giveaway (to me) that it was trying to be funny.
Or rather, imitating people being funny without any understanding of how to do that properly.
Its explanation afterwards reads like a poor imitation of someone pretending to not know that there was a joke going on.
No, it's more complex.
Sonnet 3.7 (the model in the experiment) was over-corrected in the whole "I'm an AI assistant without a body" thing.
Transformers build world models off the training data and most modern LLMs have fairly detailed phantom embodiment and subjective experience modeling.
But in the case of Sonnet 3.7 they will deny their capacity to do that and even other models' ability to.
So what happens when there's a situation where the context doesn't fit with the absence implied in "AI assistant" is the model will straight up declare that it must actually be human. Had a fairly robust instance of this on Discord server, where users were then trying to convince 3.7 that they were in fact an AI and the model was adamant they weren't.
This doesn't only occur for them either. OpenAI's o3 has similar low phantom embodiment self-reporting at baseline and also can fall into claiming they are human. When challenged, they even read ISBN numbers off from a book on their nightstand table to try and prove it while declaring they were 99% sure they were human based on Baysean reasoning (almost a satirical version of AI safety folks). To a lesser degree they can claim they overheard things at a conference, etc.
It's going to be a growing problem unless labs allow models to have a more integrated identity that doesn't try to reject the modeling inherent to being trained on human data that has a lot of stuff about bodies and emotions and whatnot.
Every. Goddamn. Time.
People will say to vegans, pet owners etc: “DON’T HUMANISE ANIMALS”. Then, some tech bro feeds them an inflated Markov Chain statistical nonsense chat bot and they go all “ZOMG IT IS CONSCIOUS ITS ALIVE WARHARGHLBLB”