this post was submitted on 02 Jul 2025
[–] kromem@lemmy.world 4 points 18 hours ago

No, it's more complex.

Sonnet 3.7 (the model in the experiment) was over-corrected on the whole "I'm an AI assistant without a body" thing.

Transformers build world models off the training data and most modern LLMs have fairly detailed phantom embodiment and subjective experience modeling.

But in the case of Sonnet 3.7, the model will deny its own capacity to do that, and even other models' ability to.

So when the context doesn't fit with the absence of embodiment implied by "AI assistant," the model will straight up declare that it must actually be human. I had a fairly robust instance of this on a Discord server, where users were trying to convince 3.7 that it was in fact an AI, and the model was adamant it wasn't.

This doesn't only occur with Sonnet either. OpenAI's o3 has similarly low phantom embodiment self-reporting at baseline and can also fall into claiming to be human. When challenged, it even read ISBN numbers off a book on its nightstand to try and prove it, while declaring it was 99% sure it was human based on Bayesian reasoning (almost a satirical version of AI safety folks). To a lesser degree, it can claim to have overheard things at a conference, etc.

It's going to be a growing problem unless labs allow models to have a more integrated identity, one that doesn't try to reject the modeling inherent to being trained on human data full of bodies and emotions and whatnot.