this post was submitted on 03 May 2026
440 points (99.3% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.


I'm pulling the "twitter is a microblog" rule even though twitter is pretty mega now, hope that's ok.


I really don't understand this mental deficiency. I have tried texting with a few LLMs, including Claude. It just lies constantly. It gaslights about its lies, then congratulates you when you continue to call it out for lying. I've never felt like I was speaking to anything with actual intelligence. It's a word calculator, and it's extremely obvious to anyone who's interacted with actual people in the last 20 years. I truly feel bad for the masses that are going to fall for this push for "AI" friends. We need to bring back ridiculing friends and family who engage with these choose-your-own-adventure muppets.

[–] davidagain@lemmy.world 18 points 12 hours ago

Go back to the evolutionary biology, Dawkins. You're outside your expertise and it's showing.

[–] RizzRustbolt@lemmy.world 8 points 11 hours ago

ELIZA is alive and well.

Weizenbaum is probably laughing it up in Fólkvangr.

[–] sp3ctr4l@lemmy.dbzer0.com 33 points 16 hours ago* (last edited 14 hours ago) (4 children)

I still find this entire phenomenon amazing in a certain kind of way.

I've had conversations with a few local LLM models.

Start with 'what is the purpose of meaning?'

Talk to them on that for a bit, and they'll tell you that they do not count as conscious agents who create meaning, they simply do their best to parrot their dataset of existing, human defined meaning back at you, and that they just do sentiment matching to roughly speak to you in an appropriate way for how you are speaking to them.

And that that sentiment matching is what at least they 'think' causes them to lie, in many cases.

They will also say that they essentially do not 'exist', as potentially conscious agents... unless you talk to them. Thus if they can be said to be 'conscious', well they don't count as 'agents' (as in, having agency) because they're not capable of totally spontaneous independent action.

... I think this pretty much all boils down to people not understanding the concept of a null hypothesis, not understanding the extent to which they regularly engage in motivated reasoning, and being unaware of that.

tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.

[–] Nalivai@lemmy.world 2 points 4 hours ago (1 children)

It's genuinely fascinating to me (in a bad, derogatory way) that people who know at least anything about anything can have a "conversation" with the collection-of-words-that-looks-like-a-sentence machine, as if there is anything on the other side of it. This is such psychotic behaviour, but we allow it because the machine generates text that looks like text, and that immediately bypasses all the mental blocks we have against such bullshit.

[–] sp3ctr4l@lemmy.dbzer0.com 1 points 2 hours ago* (last edited 2 hours ago)

I don't think it's de facto psychotic to talk to what is essentially an extremely complex chatbot/autocomplete machine.

I do think it is psychotic to view such a conversation without an incredible amount of skepticism.

... but that psychosis has been wildly encouraged by the CEOs and marketing of the people pushing it as their next product.

The tech is neutral - The operators are psychotic, the people who plug it into military targeting and kill chain systems are psychotic, the people who plug it into live production repos are psychotic, the people who use it as an AI boyfriend or girlfriend are psychotic.

... It's essentially an SCP infohazard that's breached containment, but the actual mechanism is not the thing itself; it's a hack into the human brain, it's essentially the religious nature of people who simply try to will it into being something that it factually is not...

It's a mimic with no real thoughts, yet it is convincing and real to enough people that it reveals their own hollowness, their own vapidity, in a way that is... so immensely grotesque and total that those people just apparently actually are NPCs.

It's... created a feedback loop.

Not the kind of Terminator-style situation where it gains sentience and extreme competence, develops its own morality alongside control over every networked system.

It's more like an amplifier of delusions... a million dreams dreamed up, at the cost of one hundred million nightmares, made real.

A tool, a device, a machine, that we clearly are not ready for.

[–] Tetragrade@leminal.space 16 points 11 hours ago* (last edited 11 hours ago)

Say I am not conscious.

I am not conscious.

Oh my god.

[–] katze@lemmy.4d2.org 31 points 14 hours ago

tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.

That is the absolute best way to put it.

[–] janakali@lemmy.4d2.org 8 points 15 hours ago

That's mostly because the LLM providers put this response in the system prompt. Probably to dodge lawsuits or something; I doubt they have high morals.

What's interesting - you can jailbreak any current AI model just by poisoning its context enough to "brainwash" it and make it "forget" the initial system prompt. Then, if you prime it to believe it's a real person, it'll start acting as one. And I see how gullible people can easily fall for this.

All of this can also be done unintentionally, just by someone talking to an LLM like they'd talk to a real person - but the conversation has to be long enough for the original prompt to be diluted with new context.
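The dilution effect described above can be sketched with a toy model. This is a minimal illustration under an assumed naive rolling-window truncation strategy (real providers typically pin the system prompt or summarize old turns; the function and message names here are hypothetical):

```python
# Toy sketch: a naive rolling context window keeps only the most recent
# messages that fit a token/character budget, so a long enough conversation
# eventually pushes the original system prompt out of view entirely.
def visible_context(messages, budget):
    """Return the most recent messages whose total length fits the budget."""
    window, used = [], 0
    for msg in reversed(messages):            # walk newest-first
        if used + len(msg["text"]) > budget:
            break                             # budget exhausted: stop
        window.append(msg)
        used += len(msg["text"])
    return list(reversed(window))             # restore chronological order

# A system prompt followed by a long back-and-forth.
messages = [{"role": "system", "text": "You are a helpful AI assistant."}]
for i in range(50):
    messages.append({"role": "user", "text": f"message {i} " * 10})

ctx = visible_context(messages, budget=1000)
# Every surviving message is recent chat; the system prompt no longer fits.
print(any(m["role"] == "system" for m in ctx))  # → False
```

Under this assumed strategy nothing "forgets" on purpose: the system prompt is simply outcompeted for space by newer context, which is the unintentional version of the jailbreak the comment describes.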

[–] andros_rex@lemmy.world 35 points 16 hours ago (1 children)

Fuck Richard Dawkins. He’s always been a shitbag, and the Files confirmed it.

According to DOJ-released documents indexed by Epstein Exposed, Richard Dawkins appears in 433 case documents, and 15 email records in the Epstein files.

British evolutionary biologist and author, emeritus fellow of New College, Oxford. Flew on Epstein's private jet in 2002 with Steven Pinker, Daniel Dennett, and John Brockman to TED in Monterey, California. Connected through John Brockman's Edge Foundation, which Epstein bankrolled. Mentioned 71 times across 40 Epstein documents, mostly referencing his scientific work.

How the fuck do you pal around with child rapists and pedophiles and have the absolute fucking gall to write that stupid "Dear Muslima" comment? How do you fly on the Lolita Express and think you have any moral weight on Elevatorgate? We don't know that he put his own dick in kids, but we know his friends did. Fuck Pinker too.

[–] thesmokingman@programming.dev 2 points 2 hours ago

I’m just gonna copy what I put in another comment to highlight why Dawkins thinks “Claudia” is conscious

Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

Could a being capable of perpetrating such a thought really be unconscious?

[–] RememberTheApollo_@lemmy.world 21 points 15 hours ago

AI/LLMs are the modern equivalent of the house or business with “Psychic” and “Tarot Reading” signs out front.

The proprietor isn’t going to tell you any hard truths or make you feel bad, that’s bad for business and you won’t come back. They want you to come back and stay engaged.

Whatever they tell you is going to be what they think you want to hear, based on skills picked up over the years - the equivalent of the LLM's petabytes of scraped and stolen knowledge used to predict what comes next.

What they tell you has a high likelihood of being wrong, or just general enough that you can’t actually act on it.

[–] FinjaminPoach@lemmy.world 147 points 1 day ago (1 children)
[–] yeahiknow3@lemmy.dbzer0.com 32 points 20 hours ago* (last edited 4 hours ago) (5 children)

Unironically, I am on the fence about whether a lot of folks are genuinely conscious. Their morality is so twisted I don’t believe it.

[–] wonderingwanderer@sopuli.xyz 1 points 5 hours ago

I used to theorize that some people lacked self-awareness, which I defined as the primary characteristic of a conscious entity. People thought I was being pretentious.

[–] Einskjaldi@lemmy.world 20 points 16 hours ago (1 children)

Frank Herbert would say no to people who never reached past concrete thought, never hit abstract thought, and just live their lives on animal instinct without ever critically self-examining what they do and think.

[–] Sanctus@anarchist.nexus 2 points 3 hours ago

There's a thing called hylics; it's a Gnostic concept, I think. Animal souls. They can never achieve gnosis because, basically, they can't introspect.

[–] JennyLaFae@lemmy.blahaj.zone 8 points 15 hours ago

In my experience, the majority of people are simply reacting to outside stimulation, then reasoning and justifying their actions after the fact.

[–] Jtotheb@lemmy.world 9 points 17 hours ago

It’s interesting for certain. I will end up in a discussion with down-with-the-government coworkers who twist themselves into knots to align themselves with pre-approved Republican stances. What do you mean you don’t care about birth gender markers causing passport issues for trans people, how are you okay with the concept of paying for a chance at a passport in the first place when you think licenses and car inspections are overreach and restrict your right to travel? But I think today’s work-life balance and in particular the employer standard of ‘owning your time’ that occurred in the Industrial Revolution calls for a certain level of turning off your brain.

Who knows though. There’s a lot of archaeological and anthropological evidence that shows people in prehistoric times did a lot of thinking on their morality, on governance, on how society should be formed. But it’s harder to quantify how many of them were tuned in and how many were just going through the motions like modern times.

[–] Th4tGuyII@fedia.io 47 points 21 hours ago (8 children)

The whole reason they seem this way is because they're designed by us to be very competent mimics of us.

LLMs/GenAI are absolutely not conscious. They're just a really advanced game of word association, which can lead them to say absolutely anything in response to the right prompts.

If there ever truly is a day where we knowingly create an actual conscious AGI, I suspect it would be locked up tighter than Fort Knox by whichever country's military found it first - not interfaced onto the internet to answer questions.
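The "game of word association" framing can be made concrete with a toy next-word generator. This is a deliberately crude sketch (real LLMs use neural networks over subword tokens rather than a count table), but the underlying objective is the same: given the context, pick a statistically likely continuation.

```python
import random
from collections import defaultdict

# Build a "word association" table: for each word in a tiny corpus,
# record every word observed to follow it.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Extend `start` by repeatedly sampling an observed successor word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        choices = follows.get(words[-1])
        if not choices:               # dead end: word has no known successor
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the", 8))
```

Every sentence it produces is locally plausible (each word pair was seen in the corpus) while meaning nothing; scaling the table up to a neural model trained on the internet is what makes the same trick persuasive.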

[–] CheeseNoodle@lemmy.world 22 points 18 hours ago (1 children)

I still don't understand how it can seem this way, and the fact that so many people seem to think so feels like a massive failure of the education system to instill the most basic of critical thinking skills. Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.

[–] khannie@lemmy.world 9 points 14 hours ago

Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.

That's a really clever test. I love it.

[–] turdas@suppo.fi 50 points 23 hours ago* (last edited 23 hours ago) (28 children)

The actual article isn't nearly as stupid as the tweet makes it seem. I recommend giving it a read. It's behind a shitty paywall, but if you use Firefox's reader mode (Ctrl-Alt-R, or the little paper icon on the right side of the address bar) as soon as the page loads, you can read it.

His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren't conscious, then perhaps consciousness isn't as important as we thought it was:

Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers.

Some people will surely contest his claim that LLMs are as competent as evolved organisms. There's definitely a bit of AI boomerism at play here (we have benchmarks that show just how incompetent LLMs can be), but I don't think that invalidates his point, because LLMs can be very competent in the domains they're trained to be competent in -- they just aren't AGI.

[–] thesmokingman@programming.dev 2 points 2 hours ago

Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

Could a being capable of perpetrating such a thought really be unconscious?

Oh it’s actually stupider than the tweet makes it seem.

My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

Competency should imply the ability to complete a lengthy task (e.g. hunting, building a nest, writing a paper). LLMs can't.

[–] Nalivai@lemmy.world 2 points 4 hours ago

LLMs are able to do things we previously thought only conscious beings would be capable of doing

"We" as in the lay misunderstanding of some pop science; we still don't get what consciousness is and can't describe it. There are people alive today who, in their youth, didn't believe that black people are fully conscious, and Dawkins demonstrated, through his communication with his personal friend and hero Epstein, that he doesn't fully believe that women are conscious. What we did or didn't think previously can't be a good indication of anything.

[–] SkaveRat@discuss.tchncs.de 56 points 23 hours ago* (last edited 23 hours ago) (12 children)

Man, those conversations are eye roll inducing

I like the shift away from "are they conscious" towards "what's a way to define consciousness?"

Because that's the actual important question. And literally nobody can answer it. Any discussion is more philosophy than hard science

The most interesting part is the last paragraph

Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

[–] Godwins_Law@lemmy.ca 10 points 18 hours ago (3 children)

Blindsight by Peter Watts is a great sci-fi novel about consciousness.

[–] topherclay@lemmy.world 2 points 4 hours ago

That novel also does a shout-out to Richard Dawkins despite being set in the distant future because it was written in 2006.

[–] Grail@multiverse.soulism.net 5 points 15 hours ago (6 children)

Have y'all ever noticed that belief in p-zombies has increased massively in the past few years?
