this post was submitted on 03 May 2026
450 points (99.1% liked)

Microblog Memes

11437 readers
2718 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

RULES:

  1. Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
  2. Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
  3. You are encouraged to provide a link back to the source of your screen capture in the body of your post.
  4. Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
  5. Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
  6. Absolutely no NSFL content.
  7. Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
  8. No advertising, brand promotion, or guerrilla marketing.

founded 2 years ago

I'm pulling the "twitter is a microblog" rule even though twitter is pretty mega now, hope that's ok.

you are viewing a single comment's thread
[–] turdas@suppo.fi 51 points 1 day ago* (last edited 1 day ago) (6 children)

The actual article isn't nearly as stupid as the tweet makes it seem. I recommend giving it a read. It's behind a shitty paywall, but if you use Firefox's reader mode (Ctrl-Alt-R, or the little paper icon on the right side of the address bar) as soon as the page loads, you can read it.

His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren't conscious, then perhaps consciousness isn't as important as we thought it was:

Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers.

Some people will surely contest his claim that LLMs are as competent as evolved organisms. There's definitely a bit of AI boomerism at play here (we have benchmarks that show just how incompetent LLMs can be), but I don't think that invalidates his point, because LLMs can be very competent in the domains they're trained to be competent in -- they just aren't AGI.

[–] thesmokingman@programming.dev 3 points 4 hours ago (1 children)

Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

Could a being capable of perpetrating such a thought really be unconscious?

Oh it’s actually stupider than the tweet makes it seem.

My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

Competency should imply the ability to complete a lengthy task (e.g. hunting, building a nest, writing a paper). LLMs can’t.

[–] turdas@suppo.fi 1 points 59 minutes ago* (last edited 58 minutes ago) (1 children)

It's hardly surprising that a model optimized for replacing StackOverflow couldn't survive in the untamed wilderness. As for writing a paper... you must've missed the fact that academia is currently in a crisis precisely because LLMs are better at writing papers than most students.

By the way, the paper that the blog post you linked as a source itself links to as its source benchmarked LLMs on graph diagrams, textile patterns and 3D objects. It is not news that a language model would do poorly on visual-heavy tasks.

[–] thesmokingman@programming.dev 1 points 14 minutes ago (1 children)

Sorry, I assumed you would have actually read the DELEGATE-52 study linked instead of just the abstract. For “a model optimized for replacing StackOverflow” that is “better at writing papers than most students” LLMs sure did pretty bad at those tasks over multiple rounds.

[–] turdas@suppo.fi 1 points 4 minutes ago

As the chart on page 7 of the paper shows, LLMs are good at exactly the kind of tasks you'd expect (producing and manipulating language), and bad at exactly the kind of tasks you'd expect (doing almost anything else). All this paper shows is that (1) they aren't AGI, and (2) as a consequence of not being AGI they aren't good unsupervised.

Why do you lie like this?

[–] Nalivai@lemmy.world 3 points 6 hours ago (1 children)

LLMs are able to do things we previously thought only conscious beings would be capable of doing

"We" as in the lay misunderstanding of some pop science; we still don't get what consciousness is and can't describe it. There are people alive today who, in their youth, didn't believe that black people are fully conscious. Dawkins demonstrated, through his correspondence with his personal friend and hero Epstein, that he doesn't fully believe that women are conscious. What we did or didn't think previously can't be a good indication of anything.

[–] turdas@suppo.fi 2 points 1 hour ago

"We" as in anyone who put any weight on the Turing test used to think that passing it would be some indication of consciousness, but now that LLMs can handily pass it, it's evident that either it isn't evidence of consciousness or LLMs are conscious.

[–] SkaveRat@discuss.tchncs.de 56 points 1 day ago* (last edited 1 day ago) (3 children)

Man, those conversations are eye roll inducing

I like the shift away from "are they conscious" towards "what's a way to define consciousness?"

Because that's the actual important question. And literally nobody can answer it. Any discussion is more philosophy than hard science

The most interesting part is the last paragraph

Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

[–] Godwins_Law@lemmy.ca 10 points 21 hours ago (2 children)

Blindsight by Peter Watts is a great sci-fi novel about consciousness

[–] topherclay@lemmy.world 2 points 7 hours ago

That novel also does a shout-out to Richard Dawkins despite being set in the distant future because it was written in 2006.

[–] SkaveRat@discuss.tchncs.de 2 points 18 hours ago (1 children)

it's on my to-read list.

Right now I'm listening to Children Of Strife, whose series also goes quite deep into consciousness and sapience

[–] khannie@lemmy.world 2 points 15 hours ago* (last edited 15 hours ago)

I have that but haven't started it yet. The second in the series is one of my all time favourites.

"We're going on an adventure"

[–] pennomi@lemmy.world 17 points 1 day ago (2 children)

It’s very difficult to define, isn’t it?

If I were to give it a shot, I’d say that consciousness is akin to proprioception - the ability to know the state of oneself and understand how actions taken will change that state. It has very little to do with intelligence, just the “sense of being”.

Or maybe in other words, object persistence (but for yourself) is all it takes in my opinion. Even the simplest of animals could be considered conscious by this definition.

[–] queerlilhayseed@piefed.blahaj.zone 18 points 1 day ago (2 children)

I think, when we finally do have a generally-accepted definition of consciousness, we will be deeply unsettled by how simple it is. How unprofound. Like a magic trick after you know how it works. And I think it will require us to think hard about what to do with animals and software that have it.

[–] trem@lemmy.blahaj.zone 20 points 1 day ago (1 children)

I feel like that's exactly why we don't have a generally-accepted definition of consciousness. Western ethics assigns special protection to whatever is conscious, so it is convenient to come up with a definition of consciousness that excludes the groups you want to exploit.

Tale as old as time, or at least the conscious idea of time. Whatever consciousness is, we are it. Those humans over there though? Who's to say they aren't sub-humans? Isn't it our job to enlighten them and also take their land and food and things and selves?

[–] turdas@suppo.fi 5 points 21 hours ago (2 children)

Personally I'm in the "consciousness is an illusion and every time you go to bed a different person wakes up in the morning" camp.

[–] Jaycifer@piefed.social 8 points 16 hours ago (1 children)

I would consider this to be two separate, semi-related concepts asserted together: one, that consciousness is an illusion, and the other, that you are a different person each day.

The first point raises many questions: consciousness is an illusion of what? What mechanism causes the illusion? How does it cause it? Why does the illusion exist? And you may note that you could replace "illusion" in those questions with "consciousness" and be left in a similar (though still distinct) place. So simply calling consciousness an illusion seems to me to kick the can down the road without actually addressing the problem.

As for being a different person after a lapse in awareness, I’d like to take it a step further and say that you could be considered a new person with every change in moment. It’s easy enough to look back 10 years and say “yeah, that’s a younger me, but they’re not the same as me; I can just see the path that led to where I am now.” Getting closer, you may feel different today compared to yesterday depending on various factors (sleep, diet, events), but are you a different person because you slept and had a lapse of awareness, or because the state of your mind and thoughts has shifted? When your internal monologue (or equivalent thought) asks “what is this guy talking about?”, is it not thinking “what” in a brand new context given the words it is responding to, forming a new beginning to a thought that puts the mind in a unique state primed to then enter a new state of “is”? And if the mind is in a unique state of novelty, could the person attached to the mind be considered distinct from the person that existed before?

There is a reason the word revelation exists: it indicates when a person has a novel thought that changes their perspective or way of thinking, altering who they are. Would they not be a new person despite being aware of the process of their change? Due to the above points I don’t think new personhood only occurs at sleep, but constantly. The rate of change may quicken or slow, but the change is always there.

[–] turdas@suppo.fi 5 points 16 hours ago

By consciousness being an illusion I mean that we place great value on the uninterrupted continuation of our consciousness, but I think it's likely that it (exactly as you suggest) only really exists in the moment. The illusion would then be the illusion that consciousness is uninterrupted, when in reality you're almost constantly recreating yourself from memory.

This would, incidentally, make us concerningly similar to current AI models.

Of course I have no way of actually knowing any of this. It's just what I'm betting on, because otherwise I think it's really hard to explain any unconsciousness (be it sleep, general anesthesia, suspended animation or the Star Trek transporter) as anything short of death. My belief "solves" this problem by rejecting the whole premise of uninterrupted consciousness.

[–] naught101@lemmy.world 1 points 16 hours ago

That won't get the IRS off your back, unfortunately

[–] FinjaminPoach@lemmy.world 4 points 18 hours ago* (last edited 18 hours ago) (1 children)

Thank you for the comment, I feel silly for not linking the article when people will probably want to read it.

My thoughts:

His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was

Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in its evolutionary efficiency.

My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism.

I know people working in AI insist otherwise, but I see talking with LLMs not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.

[–] turdas@suppo.fi 5 points 17 hours ago (1 children)

Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in its evolutionary efficiency.

It's not a question of the value of consciousness, it's a question of its necessity. If an unconscious "zombie" can be, to an external observer, indistinguishable from a conscious being, then that means we've been overestimating the importance of consciousness for intelligence. Like Dawkins says in the article, there could be unconscious aliens out there who are nonetheless as intelligent as (or more intelligent than) humans. This isn't a new concept -- it's been explored many times in scifi -- but AI is now bringing the question from the realm of philosophy to the real world.

I know people working in AI insist otherwise but I see talking with LLM not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.

This is less true than it ever was with reasoning models. Some of the latest reasoning models don't necessarily even reason in English anymore but rather an eclectic mix of languages. The next step after that is probably going to be running the reasoning in latent space (see e.g. Coconut), which basically means the model skips the language generation layer altogether and feeds lower-level state back into itself. Basically it is getting closer and closer to what most humans consider "thinking".
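The token-space versus latent-space distinction can be illustrated with a toy recurrence. This is purely a sketch, not Coconut's actual architecture: the dimensions, weight matrices, and function names are all invented for the example. The contrast is that ordinary chain-of-thought collapses the hidden state to a discrete token at every step, while latent reasoning feeds the hidden state straight back in and only verbalizes the final answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": one recurrent layer plus a projection to a token vocabulary.
# All dimensions and weights here are invented for illustration.
HIDDEN, VOCAB = 16, 50
W_h = rng.normal(scale=0.3, size=(HIDDEN, HIDDEN))   # hidden -> hidden
W_out = rng.normal(scale=0.3, size=(HIDDEN, VOCAB))  # hidden -> token logits
W_emb = rng.normal(scale=0.3, size=(VOCAB, HIDDEN))  # token -> hidden

def token_space_step(h):
    """Chain-of-thought style: collapse state to one discrete token, re-embed it."""
    token = int(np.argmax(h @ W_out))
    return np.tanh(W_emb[token] + h @ W_h), token

def latent_space_step(h):
    """Coconut-style step: skip the language layer, feed hidden state straight back."""
    return np.tanh(h @ W_h)

h = rng.normal(size=HIDDEN)
for _ in range(5):
    h = latent_space_step(h)        # "thinking" happens without emitting tokens
answer = int(np.argmax(h @ W_out))  # only the final answer gets verbalized
```

The point of the sketch is just the information flow: the latent step never forces the state through a single discrete token, so nothing is thrown away between reasoning steps.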

But even besides reasoning models, I believe LLMs aren't as different from human language production as many people think. The human speech centre, in a way, also just selects the right combination of data to continue a conversation. It frequently even hallucinates (we call this "speaking before thinking") and makes stupid mistakes (we provoke these with trick questions like those on the Cognitive Reflection Test). There are also some fascinating experiments on people who have had the connection between their brain hemispheres severed that really suggest our speech centre is just making things up as it goes along.

[–] 5too@lemmy.world 2 points 5 hours ago

This is one of the things that fascinates me about LLMs - they seem like a part of how our brains work, without the internal self-referential parts

[–] Einskjaldi@lemmy.world 2 points 18 hours ago

There's enough that it would be difficult to tell an actual sentient AI from a chatbot just by words.

[–] FaceDeer@fedia.io 3 points 1 day ago (2 children)

As LLMs have developed and have been able to cram more and more "thoughtlike" behaviour into smaller RAM and less computation, I've steadily become less impressed with human brains. It seems like the bits we think most highly of are probably just minor add-ons to stuff that's otherwise dedicated to running our big complicated bodies in a big complicated physics environment. If all you want to have is the part that philosophizes and solves abstract problems and whatnot then you may not actually need all that much horsepower.

I'm thinking consciousness might also turn out to be something pretty simple. Assuming consciousness is even a particular "thing" in the first place and not just a side effect of being able to predict how other people will behave.

[–] XLE@piefed.social 1 points 8 hours ago* (last edited 8 hours ago)

I've steadily become less impressed with human brains.

You need to lay off the AI if it's making you this weirdly misanthropic.

This is how tech bros justify causing harm: they genuinely don't care, because they think of the un-"enlightened" as less worthy of existing

[–] yeahiknow3@lemmy.dbzer0.com 10 points 22 hours ago* (last edited 22 hours ago) (4 children)

Brains aren’t impressive because of their compute (which is both immense and absurdly efficient) or their ability to predict the future (technically the main function of evolved minds). They’re impressive because they’re conscious. The fact that organic brains can also engage in hierarchical abstraction, which no digital computer (or Turing machine) can do by definition, is icing on the cake.

(The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital computing is responsible for consciousness.)

[–] psycotica0@lemmy.ca 3 points 18 hours ago (1 children)

You're going to have to do a lot more to justify the leap from Gödel's incompleteness and the halting problem to "digital is limited, analog is not", because neither of those things has anything to do with digital processes at all, and in fact both came about before we'd invented digital computers.
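For what it's worth, the diagonal argument behind the halting problem is short enough to sketch, and nothing in it depends on the machine being digital: it only needs a machine that can be run on its own description. A sketch follows; the function names are illustrative, and `halts` is exactly the thing the theorem proves cannot be written.

```python
def halts(program, argument):
    """A hypothetical total decider: True iff program(argument) halts.
    Turing's diagonal argument shows no correct implementation can exist."""
    raise NotImplementedError("undecidable in general")

def diagonal(program):
    """Do the opposite of whatever the decider predicts about program(program)."""
    if halts(program, program):
        while True:   # decider said "halts" -> loop forever
            pass
    return "halted"   # decider said "loops" -> halt immediately

# The contradiction: if halts(diagonal, diagonal) returned True, then
# diagonal(diagonal) would loop forever; if it returned False, it would halt.
# Either way the decider is wrong, so no such decider can exist.
```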

To me this comment sounds like when popsci gets ahold of a few sciency words and suddenly decides everything is crystal vibrations universal harmonics string theory quantum tunneling aligning resonance with those around you.

[–] yeahiknow3@lemmy.dbzer0.com 2 points 8 hours ago* (last edited 7 hours ago)

The situation is the following.

  1. Brains are analog computers, which are digitally irreducible.
  2. There are stringent limitations on Turing machines (digital computers),
  3. We can’t extract semantics from syntax, and so…

We’ll probably need analog computation, currently in its infancy, to get artificial (inorganic) consciousness.

I study metaethics and philosophy of mathematics. These problems are real, and I am being honest with you.

[–] SkaveRat@discuss.tchncs.de 2 points 18 hours ago (1 children)

(The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital computing is responsible for consciousness.)

I hear that argument from time to time, and I've never found a source for it. I want to understand the original claim, because it doesn't make any sense when people bring it up: neither theorem has anything to do with the areas it gets applied to. I understand why people think it does, but it just doesn't

[–] yeahiknow3@lemmy.dbzer0.com 2 points 8 hours ago

The simplest way to understand this problem is as follows.

  1. Analog computation is not digitally reducible. (Brains are analog computers.)

  2. Turing’s infamous Halting Problem.

I can write more about this and point you to more technical discussions if you want.

[–] turdas@suppo.fi 2 points 20 hours ago (1 children)

I don't see why there would be any fundamental difference between analog and digital computing. Digital computers can emulate analog computing, and I doubt consciousness arises from having theoretically infinite decimal precision, because in practice analog systems cannot use infinite precision either. Analogs (heh!) of the halting problem and the theorems you mention also exist for analog computing.
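As a concrete illustration of the "digital emulates analog to finite precision" point, here is a forward-Euler discretization of a leaky integrator, a standard continuous-time neuron model. The equation is standard, but the parameter values are made up for the example; shrinking the step size buys arbitrary (always finite) precision, which is all a noisy physical analog system has anyway.

```python
def simulate(i_input=1.0, tau=0.02, dt=1e-4, steps=2000):
    """Digitally integrate the analog dynamics dv/dt = (-v + i_input) / tau
    with forward Euler. Smaller dt -> closer to the continuous system."""
    v = 0.0
    for _ in range(steps):
        v += dt * (-v + i_input) / tau  # one discrete step of the analog flow
    return v

# steps * dt = 0.2 s is ten time constants, so the digital emulation has
# converged to the analog fixed point v = i_input to well within the
# precision any physical analog component could resolve.
v_final = simulate()
```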

Quantum effects in the brain are a slightly more plausible explanation for consciousness, but currently they teeter on magical thinking because we don't really know anything about what they would actually do in the brain. It becomes an "a wizard did it" explanation.

So in the end, we just don't know.

[–] yeahiknow3@lemmy.dbzer0.com 1 points 8 hours ago (1 children)

I don't see why there would be any fundamental difference between analog and digital computing.

Then why not take a course on Theoretical Computer Science? Or do you not care about the differences?

[–] turdas@suppo.fi 1 points 7 hours ago (2 children)

I have a master's degree in computer science.

Obviously I meant "I don't see why there would be any fundamental difference between analog and digital computing [when it comes to consciousness]."

[–] yeahiknow3@lemmy.dbzer0.com 1 points 7 hours ago* (last edited 4 hours ago)

The consciousness thing… I would be delighted if we could get a digital system to be conscious. Three reasons it’s probably impossible.

  1. We would need to figure out how to collapse semantics into syntax, since digital systems are purely syntactic and consciousness deals with semantics.
  2. The only examples of conscious systems we have are analog and heavily substrate-dependent — so, making neurons out of any artificial material breaks their functionality.
  3. As Gödel said, “the mind is incapable of mechanizing all of its intuitions.” The first incompleteness theorem means that no computational procedure could exist to determine whether propositions are valid, provable, or even equivalent, and that no matter how you formulate the number-theoretic axioms, a human mathematician would always have insights (for instance, about whether a Diophantine equation has a solution) that are both clearly “true” and obviously unprovable.

It looks like digital systems are too constrained.

Add the Chinese room thought experiment into the mix and it really becomes impossible to see how a Turing machine (by itself, without analog components) could ever be conscious.

[–] FaceDeer@fedia.io 1 points 22 hours ago (1 children)

I'm still awaiting a widely accepted method of actually measuring "consciousness." It's a conveniently nebulous property.

And simply defining it as something computers can't do is even more convenient.

[–] yeahiknow3@lemmy.dbzer0.com 3 points 22 hours ago* (last edited 22 hours ago) (1 children)

That doesn’t change the fact that I am conscious.

Also, I never said computers can’t be conscious. I said that digital computers (Turing machines) probably can’t. Quantum and analog computers have no such theoretical constraints and they’re far, far more prevalent given that they’re found in every living creature.

[–] FaceDeer@fedia.io 1 points 22 hours ago (1 children)

Sure, you say you're conscious. I can get an LLM to say it's conscious too. This is why we need some method for measuring it. Otherwise how can I tell which of you is telling the truth?

[–] yeahiknow3@lemmy.dbzer0.com 4 points 22 hours ago* (last edited 22 hours ago) (1 children)

This is called the problem of other minds. Of course I can’t be certain about the consciousness of others. I can only be certain about my own.

We do have a way of measuring the correlates of consciousness. But we have no clue how to detect the presence of subjective experience using quantitative methods.

Philosophy departments (which is where any discovery on this front will originate) are heavily defunded. If you’re waiting for physicists or biologists to figure this out you’ll be waiting even longer.

[–] FaceDeer@fedia.io 0 points 22 hours ago (1 children)

Exactly, which is why it's IMO a bit presumptuous to say with confidence that humans are conscious while LLMs are categorically not conscious. We don't even really know what that means.

I don't personally think LLMs are conscious, at least not yet or not to the same degree that humans are. But that's purely based on vibe, it's not something I can know. We need to figure out what consciousness really is and how to measure it before we can say we know this with any certainty.

[–] yeahiknow3@lemmy.dbzer0.com 2 points 22 hours ago* (last edited 8 hours ago) (1 children)

It is not presumptuous at all. Inference to the best explanation is how you know (almost) anything.

  1. This table isn’t conscious.

This is my justified belief. No inferential claim is guaranteed and all objective claims are inferential (which is why scientific claims aren’t absolute).

That said, I have strong reasons to think that tables aren’t conscious. They might be, but I’m epistemically compelled to believe otherwise.

  2. ChatGPT isn’t conscious.

Ditto. It would be irrational for me to believe otherwise given the strong evidence.

That you “don’t know for sure” is an implied disclaimer for every scientific claim.

If the evidence is ambiguous, we say so. Regarding ChatGPT, the evidence is unambiguous.

  3. I am conscious.

This is a non-inferential claim that I know through direct contact with reality. It is a priori.

[–] Micromot@piefed.social 1 points 21 hours ago* (last edited 21 hours ago) (1 children)

This is pretty much what Descartes meant with "cogito ergo sum". The only things you can be sure are 100% real are your thoughts

[–] psycotica0@lemmy.ca 1 points 18 hours ago (1 children)

Right, your own thoughts. So I can be sure I'm conscious, but you commenting "I know I'm conscious" on here doesn't tell me anything about your consciousness. The robot can do that, and does.

[–] Micromot@piefed.social 1 points 17 hours ago

This is just the stuff you do in philosophy class. There is no right answer really. You can never be sure of something being conscious or even be sure that it exists in reality. We can just react to what we perceive.