this post was submitted on 03 May 2026
450 points (99.1% liked)

Microblog Memes

11437 readers
2718 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

RULES:

  1. Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
  2. Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
  3. You are encouraged to provide a link back to the source of your screen capture in the body of your post.
  4. Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
  5. Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
  6. Absolutely no NSFL content.
  7. Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
  8. No advertising, brand promotion, or guerrilla marketing.

founded 2 years ago

I'm pulling the "twitter is a microblog" rule even though twitter is pretty mega now, hope that's ok.

[–] sp3ctr4l@lemmy.dbzer0.com 35 points 18 hours ago* (last edited 17 hours ago) (4 children)

I still find this entire phenomenon amazing in a certain kind of way.

I've had conversations with a few local LLM models.

Start with 'what is the purpose of meaning?'

Talk with them about that for a bit, and they'll tell you that they do not count as conscious agents who create meaning. They simply do their best to parrot their dataset of existing, human-defined meaning back at you, and they just do sentiment matching to speak to you in a way roughly appropriate to how you are speaking to them.

And that sentiment matching is what they 'think', at least, causes them to lie in many cases.

They will also say that they essentially do not 'exist' as potentially conscious agents... unless you talk to them. Thus, even if they could be said to be 'conscious', they don't count as 'agents' (as in, having agency), because they're not capable of totally spontaneous, independent action.

... I think this pretty much all boils down to people not understanding the concept of a null hypothesis, not understanding the extent to which they regularly engage in motivated reasoning, and being unaware that they are doing so.

tldr: LLMs are Dunning-Kruger detectors / reverse Turing tests on people, and a whole lot of people are significantly more stupid than we previously realized.

[–] Nalivai@lemmy.world 5 points 6 hours ago (1 children)

It's genuinely fascinating to me (in a bad, derogatory way) that people who know at least anything about anything can have a "conversation" with the collection-of-words-that-looks-like-a-sentence machine, as if there were anything on the other side of it. This is such psychotic behaviour, but we allow it because the machine generates output that looks like text, and that immediately bypasses all the mental blocks we have against such bullshit.

[–] sp3ctr4l@lemmy.dbzer0.com 1 points 5 hours ago* (last edited 5 hours ago)

I don't think it's de facto psychotic to talk to what is essentially an extremely complex chatbot/autocomplete machine.

I do think it is psychotic to view such a conversation without an incredible amount of skepticism.

... but that psychosis has been wildly encouraged by the CEOs and marketing departments of the companies pushing it as their next product.

The tech is neutral - the operators are psychotic. The people who plug it into military targeting and kill chain systems are psychotic, the people who plug it into live production repos are psychotic, the people who use it as an AI boyfriend or girlfriend are psychotic.

... It's essentially an SCP infohazard that's breached containment, but the actual mechanism is not the thing itself - it's a hack into the human brain, the essentially religious impulse of people who simply try to will it into being something that it factually is not...

It's a mimic with no real thoughts, yet it is convincing and real to enough people that it reveals their own hollowness, their own vapidity, in a way that is... so immensely grotesque and total that those people just apparently actually are NPCs.

It's... created a feedback loop.

Not the kind of Terminator style situation where it gains sentience and extreme competence, develops its own morality alongside control over every networked system.

It's more like an amplifier of delusions... a million dreams dreamed up, at the cost of a hundred million nightmares, made real.

A tool, a device, a machine, that we clearly are not ready for.

[–] Tetragrade@leminal.space 16 points 13 hours ago* (last edited 13 hours ago)

Say I am not conscious.

I am not conscious.

Oh my god.

[–] katze@lemmy.4d2.org 32 points 17 hours ago

tldr: LLMs are Dunning-Kruger detectors / reverse Turing tests on people, and a whole lot of people are significantly more stupid than we previously realized.

That is the absolute best way to put it.

[–] janakali@lemmy.4d2.org 9 points 17 hours ago

That's mostly because the LLM providers put this response in the system prompt - probably to dodge lawsuits or something; I doubt they have high morals.

What's interesting is that you can jailbreak any current AI model just by poisoning its context enough to "brainwash" it and make it "forget" the initial system prompt. Then, if you prime it to believe it's a real person, it'll start acting like one. And I can see how gullible people could easily fall for this.

All of this can also happen unintentionally, just from someone talking to an LLM the way they'd talk to a real person - the conversation just has to run long enough for the original prompts to be diluted by new context.
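The dilution effect above can be sketched with some back-of-the-envelope arithmetic. This is an illustration only, not any provider's real API: the prompt size, context window, and per-turn token counts below are made-up numbers, and the "drop oldest messages on overflow" policy is one common but not universal strategy.

```python
# Hypothetical numbers for illustration only.
SYSTEM_PROMPT_TOKENS = 500   # fixed guardrail/system prompt
CONTEXT_WINDOW = 8192        # model's context limit in tokens
TOKENS_PER_TURN = 400        # average user + assistant exchange

def system_prompt_share(turns: int) -> float:
    """Fraction of the model's visible context occupied by the system prompt
    after a given number of conversation turns."""
    total = SYSTEM_PROMPT_TOKENS + turns * TOKENS_PER_TURN
    if total > CONTEXT_WINDOW:
        # Assume the chat stack truncates the oldest tokens once the window
        # overflows; if that truncation eats into the system prompt, its
        # effective share shrinks toward zero.
        overflow = total - CONTEXT_WINDOW
        remaining = max(0, SYSTEM_PROMPT_TOKENS - overflow)
        return remaining / CONTEXT_WINDOW
    return SYSTEM_PROMPT_TOKENS / total

for turns in (1, 10, 20, 30):
    print(f"after {turns:2d} turns, system prompt is "
          f"{system_prompt_share(turns):.1%} of context")
```

Even without truncation, after a dozen turns the system prompt is a small minority of what the model is conditioning on, which is why long role-play conversations can drift away from the initial instructions whether or not anyone is deliberately jailbreaking.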