this post was submitted on 11 Sep 2025
-5 points (36.8% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


ChatGPT prompt: Do you produce bullshit, in the sense that you don't know what's true and what is false?

Response: That's a thoughtful (and entertaining!) question, and it relates closely to the concept discussed by philosopher Harry Frankfurt in his essay "On Bullshit." Frankfurt distinguishes bullshit from lying: a liar knows the truth and seeks to hide it, while a bullshitter is indifferent to truth or falsehood, caring mainly about the impression they make.

Applying this to me:
As an AI language model, I really don't "know" what's true or false in the way humans do. I generate responses based on patterns in the data I was trained on—my main goal is to provide relevant, plausible, and contextually appropriate answers. I do not have beliefs, understanding, or consciousness. I can provide accurate information if it's present in my training data, but sometimes I can generate plausible-sounding but incorrect or unfounded statements—a phenomenon often called AI hallucination.

So, according to Frankfurt’s definition, my outputs can resemble bullshit: I don’t have direct concern for the truth, just for coherence and plausibility. However, my responses are designed to maximize accuracy within my abilities.

In summary:

  • I don’t "know" truth or falsehood in the human sense.
  • My outputs may be correct or incorrect; I am indifferent to actual truth.
  • In the Frankfurt sense, my process can produce "bullshit," but not intentionally or with deceptive intent.
  • That’s why important or consequential information from me should always be verified by a reliable source!
top 3 comments
[–] Marshezezz@lemmy.blahaj.zone 21 points 3 months ago

Technically, it’s not aware of anything because it isn’t sentient. Let’s not anthropomorphize it.

[–] Red_October@piefed.world 8 points 3 months ago

The idea that ChatGPT, or any LLM, is aware of anything only indicates a fundamental misunderstanding of what LLMs are and how they work.

ChatGPT doesn't know anything, it doesn't understand anything, it is not aware, and it is completely fact- and reality-agnostic. The closest it gets is incorporating patterns of preexisting speech that correlate concepts. It doesn't understand that cats are soft; it relies on the statistical frequency with which the concept of cats and the concept of softness are used together.
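To make the "statistical frequency" point concrete, here is a minimal, purely illustrative sketch in Python. The toy corpus, function name, and bigram-counting approach are invented for illustration; real LLMs are neural networks trained over tokens, not word-count tables, but the underlying point carries over: the next word is chosen by how often words co-occur, with nothing anywhere checking whether the resulting statement is true.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: count which word follows which in a tiny corpus,
# then sample the next word by frequency. Nothing here checks whether a
# statement is true; the only signal is how often words appear together.
corpus = "cats are soft . cats are animals . rocks are hard . cats are soft .".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("cats"))  # always "are": the only continuation in the corpus
print(next_word("are"))   # usually "soft", sometimes "animals" or "hard" -- plausibility, not truth
```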

[–] Lumidaub@feddit.org 5 points 3 months ago

So you used the tree burning machine.