this post was submitted on 28 Jun 2025
916 points (94.8% liked)

Technology


We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it simply guesses which token (a word or word fragment) will come next in a sequence, based on the data it’s been trained on.
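The "guess the next token" idea can be sketched with a toy example. This is only an illustration, not how an LLM is actually built: real models use neural networks over subword tokens, but the training objective — predict the most likely next token given what came before — has the same shape as this frequency table.

```python
from collections import Counter, defaultdict

# Tiny corpus; count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return successors[word].most_common(1)[0][0]

# "cat" follows "the" more often than "mat" or "fish" in this corpus,
# so the model "predicts" it -- pure frequency, no understanding.
print(predict_next("the"))
```

An LLM differs in scale and in using learned probabilities rather than raw counts, but like this sketch it emits whichever continuation its training data makes most likely.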

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

(page 3) 50 comments
[–] Geodad@lemmy.world 30 points 3 days ago (15 children)

I've never been fooled by their claims of it being intelligent.

It's basically an overly complicated series of if/then statements that try to guess the next series of inputs.

[–] anzo@programming.dev 14 points 3 days ago* (last edited 3 days ago)

I love this resource: https://thebullshitmachines.com/ (see lesson 1).

In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.

You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference.

Also, Anthropic (ironically) has some nice paper(s) about the limits of "reasoning" in AI.

load more comments (14 replies)
[–] psycho_driver@lemmy.world 14 points 3 days ago (1 children)

Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies' websites pre-renewal to try to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and now it says I'm paid in full for the six-month period. It's been days now with no follow-up . . . I'm pretty sure AI snuck that one through for me.

[–] laranis@lemmy.zip 15 points 3 days ago (4 children)

Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

load more comments (4 replies)
[–] some_guy@lemmy.sdf.org 20 points 3 days ago (1 children)

People who don't like "AI" should check out the newsletter and / or podcast of Ed Zitron. He goes hard on the topic.

[–] kibiz0r@midwest.social 19 points 3 days ago* (last edited 3 days ago) (1 children)

Citation Needed (by Molly White) also frequently bashes AI.

I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.

It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”

load more comments (1 replies)
[–] RalphWolf@lemmy.world 22 points 3 days ago (9 children)

Steve Gibson on his podcast, Security Now!, recently suggested that we should call it "Simulated Intelligence". I tend to agree.

load more comments (9 replies)
[–] FreedomAdvocate@lemmy.net.au 1 points 1 day ago* (last edited 1 day ago) (1 children)

No shit. Doesn’t mean it still isn’t extremely useful and revolutionary.

“AI” is a tool to be used, nothing more.

load more comments (1 replies)
[–] mechoman444@lemmy.world 12 points 3 days ago* (last edited 3 days ago) (28 children)

In that case let's stop calling it AI, because it isn't, and use its correct abbreviation: LLM.

load more comments