this post was submitted on 18 Oct 2025
112 points (97.5% liked)

Futurology

[–] justOnePersistentKbinPlease@fedia.io 33 points 1 month ago (1 children)

LLMs are a dead end to AGI. They do not reason or understand in any way. They only mimic it.

It's the same underlying technology as the first chatbots 20 years ago; today's LLMs just have models approaching a trillion parameters instead of a few thousand.

[–] Perspectivist@feddit.uk 5 points 1 month ago (1 children)

I haven't said a word about LLMs.

[–] justOnePersistentKbinPlease@fedia.io 5 points 1 month ago (1 children)

They are the closest things to AI that we have. The so-called LRMs (large reasoning models) fake their reasoning.

They do not think or reason. We are at the very best decades away from anything resembling an AI.

The best LLMs can manage is a Mass Effect (1) style VI (a "virtual intelligence" that mimics conversation without understanding), and even that is still more than a decade away.

[–] Perspectivist@feddit.uk 2 points 1 month ago (1 children)

The chess opponent on Atari is AI; we've had AI systems for decades.
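For context, the "AI" in those early chess programs is plain game-tree search, with no learning involved. A minimal sketch of the idea, using minimax on a toy one-pile Nim game rather than chess (all names here are illustrative, not from any actual Atari program):

```python
# Classic game AI is exhaustive tree search, not learning.
# Toy game: one pile of stones; players alternately remove 1-3 stones,
# and whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, else -1."""
    if stones == 0:
        # The previous player took the last stone and won,
        # so the side now to move has lost.
        return -1 if maximizing else 1
    results = [minimax(stones - take, not maximizing)
               for take in (1, 2, 3) if take <= stones]
    return max(results) if maximizing else min(results)

def best_move(stones):
    """Pick a move that leads to a forced win, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and minimax(stones - take, maximizing=False) == 1:
            return take
    return 1  # no winning move exists; take one stone and hope

print(best_move(10))  # leaves 8 (a multiple of 4), a losing position for the opponent
```

The program plays perfectly by brute force, yet nothing in it "understands" the game; the same holds, at much larger scale, for classic chess engines.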

An asteroid impact being decades away doesn’t make it any less concerning. My worries about AGI aren’t about the timescale, but about its inevitability.

[–] Sconrad122@lemmy.world 1 point 1 month ago

Decades is plenty of time for society to experience a collapse or major setback that prevents AGI from being discovered within the lifetime of any currently living human. Whether that comes from war, famine, or natural phenomena induced by man-made climate change, we have plenty of opportunities as a species to take the off-ramp and never "discover" AGI. This comment is brought to you by optimistic existentialism.