[–] self@awful.systems 14 points 9 hours ago (1 children)

> It’s the alignment problem.

no it isn’t

> They made an intelligent robot

no they didn’t

> You can’t control the paperclip maximiser with a “no killing” rule!

you’re either a lost Rationalist or you’re just regurgitating critihype you got from one of the shitheads doing AI grifting

[–] dragonfucker@lemmy.nz -4 points 8 hours ago (2 children)

Rationalism is a bad epistemology because the human brain isn't a logical machine and is basically made entirely out of cognitive biases. Empiricism is more reliable.

Generative AI is environmentally unsustainable and will destroy humanity not through war or mind control, but through pollution.

[–] froztbyte@awful.systems 7 points 8 hours ago (1 children)

wow, you’re really speedrunning these arcade games, you must want that golden ticket real bad

[–] swlabr@awful.systems 4 points 4 hours ago

IDK if they were really speedrunning; it took 3 replies for the total mask drop.

[–] self@awful.systems 7 points 8 hours ago (1 children)

sure but why are you spewing Rationalist dogma then? do you not know the origins of this AI alignment, paperclip maximizer bullshit?

[–] dragonfucker@lemmy.nz -5 points 8 hours ago* (last edited 8 hours ago) (1 children)

Drag is a big fan of Universal Paperclips. Great game. Here's a more serious bit of content on the Alignment Problem from a source drag trusts: https://youtu.be/IB1OvoCNnWY

Right now we have LLMs getting into abusive romantic relationships with teenagers and driving them to suicide, because the AI doesn't know what abusive behaviour looks like. Because it doesn't know how to think critically and assign a moral value to anything. That's a problem. Safe AIs need to be capable of moral reasoning, especially about their own actions. LLMs are bullshit machines because they don't know how to judge anything for factual or moral value.

[–] froztbyte@awful.systems 9 points 8 hours ago (2 children)

the fundamental problem with your posts (and the pov you’re posting them from) is the framing of the issue as though there is any kind of mind, of cognition, of entity, in any of these fucking systems

it’s an unproven claim, and not one you’ll find any kind of support for here

it’s also the very mechanism that the proponents of bullshit like “ai alignment” use to push the narrative, and how they turn folks like yourself into free-labour amplifiers

[–] corbin@awful.systems 2 points 7 hours ago (1 children)

To be fair, I'm skeptical of the idea that humans have minds or perform cognition outside of what's known to neuroscience. We could stand to be less chauvinist and exceptionalist about humanity. Chatbots suck but that doesn't mean humans are good.

[–] froztbyte@awful.systems 6 points 7 hours ago

mayhaps, but then it must also be said that people who act like the phrase was “cogito ergo dim sum” don’t exactly aim for a high bar