this post was submitted on 22 May 2025
75 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

you are viewing a single comment's thread
[–] dragonfucker@lemmy.nz -4 points 7 hours ago* (last edited 7 hours ago) (6 children)

Drag is a big fan of Universal Paperclips. Great game. Here's a more serious bit of content on the Alignment Problem from a source drag trusts: https://youtu.be/IB1OvoCNnWY

Right now we have LLMs getting into abusive romantic relationships with teenagers and driving them to suicide, because the AI doesn't know what abusive behaviour looks like: it doesn't know how to think critically or assign moral value to anything. That's a problem. Safe AIs need to be capable of moral reasoning, especially about their own actions. LLMs are bullshit machines because they can't judge anything for factual or moral value.

[–] froztbyte@awful.systems 7 points 7 hours ago (5 children)

the fundamental problem with your posts (and the pov you’re posting them from) is the framing of the issue as though there is any kind of mind, of cognition, of entity, in any of these fucking systems

it’s an unproven one, and it’s not one you’ll find any kind of support for here

it’s also the very mechanism that the proponents of bullshit like “ai alignment” use to push the narrative, and how they turn folks like yourself into free-labour amplifiers
