My thinking is that LLMs are human-like enough that mistreating them can be a strong indicator of someone’s character. If you’re comfortable being cruel to something that closely resembles a person, it suggests you might treat actual people poorly too. That’s why I think the premise of the TV series Westworld wouldn’t really work in real life - you’d have to be a literal psychopath to mistreat those human-like robots, even if you know (or are pretty sure) they’re not conscious.
I don’t think people need to go out of their way to be overly polite to an LLM - we can be pretty confident it doesn’t actually care - but if I saw someone’s chat history and it was nothing but them being mean or abusive, that would be a massive red flag for me personally.
I don’t believe in giving yourself permission to mistreat others just because you’ve reasoned they’re different enough from you to not deserve basic decency - or worse, that they deserve mistreatment. Whatever excuse you use to “other” someone is still just that - an excuse. Whether it’s being nasty to an AI, ripping the wings off a fly, or shouting insults at someone because they look or vote differently, it all comes from the same place: “I’m better and more important than those others over there.” Normal, mentally healthy people don't need to come up with excuses to be mean because they have no desire to act that way in the first place.
I'm not sure that checks out. I mean, fair, someone being habitually cruel toward AI probably doesn't reflect well on their disposition in general, though I'd hesitate to make a hasty judgement on that basis alone. But if we take the AI's presentation as a person to be fictional, does that extend to other fictional contexts? Would you consider an evil playthrough in a video game to indicate an issue? Playing a hostile character in a roleplay setting? Writing horror fiction?
It seems to me that there are many contexts where exhibiting or creating simulated behavior in a fictional environment isn't really equivalent to acting that way toward genuine individuals in non-imaginary circumstances. AI isn't quite the same as a fictional setting, but it's potentially closer to that than it is to dealing with a real person.
By the same token, if not being polite to an AI is problematic, is it equally problematic to repeatedly say things like "human" and "operator" to an automated phone system until you get a response? Both mimic human speech, yet neither genuinely understands what's being said by either party.
Where does the line get drawn? Is it wrong to curse at fully inanimate objects that don't even pretend to be people? Is verbally condemning a malfunctioning phone, refrigerator, or toaster equivalent to berating a hallucinating AI?
This reminds me of the case of a parent who let his 6-year-old play GTA. It's a notoriously "crime-based" game, rated 18+... yet the kid kept progressing by just doing ambulance, firefighter, and police missions. I'd call that quite an indicator of their disposition 😉
I'd say that depends on whether they're aware that the AI can be reset at the push of a button. I've already encountered people who don't realize they can "start a new chat", and instead keep talking to the chatbot like it was a real person, then get angry when it doesn't remember something they told it several days earlier. Modern chatbot LLMs are trained to emulate human conversation styles, so they can keep the illusion going long enough for people to forget themselves.
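To make the "reset at the push of a button" point concrete, here's a minimal sketch of how these chatbots typically work, assuming an OpenAI-style chat completions API (the client setup and model name are placeholders, not anything from this thread): the only "memory" the bot has is the message list you resend with every request. Clear that list and the persona it seemed to build up is gone.

```python
# Minimal sketch: a chatbot's "memory" is just the message list resent each turn.
# Assumes the OpenAI Python SDK (pip install openai); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system", "content": "You are a friendly assistant."}]

def chat(user_text: str) -> str:
    """Append the user's message, resend the entire history, store the reply."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=history,      # the model sees only what's in this list
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("My name is Alice."))
print(chat("What's my name?"))  # works: "Alice" is still in `history`

history.clear()                 # "start a new chat" - the push of a button
print(chat("What's my name?"))  # the bot has no idea; nothing persisted
```

So when someone gets angry that the bot forgot what they said days ago, the real explanation is usually just that those messages aren't in the list being sent anymore.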