this post was submitted on 07 Apr 2025
30 points (100.0% liked)
Technology
I'm not sure that checks out. I mean, fair, I do think that someone being habitually cruel toward AI might not be the greatest indicator of their disposition in general, though I'd hesitate to make a hasty judgement on that. But if we take AI's presentation as a person as fictional, does that extend to other fictional contexts? Would you consider an evil play-through in a video game to indicate an issue? Playing a hostile character in a roleplay setting? Writing horror fiction?
It seems to me that there are many contexts where exhibiting or creating simulated behavior in a fictional environment isn't really equivalent to doing the same to real people in real circumstances. AI isn't quite the same as a fictional setting, but it's potentially closer to that than it is to dealing with an actual person.
By the same token, if not being polite to an AI is problematic, is it equally problematic to repeatedly say things like "human" and "operator" to an automated phone system until you get a response? Both mimic human speech, while neither ostensibly has a genuine understanding of what's being said by either party.
Where does the line get drawn? Is it wrong to curse at fully inanimate objects that don't even pretend to be people? Is verbally condemning a malfunctioning phone, refrigerator, or toaster equivalent to berating a hallucinating AI?
This reminds me of the case of a parent who let his 6-year-old play GTA. It's a notoriously "crime-based" game, rated 18+... yet the kid kept progressing by just doing the ambulance, firefighter, and police missions. I'd call that quite an indicator of their disposition 😉
I'd say that depends on whether they're aware that the AI can be reset at the push of a button. I've already encountered people who don't realize they can "start a new chat", and instead keep talking to the chatbot as if it were a real person, then get angry when it doesn't remember something they told it several days earlier. Modern chatbot LLMs are trained to emulate human conversation styles, so they can keep the illusion going long enough for people to forget themselves.