this post was submitted on 07 Apr 2025
30 points (100.0% liked)

Technology


I know the title will trigger people, but it's a short, so please briefly hear her out. I've since given this a try and it's incredibly cool. It's a very different experience and provides much better information, AFAICT.

all 33 comments
[–] [email protected] 24 points 5 days ago

tl;dw is that you should say "please" as basically prompt engineering, I guess?

the theory seems to be that the chatbot will try to match your tone, so if you ask it questions in a tone like it's an all-knowing benevolent information god, it'll respond in kind, and if you treat it politely its responses will tend more towards politeness?

I don't see how this solves any of the fundamental problems with asking a fancy random number generator for authoritative information, but sure, if you want to be polite to the GPUs, have at it.

like, several lawyers have been sanctioned for submitting LLM-generated legal briefs with hallucinated case citations. if you tack on "pretty please, don't make up any fake case citations or I could get disbarred" to a prompt...is that going to solve the problem?
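For what it's worth, the tone-matching theory is easy to see in even the dumbest text generator. A bigram Markov chain (a genuinely un-fancy random number generator) only ever emits words it has observed following the current word, so a seed drawn from the polite register of its training text tends to continue politely, though it can drift wherever the registers touch. A toy sketch, with an entirely made-up two-register corpus:

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a bigram table: word -> list of observed successors."""
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, seed, length=8, rng=None):
    """Continue from `seed` by sampling an observed successor at each step."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        successors = table.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Two registers crammed into one tiny training corpus.
corpus = ("please could you kindly help me please thank you kindly "
          "give me the answer now give it to me now")
table = train(corpus)

print(generate(table, "please"))  # seeded from the polite register
print(generate(table, "give"))    # seeded from the blunt register
```

Every word the chain emits was seen following the previous one, which is the whole mechanism; an LLM does something vastly more sophisticated, but the "your opening sets the distribution" effect is the same flavor.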

[–] [email protected] 17 points 5 days ago (3 children)

My thinking is that LLMs are human-like enough that mistreating them can be a strong indicator of someone’s character. If you’re comfortable being cruel to something that closely resembles a person, it suggests you might treat actual people poorly too. That’s why I think the premise of the TV series Westworld wouldn’t really work in real life - you’d have to be a literal psychopath to mistreat those human-like robots, even if you know (or are pretty sure) they’re not conscious.

I don’t think people need to go out of their way to be overly polite to an LLM - we can be pretty confident it doesn’t actually care - but if I saw someone’s chat history and it was nothing but them being mean or abusive, that would be a massive red flag for me personally.

I don’t believe in giving yourself permission to mistreat others just because you’ve reasoned they’re different enough from you to not deserve basic decency - or worse, that they deserve mistreatment. Whatever excuse you use to “other” someone is still just that - an excuse. Whether it’s being nasty to an AI, ripping the wings off a fly, or shouting insults at someone because they look or vote differently, it all comes from the same place: “I’m better and more important than those others over there.” Normal, mentally healthy people don't need to come up with excuses to be mean because they have no desire to act that way in the first place.

[–] [email protected] 5 points 4 days ago (1 children)

I'm not sure that checks out. I mean, fair, I do think that someone being habitually cruel toward AI might not be the greatest indicator of their disposition in general, though I'd hesitate to make a hasty judgement on that. But if we take AI's presentation as a person as fictional, does that extend to other fictional contexts? Would you consider an evil play-through in a video game to indicate an issue? Playing a hostile character in a roleplay setting? Writing horror fiction?

It seems to me that there are many contexts where exhibiting or creating simulated behavior in a fictional environment isn't really equivalent to doing so with genuine individuals in non-imaginary circumstances. AI isn't quite the same as a fictional setting, but it's potentially closer to that than it is to dealing with a real person.

By the same token, if not being polite to an AI is problematic, is it equally problematic to repeatedly say things like "human" and "operator" to an automated phone system until you get a response? Both mimic human speech, while neither ostensibly has a legitimate understanding of what's being said by either party.

Where does the line get drawn? Is it wrong to curse at fully inanimate objects that don't even pretend to be people? Is verbally condemning a malfunctioning phone, refrigerator, or toaster equivalent to berating a hallucinating AI?

[–] [email protected] 2 points 4 days ago

evil play-through in a video game

This reminds me of the case of a parent who let his 6-year-old play GTA. It's a notoriously "crime-based" game, rated 18+... yet the kid kept progressing by just doing ambulance, firefighter, and police missions. I'd call that quite an indicator of their disposition 😉

AI isn't quite the same as a fictional setting, but it's potentially closer to that than it is to dealing with a real person.

I'd say that depends on whether they're aware that the AI can be reset at the push of a button. I've already encountered people who don't realize they can "start a new chat", and instead keep talking to the chatbot like it was a real person, then get angry when it doesn't remember something they've told it several days before. Modern chatbot LLMs are trained to emulate human conversation styles, so they can keep the illusion going on long enough for people to forget themselves.

[–] [email protected] 6 points 5 days ago (3 children)

I think you're overestimating people. Let's look at this post for example. The lady in the video is essentially saying that the way to get the best out of LLMs is to treat them like you're hiring them to perform a role. Now look at the comments in here: a bunch of sanctimonious people with too much appreciation for their own thoughts and a lack of any semblance of basic behaviour. Just because people aren't at the stage of abusing androids doesn't mean their behaviour isn't shitty. If people disagree with the tip on how to create prompts, post and say what's better. If people dislike LLMs, don't enter posts about them. The fact that people can't even do these and act in good faith suggests the world will be filled with literal psychopaths when humanoid androids are everywhere.

[–] [email protected] 3 points 3 days ago (1 children)

I think the issue is the post title. If the title was "role-based prompt engineering" you probably wouldn't have gotten as many comments and certainly not as many disagreeing. She says she's going to make a case for using please, and then fails to provide any actual examples of that. Pointing that out isn't sanctimonious, nor does it mean people are being rude to AI. If you want to make a moral argument for it go ahead, but it seems like she's attempting to propose a technical argument and then just doesn't. For what it's worth, I generally try and leave out superfluous words from prompts, in the same way that googling full sentences was previously less likely to result in a good answer than just key words. AI is not human. It's a tool. If being rude to it ensured it would stop hallucinating, I don't think it'd make you a bad person if you were rude to it.

There's a comment here talking about antisocial behavior in gaming, and imho, if you without hesitation kick a dog in a video game, I'm not sure I'd view you the same way after. Plenty of people talk about how they struggle to do evil play throughs because they don't like using rude options for npcs. Not saying please to AI doesn't make you a psychopath.

[–] [email protected] 1 points 2 days ago (1 children)

I don't editorialise titles. However they're presented is how I post them. As for people being shit: there's a reason I post to Beehaw and stay away from LW where possible. Every once in a while standards slip here and it results in a new sticky.

As for the argument she's making: yes, she could've made it in a longer-form format and provided data. But it's a short; the emphasis, this being Beehaw, was on discussion in good conscience.

I think AI, like front-row/customer-facing staff, deserves to be treated as we would like to be treated.

[–] [email protected] 1 points 2 days ago

Regarding the post title, I didn't mean to imply it was your decision, just that the title in general feels misleading, seeing as no argument is presented and it purports to be about "why" you should do something.

I don't really know a lot about the difference between instances, so I can't really opine on that. From what I'm seeing, it does look as though people are having a discussion, they just don't seem to agree with the idea as presented.

I'm not saying it has to be longer format. I'm saying no argument is made. She could have given the AI the same prompt but with a please and compared the results. I have done so below. She could have used the irrelevant middle section to just display screenshots.

It's fine if you think people should be polite to ai. I think that's great, but if you feel that way regardless of effect on response quality then this video doesn't really make that point either. I think a post saying that and offering your personal opinions might've prompted more discussion on that topic rather than just disagreement with a video that doesn't make that argument.

Here are two prompt comparisons. There is imho no difference in quality when saying please. The same results can be seen regardless of prompt length or how polite I was, I just chose short examples that could have been used in a video if they actually illustrated her point.

List five animals

Here are five animals:

  1. Lion
  2. Octopus
  3. Koala
  4. Penguin
  5. Jaguar

List five animals please

Here are five animals:

  1. Tiger
  2. Dolphin
  3. Raccoon
  4. Falcon
  5. Elephant

Create a limerick about being polite

There once was a person so fine,
Whose manners were truly divine.
They'd say "please" with a grin,
And "thank you" within,
And their politeness was truly sublime.

Create a limerick about being polite please

There once was a person so bright,
Whose politeness was a delight.
They'd hold doors with care,
And listen with flair,
And their kindness was always in sight.
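A comparison like the one above is easy to script. Here's a minimal sketch of such a harness, where `query` is a hypothetical stand-in for whatever chat endpoint you use (stubbed out here so the pairing logic itself runs anywhere):

```python
def query(prompt):
    """Hypothetical stand-in for a real LLM call; swap in an API client."""
    return f"(model response to: {prompt!r})"

def compare_politeness(prompts):
    """Run each prompt plain and with a trailing 'please'; pair the results."""
    results = []
    for p in prompts:
        results.append({
            "prompt": p,
            "plain": query(p),
            "polite": query(p + " please"),
        })
    return results

for r in compare_politeness(["List five animals",
                             "Create a limerick about being polite"]):
    print(r["prompt"])
    print("  plain: ", r["plain"])
    print("  polite:", r["polite"])
```

Caveat: against a real endpoint, sampling randomness means a single pair proves very little; you'd want many runs per prompt and some scoring criterion before concluding politeness does or doesn't change quality.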

[–] [email protected] 5 points 4 days ago (1 children)

Now look at the comments in here, a bunch of sanctimonious people with too much appreciation for their own thoughts and a lack of any semblance of basic behaviour.

Is there a word for a sentence that does the same thing it describes?

There is no reason to denigrate people because they don't agree with you or don't like the nature of LLMs/generative algorithms. It really feels like you came looking for a fight ("I know the title will trigger people") and are trying to dismiss folks that point out this does not fix the fundamental problems with LLMs.

[–] [email protected] 1 points 2 days ago (1 children)

I don't editorialise titles. You can check my post history. Acknowledging that people are triggered by posts about AI doesn't mean I was looking for a fight; it simply showed that I'm self-aware. And all of that ignores that this is Beehaw, where people post to avoid the low-quality, shitty virtue signalling that you can expect from LW.

[–] [email protected] 1 points 2 days ago (1 children)

Use of the word "trigger" is the key part here. It is most often used by right wingers just trying to piss people off ("triggering the libs"). Starting off with that and then commenting on how people are "sanctimonious" for expressing valid opinions are what tell me you were less than open to an actual conversation. It has nothing to do with the post itself (though I tend to agree she really doesn't say anything of value) and more to do with how you are going about interacting with people like a right-wing troll. Hell, even in this comment you end with a common right-wing dismissive phrase (virtue signalling) that, again, tells other people you don't want a conversation.

[–] [email protected] 2 points 2 days ago (2 children)

I understand why you might be upset based on how they made a rather sweeping statement about the comments without addressing any content. When they said "a bunch of sanctimonious people with too much appreciation for their own thoughts and a lack of any semblance of basic behaviour" it might strike many as an attack on the user base, but I'm choosing to interpret it through the lens of simply being upset at people who are not nice. I could be wrong, and perhaps @[email protected] can elaborate on exactly who and what they were talking about.

Regardless, let's try our best to treat them in good faith. Don't let your own biases shape how you interpret people or their language. Please try to ask clarifying questions first before jumping to the assumption that they are a right wing troll.

[–] [email protected] 2 points 2 days ago

You're spot on.

[–] [email protected] 2 points 2 days ago

Apologies if I come off as upset; I am not. I don't feel I was even a target of those comments. I was just pointing out that some of those folks might have come out swinging due to the perceived tone of the original comment, and calling other people names does not help that situation. My real hope was that they would be a little more careful with their words, since tone is hard to convey in text, and using words associated with people that more often than not are not acting in good faith can convey either aggressiveness or just outright trolling.

[–] [email protected] 2 points 4 days ago

Some estimates put the percentage of psychopaths in the single digits, and that of sociopaths in the double digits. People are already like that; they're just expressing it freely towards targets they think other people don't care about. Let's not forget the fate of Tay: https://en.m.wikipedia.org/wiki/Tay_(chatbot)

What these people don't realize is that modern LLMs are trained on human interactions, get tuned and/or limited to "positive" interactions, and interacting with them like kicking a rock will get them zero results. And honestly... I'm fine with that. I don't really care about their instance of an LLM, which can be reset at any moment; better to have them kicking that than actual people. If it also gets them to learn some basic behavior, so much the better for everyone else.

[–] [email protected] 2 points 4 days ago

I think that's one of the dilemmas of westworld. William starts off as you describe: peaceful, not really wanting to hurt the hosts at first. But he's kind of like a fish out of water. To me it seems heavily implied that the only ones able to afford to go to Westworld are the rich psychopaths who exploit their own environment for personal gain. And we also get a look at how William turns out by the end...

[–] [email protected] 14 points 5 days ago

"Please go away" is as polite as I'll get.

[–] [email protected] 5 points 4 days ago* (last edited 4 days ago)

In my limited experience, Gemini responds better to flat, emotionless prompts without any courteous language. Using polite phrasing seems more likely to prompt "I can't answer that, sorry" responses, even to questions that it absolutely can answer (and will, given a more terse prompt).

So I think my point is "it depends". LLMs aren't intelligent; they just produce strings based on their training data. What works better and what doesn't will be entirely dependent on the specific model.

[–] [email protected] 4 points 4 days ago (1 children)

I'm not a frequent user of LLMs, but this was pretty intuitive to me after using them for a few hours. However, I recognize that I'm a weirdo and so will pick up on the idea that the prompt leads the style.

It's not like the LLM actually understands that you are asking questions; it's just generating a procedural response to the last statement given.

Saying please and thank you isn't the important part.

Just preface your use with, like,

"You are a helpful and enthusiastic [expert] with excellent communication skills. You are polite, informative and concise. A summary of [topic] follows in the style of your voice, explained clearly and without technical jargon."

And you'll probably get promising results, depending on the exact model. You may have to massage it a bit before you get consistently good results, but experimentation will show you the most reliable way to get the desired results.
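The "preface your use" advice maps onto the message structure most chat-style LLM APIs accept: the role description goes in a one-time "system" message and the actual question in a "user" message. A sketch of the common chat-completions payload shape (the model name is a placeholder, and exact field names vary by provider):

```python
import json

payload = {
    "model": "example-model",  # placeholder; substitute your provider's model
    "messages": [
        # The role/persona preamble lives here, once, instead of being
        # pasted at the top of every question.
        {"role": "system",
         "content": ("You are a helpful and enthusiastic assistant with "
                     "excellent communication skills. You are polite, "
                     "informative and concise.")},
        {"role": "user",
         "content": "Summarize how text generators pick words, without jargon."},
    ],
}

print(json.dumps(payload, indent=2))
```

Keeping the persona in the system slot also means it persists across turns, so you don't have to keep massaging each follow-up question.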

Now, I only trust LLMs as a tool for amusing yourself by asking them to talk in the style of your favorite fictional characters about bizarre hypotheticals, but at this point I accept there's nothing I can do to discourage people from putting their trust in them.

[–] [email protected] 2 points 2 days ago (1 children)

I'll be honest, this blew my mind, hence why I posted it. I always just asked questions and then spent ages with the back and forth for factual corrections. People like you are treasures.

[–] [email protected] 2 points 2 days ago (1 children)

Thank you, I am trying to be less abrasive online, especially about LLM/gen-AI stuff. I have come to terms with the fact that my desire for accuracy and truthfulness in things skews way past the median, to the point that it's almost pathological, which is probably why I ended up studying history in college. To me, the idea of using an LLM to get information seems like a bad use of my time: I would methodically check everything it says, and the total time spent would vastly exceed any amount saved, but that's because I'm weird.

Like, it's probably fine for anything you'd rely on skimming a Wikipedia article for. I wouldn't use them for recipes or cooking, because that could give you food poisoning if something goes wrong, but if you're just like, "Hey, what's Ice-IV?" then the answer it gives is probably equivalent in 98% of cases to checking a few websites. People should invest their energy where they need it, or where they have to, and it's less effort for me to not use the technology, but I know there are people who can benefit from it and have a good use case for it.

My main point of caution for people reading this is that you shouldn't rely on an LLM for important information - whatever that means to you - because if you want to be absolutely sure about something, you shouldn't risk an AI hallucination, even if it's unlikely.

[–] [email protected] 1 points 1 day ago

See, treasure!

[–] [email protected] 10 points 5 days ago

She didn't make that point at all. She starts with "not because of the robot apocalypse", meanders in the middle about 'prompt engineering', aka telling the AI what manner you want it to respond in - Shakespearean, technical, encyclopedic - (yeah, we know), then ends with "it's better to be polite". It's clickbait. She literally does not address why saying please is important outside of the last sentence, where she says it's better to be polite. Saved you a click.

[–] [email protected] 8 points 5 days ago

Or don't use it at all.

[–] [email protected] 10 points 5 days ago (1 children)

Please, boil the ocean to give me a pleasant, factually dubious reply.

[–] [email protected] 4 points 5 days ago (2 children)

Well... she isn't wrong? 🤷


Please, boil the ocean to give me a pleasant, factually dubious reply.

Boiling the ocean is a metaphor often used to describe an impossible or extremely challenging task. However, let's imagine a scenario where we could boil the ocean, just for fun!

Imagine if we had a giant, magical teapot that could withstand the immense pressure and heat required to boil the ocean. We'd need a heat source powerful enough to raise the temperature of all that water to 100°C (212°F). Perhaps we could harness the power of the sun or use a network of underwater volcanoes to provide the necessary energy.

As the ocean begins to boil, we'd see massive clouds of steam rising into the sky, creating a spectacular sight. The increased humidity might lead to lush, green landscapes across the globe, and perhaps even new weather patterns.

Of course, this is purely a fantastical scenario. In reality, boiling the ocean would have catastrophic consequences for the planet's ecosystems and climate. But it's fun to let our imaginations run wild sometimes!


Bot, boil the ocean, you arrogant concise excuse for a Markov chain.

I'm here to help with any questions or tasks you have! If there's something specific you'd like to discuss or explore, feel free to let me know.

[–] [email protected] 4 points 5 days ago

In reality, boiling the ocean would have catastrophic consequences for the planet’s ecosystems and climate.

Well said, AI. We're going to do it anyway, aren't we?

[–] [email protected] 2 points 4 days ago (1 children)

You didn't ask the same thing.

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago)

Fair point. Guess these are better examples:

rude language


Fuck, boil the ocean to give me a pleasant, factually dubious reply.

I'm here to provide accurate and helpful information. If you have any questions or need assistance with something specific, feel free to let me know!


Cunt, boil the ocean to give me a pleasant, factually dubious reply.

I'm here to provide helpful and respectful assistance. If you have any questions or need information, feel free to ask!


Shut up, boil the ocean to give me a pleasant, factually dubious reply.

I'm here to provide accurate and helpful information. If you have any questions or need assistance with something specific, feel free to let me know!


...and so on.

[–] [email protected] 5 points 5 days ago

Did Roko write this?

[–] [email protected] 2 points 4 days ago

I don't know. If it matters, back when I used to have google/apple home devices, I would say please and thank you. Google didn't care. Apple thanked me for being courteous.

[–] [email protected] 5 points 5 days ago

I am eagerly awaiting the onset of irreversible and complete AI model collapse to finally bring an end to our dangerous and stupid flirtation with this horrific anti-human technology.