this post was submitted on 06 Feb 2026
478 points (99.2% liked)

Fuck AI

5629 readers
1353 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] vpol@feddit.uk 64 points 17 hours ago (1 children)

If there is evidence of a fraud - yes.

If I asked you for a big discount and you offered me 80% discount - I see no issue here. Doesn’t look like an “obvious mistake”.

[–] Denjin@feddit.uk 41 points 16 hours ago (3 children)

The OP went into more detail in the reddit comments:

Chatbot isn't supposed to be making financial decisions. It's supposed to be answering customer questions between 6pm and 9am when I'm not around.

It's worked fine for 6+ months, then this guy spent an hour chatting with it, talked it into showing how good it was at maths and percentages, diverted the conversation to percentage discounts off a theoretical order, then acted impressed by it.

The chatbot then generated him a completely fake discount code and an offer for 25% off, later rising to 80% off as it tried to impress him.

[–] flandish@lemmy.world 67 points 16 hours ago

deploy stupid code win stupid prizes.

[–] Nalivai@lemmy.world 14 points 13 hours ago

Don't give a flying fuck what you think your bot should do. Your public facing interface gives a discount, I take a discount, simple as.

[–] ThePantser@sh.itjust.works 35 points 16 hours ago (3 children)

Still sounds like the AI is an idiot that did and said things it shouldn't. But it did them anyway, and as a representative of a company it should be held to the same standards as an employee. Otherwise it's fraud. Nobody hacked the system; the customer was just chatting and the "employee" fucked up, and the owner can take it out of their pay.... oh right, it's a slave made to replace real paid humans.

[–] leftzero@lemmy.dbzer0.com 37 points 14 hours ago* (last edited 12 hours ago) (3 children)

The “AI” isn't an idiot.

It isn't even intelligence, nor, arguably, artificial (since LLM models are grown, not built).

It's just a fancy autocomplete engine simulating a conversation based on statistical information about language, but without any trace of comprehension of the words and sentences it's producing.

It's working as correctly as it possibly can; the business was simply scammed into using a tool (a toy, really) that by definition can't be suited for the job they intended it to do.
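For anyone curious what "fancy autocomplete based on statistical information about language" means mechanically, here's a toy bigram sketch in Python. (This is a deliberately crude illustration: real LLMs use neural networks over long token contexts, not word-pair counts, but the underlying idea of "emit a statistically likely continuation, with zero comprehension" is the same.)

```python
from collections import Counter, defaultdict

# Tiny stand-in "training corpus" (purely illustrative).
corpus = ("the model predicts the next word "
          "the model has no idea what the words mean").split()

# Count which word follows which: this is the "statistical information
# about language" -- co-occurrence frequencies, nothing more.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=5):
    """Emit the most frequent continuation at each step; no comprehension involved."""
    out = [word]
    for _ in range(steps):
        choices = following.get(out[-1])
        if not choices:
            break
        out.append(choices.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))
```

The output is grammatical-looking but meaningless, produced purely from frequency counts; that's the point of the "autocomplete engine" framing.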

[–] illi@piefed.social 9 points 12 hours ago (1 children)

The “AI” isn't an idiot.

It is idiotic though

[–] leftzero@lemmy.dbzer0.com 6 points 12 hours ago

Yeah, quite.

Though, to be fair, the scammers and the LLMs themselves are pretty good at convincing their victims that the damn things are actually smart, to the point that some otherwise quite intelligent people have fallen for it.

And come to think of it, given that most investors have fallen hook, line, and sinker for the scam, if you're publicly traded, catering to their idiotic whims and writing off the losses caused by the LLM might actually be more profitable, as long as most of your customers aren't smart enough to take advantage of your silliness...

[–] ThePantser@sh.itjust.works 11 points 14 hours ago

Artificial Idiot

[–] skarn@discuss.tchncs.de 1 points 8 hours ago (1 children)

since LLM models are grown, not built

What kind of a distinction is that?

[–] leftzero@lemmy.dbzer0.com 2 points 5 hours ago

I mean that no one designed or built the model, it's just a compressed representation of (the shape of) the data it's trained on.

If by artificial we mean something made by people, we could argue that this isn't (though by the same logic neither would a zip file).

[–] ricecake@sh.itjust.works 14 points 14 hours ago

Eh, there's the legal concept of someone being an agent of the company. It wasn't typically expected to take orders, nor was it tied into the order system it seems.

In the cases where the deal had to be honored, the bot had the ability to actually generate and place an order, and that was one of the primary things it did. The two cases that come to mind are a car dealership and an airline, where you could use it to actually place a vehicle order or to find and buy flights.
As agents of the business, if they make a preposterous deal you're stuck with it.

A distinction can be made to stores where the person who comes up and offers to help you isn't an agent of the business. They can use the sales computer to find the price, and they can look for a discount, but they can't actually adjust the order price without a manager coming over to enter a code and do it.

In this case it sounds like someone did the equivalent of going to a Best Buy and talking to the person who helps you find video games, trying to get them to say something discount-code-ish. Once they did, they said they wanted to redeem that coupon and threatened to sue.

It really hinges on if it was tied to the ordering system or not.

[–] Denjin@feddit.uk 4 points 16 hours ago

I don't disagree, but this is an issue of when/where it's appropriate to use an LLM to interact with customers and when it isn't. If you present an LLM to the public, it will get manipulated by people who are prepared to put in the effort to make it do something it shouldn't.

This also happens with human employees, but it's generally harder to pull off, so it's less common. This sort of behaviour is called social engineering, and it's used by fraudsters and scammers to get people to do what they want (typically handing over their bank details), but the principle is the same: you're manipulating someone (or, in this case, something) into doing something they/it shouldn't.

Just because we don't like the fact that the business owner deployed an LLM in a manner they probably shouldn't have doesn't mean the customer isn't the one in the wrong, or that they didn't void whatever contract they had through their own actions. Whether it's a human or an LLM on the other end of the chat doesn't actually make any difference.