this post was submitted on 06 Feb 2026
399 points (99.3% liked)

Fuck AI

5629 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
top 50 comments
[–] MushuChupacabra@piefed.world 3 points 31 minutes ago

No problem. Just sue your AI assistant for negligence.

Make that AI regret its decision to power fuck your company.

[–] FlashMobOfOne@lemmy.world 59 points 6 hours ago* (last edited 6 hours ago) (3 children)

This is way more common than people realize.

LLM chatbots have been integrated into company websites with so few guardrails that it can be laughably easy to get a good discount. I learned this when I bought a new mattress from Mattress Firm three years ago. YMMV, but these tips may help:

  • Ask about a specific item or service. I asked if they could give a bigger discount and free delivery on x mattress.
  • Try to target a weekend that's already got a sale running. (For me it was Presidents Day)

I got an extra 25% off the sale prices and it was glorious.

[–] Taldan@lemmy.world 13 points 3 hours ago (1 children)

Bit of important context from OOP:

Code wasn't accepted. He copy/pasted his code into the order comments section when he paid his deposit. He demanded the figures be adjusted for his discount code.

Changes the situation quite a bit

[–] kkj@lemmy.dbzer0.com 8 points 3 hours ago

Damn near turns it on its head. Imagine someone getting a random floor employee at Best Buy to say there was an 80% discount code when no such code existed, and to hand over a nonsense code that didn't work, and then the customer trying to get a computer for 80% off by complaining when the code came back as invalid.

[–] jj4211@lemmy.world 12 points 4 hours ago

It cuts both ways. I saw a case where some guy got a phone call from a company looking to fulfill a contract; he said he would basically drive 50 miles for $20,000, and the AI voice said "sounds good, let me connect you to someone to finalize the contract". Presumably at that point the human was going to intervene and say hell no, but it's hilarious that the AI voice was so upbeat about accepting a ludicrous offer.

[–] Brunbrun6766@lemmy.world 8 points 4 hours ago

President's Day, AKA New Mattress Day

[–] MedicPigBabySaver@lemmy.world 17 points 5 hours ago

Fuck Reddit and Fuck Spez.

[–] zr0@lemmy.dbzer0.com 28 points 6 hours ago (1 children)

Wasn’t there a car dealer in NA who had a similar issue? That was at least a year ago, and even back then it was obvious the dealer had to honor the deal made by the chat agent.

[–] BigDaddySlim@lemmy.world 14 points 6 hours ago (1 children)

Yeah, some Chevrolet dealer, though the guy didn't get the car in the end.

link to Medium article about it

[–] zr0@lemmy.dbzer0.com 10 points 5 hours ago

Thanks for linking it! Too bad the hacker never got their car for $1. I believe with enough pressure, they would have achieved that, but they were obviously acting in good faith.

[–] tiramichu@sh.itjust.works 226 points 9 hours ago* (last edited 9 hours ago) (4 children)

Good.

If a customer service agent made this discount offer and took the order, it would naturally have to be honoured - because a human employee did it.

Companies currently are getting away with taking the useful (to them) parts of AI, while simultaneously saying "oh it's just a machine it makes mistakes, we aren't liable for that!" any time it does something they don't like. They are having their cake and eating it.

If you use AI to replace a human in your company, and that AI negotiates bad deals or gives incorrect information, you should be held liable for that exactly as if a human had done it.

Would that mean businesses are less eager to use AI? Yes it fucking would, and that's the point.

[–] Denjin@feddit.uk 52 points 9 hours ago (3 children)

If a customer service agent made this discount offer and took the order, it would naturally have to be honoured - because a human employee did it.

This isn't actually true. Even with a written contract (which the original poster doesn't mention), if there's a genuine mistake in the pricing that the purchaser should reasonably have noticed, you don't have to honour the price offered.

Imagine someone called a customer service agent and, through some sort of social engineering, manipulated them into offering a price they shouldn't have offered. As the employer, you wouldn't have to honour that contract, especially if you have evidence of the manipulation, for instance through a recorded phone call.

[–] snooggums@piefed.world 42 points 8 hours ago

This isn't actually true. Even with a written contract (which the original poster doesn't mention), if there's a genuine mistake in the pricing that the purchaser should reasonably have noticed, you don't have to honour the price offered.

And yet customers are constantly held to unreasonable terms and conditions for interest rates they don't understand.

The ability to break a contract always goes to the one with more power, the business.

[–] vpol@feddit.uk 53 points 8 hours ago (1 children)

If there is evidence of fraud, yes.

If I asked you for a big discount and you offered me an 80% discount, I see no issue here. That doesn't look like an “obvious mistake”.

[–] Denjin@feddit.uk 32 points 8 hours ago (3 children)

The OP went into more detail in the reddit comments:

Chatbot isn't supposed to be making financial decisions. It's supposed to be answering customer questions between 6pm and 9am when I'm not around.

It's worked fine for 6+ months, then this guy spent an hour chatting with it, talked it into showing how good it was at maths and percentages, diverted the conversation to percentage discounts off a theoretical order, then acted impressed by it.

The chatbot then generated him a completely fake discount code and an offer for 25% off, later rising to 80% off as it tried to impress him.

[–] Nalivai@lemmy.world 13 points 5 hours ago

Don't give a flying fuck what you think your bot should do. Your public facing interface gives a discount, I take a discount, simple as.

[–] flandish@lemmy.world 57 points 8 hours ago

deploy stupid code, win stupid prizes.

[–] ThePantser@sh.itjust.works 29 points 8 hours ago (3 children)

Still sounds like the AI is an idiot and did and said things it shouldn't have. But it still did it, and as a representative of the company it should still be held to the same standards as an employee. Otherwise it's fraud. Nobody hacked the system; the customer was just chatting, the "employee" fucked up, and the owner can take it out of their pay... oh right, it's a slave made to replace real paid humans.

[–] leftzero@lemmy.dbzer0.com 32 points 6 hours ago* (last edited 4 hours ago) (3 children)

The “AI” isn't an idiot.

It isn't even intelligence, nor, arguably, artificial (since LLM models are grown, not built).

It's just a fancy autocomplete engine simulating a conversation based on statistical information about language, but without any trace of comprehension of the words and sentences it's producing.

It's working as correctly as it possibly can; the business was simply scammed into using a tool (a toy, really) that by definition can't be suited to the job they intended it to do.
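
As a toy sketch of what "statistical autocomplete" means here (an illustration only; real LLMs use neural networks over tokens rather than word counts, but the underlying idea is the same): pick each next word purely from counted statistics, with no model of meaning.

    # Toy "statistical autocomplete": continue text using only bigram counts.
    # Hypothetical illustration -- real LLMs are vastly bigger, but equally
    # free of any comprehension of what the words mean.
    import random
    from collections import defaultdict

    corpus = ("our best discount is ten percent off . "
              "our best delivery offer is free delivery . "
              "ask support for a discount code .").split()

    # Count which words follow which (a bigram table).
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def autocomplete(word, length=8):
        out = [word]
        for _ in range(length):
            options = following.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))  # statistically plausible, nothing more
        return " ".join(out)

    print(autocomplete("our"))  # fluent-sounding output, zero understanding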

[–] skarn@discuss.tchncs.de 1 points 23 minutes ago

since LLM models are grown, not built

What kind of a distinction is that?

[–] illi@piefed.social 6 points 4 hours ago (1 children)

The “AI” isn't an idiot.

It is idiotic though

[–] leftzero@lemmy.dbzer0.com 4 points 4 hours ago

Yeah, quite.

Though, to be fair, the scammers and the LLMs themselves are pretty good at convincing their victims that the damn things are actually smart, to the point that some otherwise quite intelligent people have fallen for it.

And come to think of it, given that most investors have fallen hook, line, and sinker for the scam, if you're publicly traded, catering to their idiotic whims and writing off the losses caused by the LLM might actually be more profitable, as long as most of your customers aren't smart enough to take advantage of your silliness...

[–] ThePantser@sh.itjust.works 8 points 6 hours ago

Artificial Idiot

[–] ricecake@sh.itjust.works 12 points 6 hours ago

Eh, there's the legal concept of someone being an agent of the company. This bot wasn't expected to take orders, nor was it tied into the order system, it seems.

In the cases where the deal had to be honored, the bot had the ability to actually generate and place an order, and that was one of the primary things it did. The two cases that come to mind are a car dealership and an airline, where you could use the bot to actually place a vehicle order or to find and buy flights. As agents of the business, if they make a preposterous deal you're stuck with it.

A distinction can be made with stores where the person who comes up and offers to help you isn't an agent of the business. They can use the sales computer to find the price, and they can look for a discount, but they can't actually adjust the order price without a manager coming over to enter a code and do it.

In this case it sounds like someone did the equivalent of going to a Best Buy, talking to the person who helps you find the video games, and trying to get them to say something discount-code-ish. Once they did, they said they wanted to redeem that coupon and threatened to sue.

It really hinges on whether or not the bot was tied to the ordering system.
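
As a rough sketch of that distinction (illustrative names only, not anything from the actual systems involved), the difference is whether the bot's output is just text or is wired to something that can touch real orders:

    # Hypothetical sketch: an informational bot vs. an "agent" bot.
    class OrderSystem:
        def apply_discount(self, order_id: str, percent: int) -> None:
            print(f"Order {order_id}: {percent}% off applied")

    class InfoOnlyBot:
        """Can talk about discounts, but nothing it says changes an order."""
        def reply(self, message: str) -> str:
            return "Sure, try code SAVE80!"  # just words -- no order is touched

    class AgentBot:
        """Holds a handle on the order system, so its words become actions."""
        def __init__(self, orders: OrderSystem):
            self.orders = orders
        def reply(self, message: str, order_id: str) -> str:
            self.orders.apply_discount(order_id, 80)  # the deal actually lands
            return "Done, 80% off your order."

On that reading, the airline and dealership bots were closer to AgentBot, while this one sounds like InfoOnlyBot.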

[–] tiramichu@sh.itjust.works 16 points 8 hours ago* (last edited 8 hours ago) (2 children)

Sure, but it's all dependent on context.

The law as it is (at least in the UK) is intended to protect from honest mistakes. For example, if you walk into a shop and a 70" TV is priced at £10 instead of £1000, when you take it to the till the cashier is within their rights to say "oh that must be a mistake, we can't sell it for £10" - you can't legally demand that they do, even though the sticker said £10.

Basically what it comes down to in this chatbot example (or what it should come down to) is whether the customer was acting in good faith or not, and whether the offer was credible or not (which is all part of acting in good faith - the customer must believe the price is appropriate).

I didn't see the conversation and I don't know how it went. If it went like "You can see I do a lot of business with you, please give me the best discount you can manage" and they were told "okay, for one time only we can give you 80% off, just once" then maybe they found that credible.

But if they were like "I demand 80% off or I'm going to pull your plug and feed your microchips to the fishes" then the customer was not in good faith and the agreement does not need to be honoured.

Either way, my point in the comment you replied to isn't intended to be about this specific case, but about the general question of whether or not companies should be held responsible for what their AI chatbots say when those chatbot agents are put in a position of responsibility. And my feeling is very strongly that they SHOULD be held responsible - as long as the customer behaved in good faith.

[–] Denjin@feddit.uk 12 points 8 hours ago (1 children)

my feeling is very strongly that they SHOULD be held responsible - as long as the customer behaved in good faith.

I agree, but the OP expanded on their issue in the comments and it very much appears the customer wasn't acting in good faith.

[–] tiramichu@sh.itjust.works 13 points 7 hours ago* (last edited 7 hours ago) (1 children)

On the basis of that further information - which I had not seen - I agree completely that in this specific case the customer was in bad faith, they have no justification, and the order should be cancelled.

And if the customer took it up with small claims court, I'm sure the court would feel the same and quickly dismiss the claim on the basis of the evidence.

But in the general case should the retailer be held responsible for what their AI agents do? Yes, they should. My sentiment about that fully stands, which is that companies should not get to be absolved of anything they don't like, simply because an AI did it.

Here's a link to an article about a different real case, where an airline tried to claim that they had no responsibility for the incorrect advice their chatbot gave a customer, advice which then ended up costing the customer more money.

https://www.bbc.co.uk/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

In that instance the customer was obviously in the moral right, but the company tried to weasel their way out of it by claiming the chatbot can't be treated as a representative of the company.

The company was ultimately found to be in the wrong, and held liable. And that in my opinion is exactly the precedent that we must set, that the company IS liable. (But again - and I don't think I need to keep repeating this by now - strictly and only if the customer is acting in good faith)

[–] Denjin@feddit.uk 4 points 7 hours ago (1 children)

Again, I agree, we are in total agreement about the principle of whether a business should be held accountable for their LLM's actions. We're in "Fuck AI"!

I was merely pointing out in my original comment that there are absolutely grounds not to honour a contract where a customer has acted in bad faith to get an offer.

[–] tburkhol@lemmy.world 3 points 6 hours ago

Treat LLMs like "boss's idiot nephew," both in terms of whether the business should give them a privilege and whether the business should be liable for their inevitable screwups.

[–] fizzle@quokk.au 9 points 7 hours ago

If a human offered an 80% discount, the human's employer isn't magically forced to honor it.

Firstly, it depends whether the offer satisfies the criteria of "an offer" in a legal sense. Unless purchases for 8,000 GBP are generally negotiated in a chat window on a website, this kinda seems unlikely. Anyhow, let's assume it is an actual "offer".

If the offeror reneges on the contract, they may be liable to a claim from the buyer for the costs they've incurred as a result of the bogus purchase.

For example, if the buyer dispatched a truck to the vendor's warehouse and on arrival they were told the sale wouldn't go through, then the vendor might be liable for the cost of the delivery truck for a few hours.

In reality, no one is going to bother making a legal claim for that small amount of money.

[–] TheReturnOfPEB@reddthat.com 12 points 6 hours ago* (last edited 6 hours ago)

getting a CEO email address has never been easier

[–] schnokobaer@feddit.org 73 points 9 hours ago* (last edited 9 hours ago)

This makes me so happy. Based customer.

edit: sadly it wasn't what the original post made it sound like:

Code wasn't accepted. He copy/pasted his [fake, hallucinated by AI] code into the order comments section when he paid his deposit. He demanded the figures be adjusted for his discount code.

Bummer

[–] OldQWERTYbastard@lemmy.world 84 points 9 hours ago

But think of all of the money saved on customer service!

[–] muhyb@programming.dev 10 points 6 hours ago

Where can I find that company? Asking for a friend.

[–] stoy@lemmy.zip 28 points 9 hours ago

. <---- Do you see this violin? It is what I am playing for you now.

[–] U7826391786239@lemmy.zip 21 points 9 hours ago (1 children)

if you're enough of a sucker to think AI can do your sales transactions for you, and dumb enough to go to reddit for legal advice, then maybe being a business owner isn't the ideal path for you

[–] Taldan@lemmy.world 1 points 3 hours ago

As per OOP's comments, the AI was only informational. It didn't actually create a discount of any kind:

Code wasn't accepted. He copy/pasted his code into the order comments section when he paid his deposit. He demanded the figures be adjusted for his discount code.

[–] Ilovethebomb@sh.itjust.works 16 points 9 hours ago

I'm not sure about the UK, but where I live, if the price is clearly a mistake, say 10x cheaper than it should be, the seller doesn't need to honour it.

Given this was pretty clearly adversarial prompting, it wouldn't stand up in court here.

[–] GreenBeanMachine@lemmy.world 11 points 8 hours ago

Beautiful. I wonder why he is not asking his genius AI bot this question.

[–] sexy_peach@feddit.org 13 points 9 hours ago

I wish this were true and they had to honor the discount and it made international news
