
[–] dantheclamman@lemmy.world 42 points 1 day ago (12 children)

I think LLMs are fine for specific uses: brainstorming, debugging code, generic code examples, etc. People are just weary of oligarchs mandating how we use technology. We want to be customers, but they instead want to shape how we work, as if we were livestock.

[–] NotMyOldRedditName@lemmy.world 14 points 1 day ago (10 children)

Right? Like, let me choose if and when I want to use it. Don't shove it down our throats and then complain when we get upset or don't use it the way you want us to. We'll use it however we want, not however you want.

[–] NotMyOldRedditName@lemmy.world 17 points 1 day ago* (last edited 1 day ago) (8 children)

I should further add: don't fucking use it in places where it isn't capable of functioning properly and then try to deflect the blame from yourself onto the AI, like Air Canada did.

https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

When Air Canada's chatbot gave incorrect information to a traveller, the airline argued its chatbot is "responsible for its own actions".

Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada's chatbot promised a discount that wasn't available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother's funeral and then apply for a bereavement fare after the fact.

According to a civil-resolutions tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn't offer the discount. Instead, the airline said the chatbot was a "separate legal entity that is responsible for its own actions". Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.

The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.

[–] NotAnonymousAtAll@feddit.org 6 points 1 day ago (2 children)

ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees

That is a tiny fraction of a rounding error for a company that size. And it doesn't come anywhere near being just compensation for the stress and loss of time it likely caused.

There should be some kind of general punitive "you tried to screw over a customer or the general public" fee, defined as a fraction of the company's revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.

[–] merc@sh.itjust.works 6 points 1 day ago

It's a tiny amount, but it sets an important precedent. Not only Air Canada, but every company in Canada is now going to have to follow that precedent. It means that if a chatbot in Canada says something, the presumption is that the chatbot is speaking for the company.

It would have been a disaster to have any other ruling. It would have meant that the chatbot was now an accountability sink. No matter what the chatbot said, it would have been the chatbot's fault. With this ruling, it's the other way around. People can assume that the chatbot speaks for the company (the same way they would with a human rep) and sue the company for damages if they're misled by the chatbot. That's excellent for users, and also excellent to slow down chatbot adoption, because the company is now on the hook for its hallucinations, not the end-user.

Definitely agree. There should have been some punitive damages for making them go through that while they were mourning.
