this post was submitted on 10 Mar 2026
677 points (99.3% liked)

Technology (lemmy.world)

Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

[–] merc@sh.itjust.works 53 points 14 hours ago (3 children)

What is AI good at? Creating thousands of lines of code that look plausibly correct in seconds.

What are humans bad at? Reviewing changes containing thousands of lines of plausibly correct code.

This is a great way to force senior devs to take the blame for things. But if the goal is actually to avoid outages rather than just to assign blame, engineers will need to submit small, focused changes that the submitter understands and can explain clearly. Wouldn't it be simpler just to say "no AI"?
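To illustrate the point above with a toy example (the function and its bug are hypothetical, invented for illustration, not taken from the Amazon incidents):

```python
# A toy sketch of "plausibly correct" code: it reads cleanly in a
# large diff, but an off-by-one silently drops part of the output.

def moving_average(values, window):
    """Return the moving average of `values` over `window` samples."""
    # Bug: the range should be len(values) - window + 1. As written,
    # the final window is dropped, and an input of exactly `window`
    # items returns [] instead of a single average.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window)]

print(moving_average([1, 2, 3], 3))     # -> [] (should be [2.0])
print(moving_average([1, 2, 3, 4], 3))  # -> [2.0] (should be [2.0, 3.0])
```

Buried among thousands of generated lines, a slip like this is exactly what a human reviewer skims past.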

[–] Joeffect@lemmy.world 7 points 9 hours ago* (last edited 9 hours ago) (1 children)

If you ask a writer what AI is good for, they will say it's good for art, but to never use it for writing, because it's terrible at that.

If you ask an artist what AI is good for, they will say it's good for writing, but to never use it for art, because it's terrible at that.

[–] Mongostein@lemmy.ca 2 points 8 hours ago (1 children)

Conclusion… it’s good at neither… or am I missing your point?

[–] Overzeetop@sopuli.xyz 3 points 5 hours ago

The output looks good to people who are poorly versed in the domain the AI is being asked to work in, but it is often inefficient, or it fails in ways that an expert in the field would never miss.

(Ignore this part, I'm just rambling from here on.)

Depending on the context, you'll almost certainly get something that looks correct at first glance, especially if you're not an expert. If you are an expert, you wouldn't need to ask for such a task, and if you did to save time, you'd probably end up adjusting, correcting, or fixing several things to produce production-ready output.

I use it regularly for code, because the last language I had any training in proper syntax for was Fortran 77, and eventually the simple tasks I ask it to code for me work. I've asked it to do some Excel calculations (I'm not an Excel expert; I do a lot of mathematical manipulation in custom sheets), and some of them work, but most are either wildly convoluted or rely on obscure calls/functions rather than simply using standard logic and mathematical operations, which are easy to edit and change.

I've also asked it to do some graphical illustration (because I'm not a graphic artist), and it has produced nice-looking illustrations with zero basis in reality, e.g. "draw me an outline of Scotland in the style you'd see on a tourist map, and label these four cities with stars." It produced what I would expect an average American to estimate the outline of Scotland looks like, and it was equally accurate with the locations of the four cities (i.e. utterly incorrect).

[–] Earthman_Jim@lemmy.zip 17 points 13 hours ago* (last edited 13 hours ago)

AI's greatest feature in the eyes of the Epstein class is the ability to shift responsibility. People will do all kinds of fucked up shit if they can shift the blame to someone else, and AI is the perfect bag holder.

Just ask the school of little girls in Iran that was likely a target picked by AI using out-of-date information saying it was a barracks. Why bother confirming the target with current intel from the ground when no one is going to take the blame anyway?

[–] monkeyslikebananas2@lemmy.world 1 points 13 hours ago (1 children)

Or I suppose add extra work by walking an AI tool through making small incremental changes.

[–] merc@sh.itjust.works 4 points 10 hours ago (1 children)

In my experience, LLMs suck at making smart, small changes. To know how to make those, they would need to "understand" the entire codebase, and that's expensive.

[–] monkeyslikebananas2@lemmy.world 2 points 10 hours ago

Yeah, that's what I mean by extra work. I can make the change myself, or I can argue with Claude Code until it does what I want.