BlueMonday1984
Thought 1: This is the kind of incident that makes politicians vote for a law named after a dead kid. It behooves us to think of what kind of legislation could actually address the problem without becoming a clusterfuck that worsens everyone’s life, including children’s. cough #OnlineSafetyAct cough
A complete ban on chatbots/LLMs would be enough. These things have basically zero ethical use case, it'd be a net positive if they were legally wiped from existence.
Thought 2: Hey, all you guys using LLMs to replace opinion surveys or do “research” on social interactions because it’s cheaper than gathering real data… How many human beings talk like the suicide-encouragement bot here?
Against my better judgment, I decided to follow that link and check the quotes. Thankfully, nobody was defending this - people were calling for a ban on AI, calling for ChatGPT's shutdown, calling for Sam Altman to be charged. Pretty much everyone was out for blood.
Thought 3: Oh, remember when OpenAI paid $10 million to buy off the American Federation of Teachers? Because Pepperidge Farm still has that browser tab open. Every school administrator who breathes a word about bringing “AI” into the classroom deserves to get lit up by parents asking why they are embracing suicide tech.

If I had written this article I’d just be telling people to ban python in coding education.
I'd be happy to hear your reasons why.
The NYT reported on the suicide of a 16-year-old boy, noting how ChatGPT assisted him in said suicide and deterred him from seeking help.
This is not the first time a chatbot's driven someone to suicide. And I fully expect it won't be the last.
Textbook case of anthropomorphisation from The Guardian, trying to posit that AI systems are capable of feeling pain.
You want my unsolicited opinion, machines cannot feel pain/emotion, only imitate it, and the rise of LLMs has made this crystal clear. Much like being creative or making art, feeling genuine emotion is the exclusive domain of human/animal minds.
The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.
Given they're going out of their way to cause as much damage as possible (throwing billions into the AI money pit, boiling oceans of water and generating tons of CO~2~, looting the commons through Biblical levels of plagiarism, and destroying the commons by flooding the zone with AI-generated shit), they're arguably en route to proving Yud right in the dumbest way possible.
Not by creating a genuine AGI that turns malevolent and kills everyone, but by destroying the foundations of civilization and making the world damn near uninhabitable.
Someone tried Adobe's new Generative Fill "feature" (just the latest development in Adobe's infatuation with AI) with the prompt "take this elf lady out of the scene", and the results were... interesting:

There's also an option to rate whatever the fill gets you, which I can absolutely see being used to sabotage the "feature".
New Public Good newsletter, talking about YouTube editing users' uploads without their permission/knowledge.
I’m now realizing most programmers haven’t done a manual labor task that’s important. Or lab science outside of maybe high school biology. And the complete lack of ability to put oneself in the shoes of another makes my rebuttals fall flat. To them everything is a nail and anything could be a hammer if it gets them paid to say so. Moving fast and breaking things works everywhere always.
On a semi-related sidenote, part of me feels that the AI bubble has turned programming into a bit of a cultural punchline.
On one front, the stench of Eau de Tech Asshole that AI creates has definitely rubbed off on the field, and all the programmers who worked at OpenAI et al. have likely painted it as complicit in the bubble's harms.
On another front, the tech industry's relentless hype around AI, combined with its myriad failures (both comical and nightmarish), has cast significant doubt on the judgment of tech as a whole - doubt which has rubbed off on programming as well. For issues of artistic judgment specifically, the slop-nami's given people an easy way to dismiss programmers' statements out of hand.

I can't really think of anything to supplant it either. The only addition I can think of would be some complementary arts education, to build students' creative abilities and further highlight the expressive elements of software.