this post was submitted on 28 Feb 2026
1306 points (99.6% liked)

[–] awaysaway@sh.itjust.works 3 points 54 minutes ago
[–] ArmchairAce1944@discuss.online 1 points 32 minutes ago

I last used chatGPT in 2024. Never found it satisfying.

[–] CanadianMade@lemmy.ca 28 points 3 hours ago (1 children)
[–] Sazruk@lemmy.wtf 6 points 3 hours ago
[–] raskal@sh.itjust.works 42 points 4 hours ago (1 children)

Canada recently had its second-worst school shooting ever. The killer had many interactions with ChatGPT that warranted banning her account. A whistleblower has claimed that employees wanted to inform Canada's police of these comments but were denied by OpenAI's management.

They had a chance to stop the deaths of 8 people, most of whom were young children, but failed to do anything.

FUCK CHATGPT AND THOSE BASTARDS THAT RUN IT

[–] jagungal@aussie.zone 2 points 49 minutes ago

Why would you not contact police? I understand that this is a systemic failure and the blame does not lie with that employee, but if it were me, I'd rather be out of a job than have those deaths on my conscience for the rest of my life.

[–] lmdnw@lemmy.world 53 points 5 hours ago (1 children)

Sam Altman is objectively a bad human being.

[–] ChaoticEntropy@feddit.uk 24 points 5 hours ago (2 children)

Sam Altman is just some fail-upward money guy; he has eventually been removed from basically every prior position he's held.

[–] jaennaet@sopuli.xyz 2 points 1 hour ago

That doesn't mean he can't also be an objectively bad human being

[–] PolarKraken@lemmy.dbzer0.com 10 points 4 hours ago

Seems like his career has largely been lying and making impossible promises, so. The folks who do that well always manage to exit the stage before the magic tincture is revealed to just be piss 🤷‍♂️

[–] glitchdx@lemmy.world 11 points 4 hours ago

Glad that I've switched platforms. sam altman should probably be in prison or something.

I've been using Venice lately, they claim (I have done zero research to determine if this is true) that they're privacy focused. They do run uncensored models, which is a big plus.

That said, I find myself using the lying machine less these days. It was like a fun video game when I first got my hands on it, entertaining for a while, but I'm moving on. Maybe I'm not imaginative enough to use it to its fullest potential, but I find more fulfillment actually writing and actually drawing (even though I am very bad at both).

[–] SpiceDealer@lemmy.dbzer0.com 13 points 4 hours ago

I'd argue that an armed uprising would have a greater effect than a smaller internet-based boycott but I'm just some random guy on some niche internet forum so... who's to say?

[–] ScoffingLizard@lemmy.dbzer0.com 14 points 4 hours ago

I am canceling my subscription now. Fuckers.

[–] I_Has_A_Hat@lemmy.world 15 points 5 hours ago* (last edited 5 hours ago)

Yea, I can just imagine OpenAI is really struggling with their business decision.

On the one hand, they have multi-billion dollar contracts with the US Military that will make them all fabulously wealthy beyond their wildest dreams.

On the other, they have a handful of individuals leaving that might amount to a few thousand dollars of lost revenue.

Gosh, it must sure have been a tough choice.

It’s because this administration wants to use AI/ML to create a list of domestic strike targets based on people who have said things dumpy doesn’t like.

[–] boogiebored@lemmy.world 15 points 5 hours ago (3 children)

So many companies are cozying up to the fascist regime as this is the late stage of capitalism.

A list of some of these companies: https://x.com/vxunderground/status/2024200204296061089?s=20

[–] Burghler@sh.itjust.works 17 points 5 hours ago* (last edited 5 hours ago)

The list is hosted on a fascism-aligned owner's misinformation site

Ok

[–] SomeRandomNoob@discuss.tchncs.de 10 points 5 hours ago

he/she says, and posts a link to one of those companies ...

[–] floofloof@lemmy.ca 6 points 5 hours ago

Your link isn't valid.

[–] cloudskater@piefed.blahaj.zone 65 points 8 hours ago

I cannot believe this is what it took for a boycott to go more mainstream. Tell me more about how so many people have no respect for the environment or the artists whose work they gleefully consume.

[–] theuniqueone@lemmy.dbzer0.com 55 points 8 hours ago (1 children)

Anthropic is still scum for being completely fine with helping America oppress the rest of the world.

[–] XLE@piefed.social 9 points 4 hours ago* (last edited 4 hours ago) (2 children)

Anthropic is scum, accepting money from foreign dictators, forcing their software on minorities while insisting it was conscious and had emotions just like them, praising the Trump administration, making up scary stories to get more funding...

...In many ways, they're worse than OpenAI. They're just running with the same playbook that Sam Altman used to use to pretend he was a good guy.

[–] Vlyn@lemmy.zip 2 points 2 hours ago (1 children)

I mean, they praised the Trump administration for benefiting their business, which is... fair? I guess?

If you ask Claude Sonnet 4.6 about Trump, it leans quite negative, as it should.

[–] XLE@piefed.social 1 points 1 hour ago

I missed when sucking up to the Trump administration and echoing Cold War style nationalism was "fair". If that's the case, OpenAI's behavior is fair.

Fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems.

Our strong preference is to continue to serve the Department and our warfighters

Dario "Warfighter" Amodei

[–] Hackworth@piefed.ca 2 points 4 hours ago (1 children)

They insisted Claude was human?

[–] XLE@piefed.social 6 points 4 hours ago (1 children)

Sorry, not quite, but close. From 404 media

When users confronted Clinton with their concerns, he brushed them off, said he would not submit to mob rule, and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group.

[–] Hackworth@piefed.ca 3 points 4 hours ago (1 children)

Oh, that guy! To be fair, that's one employee, not Anthropic's actions or position. You mentioned forcing their software on minorities while insisting it was better than it was, and I was getting OLPC flashbacks. But Anthropic looking for funding in the UAE and Qatar is shitty. I can't seem to find anything about whether or not they went through with those contracts.

[–] XLE@piefed.social 6 points 4 hours ago* (last edited 4 hours ago) (1 children)

Jason Clinton is Anthropic’s Deputy Chief Information Security Officer. That means Jason knew better, and he was using his position as a moderator (and supposedly a security expert) to try gaslighting a vulnerable minority into believing his favorite toy was "secure" when it was not.

[–] Hackworth@piefed.ca 2 points 3 hours ago (1 children)

I mean, I'm not gonna defend him. But fucking up a discord that you're a mod of isn't really in the same ballpark as taking money from dictators or directing fully autonomous strikes. Also, from the read, it really sounds like that Deputy CISO was a prime example of cyber-psychosis, or AI mania, or whatever we've decided to call it. And I assume he is part of the same vulnerable minority?

[–] XLE@piefed.social 2 points 3 hours ago* (last edited 3 hours ago) (1 children)

Every example we have of Anthropic's behavior paints a picture of an immoral company that pretends to be moral. It's bad enough that they continue doing harm, but then they dress it up with phrases like "AI Safety" and "Information Security". (And every press release they create to describe how scary good their system is, tends to be followed up by a sudden cash infusion from an openly morally bankrupt company like Google or Amazon.)

I reserve zero empathy for the people on the abuser side of an abusive dynamic. Maybe Elon Musk is autistic too. I don't really care. Only Moloch knows their hearts. I'll judge them for their actions.

[–] Hackworth@piefed.ca 3 points 3 hours ago (1 children)

I did find an update on that funding, btw. Anthropic already took money from Qatar (the QIA), but the amount isn't known - likely around $100M. The UAE deal has yet to happen, but if it does, it would be "hundreds of millions".

[–] XLE@piefed.social 2 points 2 hours ago

Interesting. I appreciate you doing the digging to check. It's frustrating that people spent so much time looking at the fact that Anthropic had an uncrossed red line that they didn't look at all the red lines that were already crossed - in the very article about those supposed red lines. Such is PR, I guess.

I suppose you saw that "He Will Not Divide Us 2.0" letter from OpenAI and Google employees who promised to stand behind Anthropic. Never mind the fact that OpenAI split.... Doesn't anybody know Google already does mass surveillance of Americans?

...I ramble.

[–] perishthethought@piefed.social 135 points 11 hours ago (3 children)

mainstream

I'll believe that when my sisters start saying this. Till then, it's just us privacy fans screaming in a dark cave, enjoying the echo.

[–] panda_abyss@lemmy.ca 181 points 11 hours ago (2 children)

They went from standing with Anthropic to throwing them under the bus real fast

[–] floofloof@lemmy.ca 169 points 11 hours ago* (last edited 11 hours ago) (3 children)