this post was submitted on 14 Feb 2026
331 points (99.1% liked)

Technology


The evolution of OpenAI’s mission statement.

OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.

OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.

As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift, one that has largely gone unreported outside of highly specialized outlets.

And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.

top 21 comments
[–] jjlinux@lemmy.zip 4 points 3 hours ago

What is there to test? The answer to this is so clear that just asking that question seems pretty dumb to me.

[–] 1984@lemmy.today 5 points 4 hours ago* (last edited 4 hours ago)

Datacenters popping up everywhere, sucking energy and water like entire cities, economy crashing...

We are literally watching dystopia being built around us. It's an interesting experience to watch.

I'm happy I got to experience the 80s. That was peak humanity. Lots of cool movies and music, and it was before big tech ruined humanity. People could date, start a family, buy a house.

[–] tomiant@piefed.social 25 points 22 hours ago (1 children)

> a test for whether AI serves society or shareholders

Gee I wonder which one's gonna win.

[–] Rentlar@lemmy.ca 3 points 22 hours ago

Who has more money? OpenAI needs buttloads of it right about now from all the promises they have made.

[–] FauxLiving@lemmy.world 14 points 22 hours ago

~~Don't be evil~~

[–] Tehdastehdas@piefed.social 33 points 1 day ago (3 children)

> benefit humanity as a whole

The Borg from Star Trek fills that requirement. My headcanon is that the people from its home planet made an AGI with the given goal of “benefiting humanity as a whole”, and it maximised that goal by building the Borg: making humanity into a whole by connecting everyone to a hive mind and forcibly assimilating all other species to benefit humanity as a whole.

[–] CosmoNova@lemmy.world 3 points 5 hours ago

So can we expect ChatGPT to become a shareholder hive mind then?

[–] karashta@piefed.social 14 points 1 day ago

Oh what a cool take!

I always liked to think of the Borg as being almost more like an emergent property of a certain level and type of organic/inorganic interfacing.

So it's not that one species was the Borg; all are, in potentia. And every time a species commits the same error or reaches the correct level of "perfection", it finds itself in a universe where the Borg already exist.

Like a small hive self-creates, opens its mental ears and is already subsumed into the greater Borg whose mind it finds.

I like that it adds almost a whole new level of arrogance to their statement, "Resistance is futile." They believe it not only because they are about to physically assimilate you, but because every advance you make brings you potentially closer to being Borg through your own missteps.

[–] monkeyslikebananas2@lemmy.world 5 points 1 day ago (1 children)

Gotta be honest, I thought that was the reason.

[–] thingAmaBob@lemmy.world 1 points 3 hours ago

I heard someone report Thiel talking about something similar to this, but I could have misunderstood. Either way, listening to him speak, he definitely is cuckoo bananas.

[–] brsrklf@jlai.lu 23 points 1 day ago (1 children)

"Safely" was already an empty promise to begin with, given how LLMs work.

So someone just thought "our investors don't value safety, let's get rid of that in the blurb". They are probably correct.

[–] panda_abyss@lemmy.ca 11 points 1 day ago

When they had their schism over Altman a couple of years ago, safety died.

[–] palordrolap@fedia.io 17 points 1 day ago

Dodge v. Ford Motor Company, 1919.

This case found and entrenched in US law that the primary purpose of a corporation is to operate in the interests of its shareholders.

Therefore OpenAI, based in California, would be under threat of lawsuit if they didn't do that.

This goose is already cooked.

[–] Paranoidfactoid@lemmy.world 8 points 23 hours ago

EVERYTHING IS FINE

[–] Diplomjodler3@lemmy.world 12 points 1 day ago

> ... its new structure is a test for whether AI serves society or shareholders

Gee, I can't wait to see the results of this test!

[–] winni@piefed.social 2 points 20 hours ago

ai serves society? you are boozing brake fluid

[–] runsmooth@kopitalk.net 3 points 23 hours ago* (last edited 23 hours ago)

OpenAI is the same as any other shareholder-driven corporation: it serves society, but that service primarily focuses on the shareholders. We're looking at a vehicle designed to take money and give it to shareholders (private, in this case, or otherwise).

The focus on growing data centres at public expense, AI slop, the circular nature of some of the investments going into AI, and the productivity gains (or lack thereof) are all part of it. We are not looking at any exceptionalism: AI isn't unique in its capability for catastrophic harm. What we eat and drink can easily be on that list.

AI and these American companies just want the money train to continue unabated and any regulation to go away.

[–] MalReynolds@piefed.social 2 points 21 hours ago

We're so damn lucky that LLMs are a dead end (diminishing returns on scaling even after years of hunting) and that they just pivoted to the biggest Ponzi scheme ever. Bad as that is (and as bad as the economic depression it will cause), it pales into insignificance compared with the damage these fucks would do with AGI (or, goddess forbid, ASI with the alignment they would try to give it).

[–] melsaskca@lemmy.ca 4 points 1 day ago

The government only cares about your safety when they need to push laws through the system so their rich buddies can save a dime. "But it's for your own good", they say. What about the children?

[–] silverneedle@lemmy.ca 2 points 1 day ago* (last edited 1 day ago)

lol, lmao even

Either they think they're evil 1337 h4xx0r overlords that are gonna enslave the planet, or they genuinely think their statistical apparati do anything worthwhile beyond making statistics on what is by now 70% output from other statistical machines.

Plus, just wait until the AI bros find out about Zip compression being more efficient at classifying than "AIs".

[–] Lembot_0006@programming.dev 2 points 1 day ago

The marketing department removed some meaningless word from the marketing bla-bla-bla brochure nobody was even supposed to read.

WE ALL ARE GOING TO DIE!!!