
The evolution of OpenAI’s mission statement.

OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker no longer seems to place the same emphasis on doing so “safely.”

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.

OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.

As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift that has largely gone unreported outside of highly specialized outlets.

And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.

[–] runsmooth@kopitalk.net 3 points 1 day ago* (last edited 1 day ago)

OpenAI is the same as any other publicly traded corporation: it serves society, but that service primarily benefits the shareholders. We're looking at a vehicle designed to take in money and hand it to shareholders (private ones in this case, or otherwise).

The focus on data-centre growth at public expense, AI slop, the circular nature of some of the investments going into AI, and the productivity gains (or lack thereof) are all part of it. We are not looking at any exceptionalism here: AI isn't unique in its capacity for catastrophic harm. What we eat and drink can easily be on that list too.

AI and these American companies just want the money train to continue unabated and any regulation to go away.