Oh no! How did this happen? ...I mean, how exactly did this happen? Is there a tutorial on how other engineers at other companies can replicate this?
Technology
This is a most excellent place for technology news and articles.
That's what happens when you are renting your very skills from a company. You'll hone nothing and you'll be happy.
Good twist on that one.
This is the nightmare scenario for any team that built their whole workflow around a cloud API. No warning, no clear reason, no real support path. Just a Google form and 60 people sitting on their hands.
The uncomfortable truth is that "terms of service" at this scale is just "we can pull the rug whenever." Anthropic isn't unique here either. OpenAI, Google, all of them have the same opaque enforcement problem. It's a big part of why I've been building tools that run on local inference by default. Not because cloud is bad, but because your users shouldn't be one vague policy complaint away from a complete outage.
Local gives you continuity even when the upstream disappears.
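The "local by default, cloud as a bonus" idea boils down to provider failover. A minimal sketch (the provider callables are injected stand-ins so the routing logic runs without any network; `cloud_llm`/`local_llm` and their behavior are illustrative, not any real client library):

```python
# Sketch of provider failover: try providers in order, fall through on error.
# The "cloud"/"local" stand-ins below are hypothetical, not real API clients.

def complete(prompt, providers):
    """Try each (name, call) pair in order; return (name, reply)
    from the first provider that succeeds."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real client would catch narrower errors
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")

def cloud_llm(prompt):
    # Stand-in for a hosted API that has just locked you out.
    raise ConnectionError("403: usage policy violation")

def local_llm(prompt):
    # Stand-in for a local endpoint (e.g. llama.cpp or Ollama on localhost).
    return f"local answer to: {prompt}"

name, reply = complete("summarize this ticket",
                       [("cloud", cloud_llm), ("local", local_llm)])
print(name)  # the local fallback answered
```

The point isn't the ten lines of code; it's that the fallback path has to exist and be exercised before the day the upstream disappears.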
https://bannedbyanthropic.com/
I believe the word is capricious. Everything cloud based is at the whim of someone else.
There are ways to mitigate against that, but ultimately if it's not yours...it's not yours.
Just continue coding using the natural neural networks in the brains of those 60 employees until the problem has been resolved and/or another AI provider selected. It's not like Claude invented coding. Sure, it's a pretty useful tool. But it is possible to research obscure APIs and develop software manually.
Shut up. Nobody wants that. Work psh
Isn't it hilarious how capitalists are trying to force us all into literal 'nobody wants to work anymore' territory and we're not even on board?
Because they want the "efficiency" of firing shitloads of people, but not the onus of actually paying their taxes so those people can have UBI.
Many commenters were quick to point out that he should never have coupled his company so closely with Claude to begin with, a reasonable critique by itself. However, it's worth noting that the story could have easily been the same if it had instead been Amazon Web Services, Azure, or an authentication provider like Okta.
You are so close, you almost got it!
You're going to see a lot more of this and other forms of fuckery as the VC money dries up.
https://www.wheresyoured.at/four-horsemen-of-the-aipocalypse/
Yup. When the purveyors finally have to charge what it costs, these fanboys will flee quickly.
This is true for any company using 3rd-party services. I worked for one that used a 3rd-party messaging service to send out MFA texts to users. The messaging service was hacked and went offline, so we couldn't send any MFA codes... and of course, there was no plan B.
In business, always have a backup
Or... taps mic... don't fucking rely on AI for your business! Play stupid games, win stupid prizes.
We're in a period where the tools, agentic systems in this case, are gated by large companies.
This is like if IBM or Cray in the 60s through 90s only allowed rental of mainframes that they owned, and they can cut you off.
That wasn't the case then. But remember the father whose entire Google account was shut down because a pediatrician asked for a photo of his kid's rash to decide whether he needed the ER or just a cream: he lost his phone (Google Fi), his email (Gmail), and all his paperwork backups (Drive) in one stroke. When you don't own the infrastructure, you live at the whims of decisions you can't even appeal.
This is a story about people and companies putting their entire business workflows in the hands of big tech who really don't care about anyone.
So, AI drama aside, the moment your life or business is fully dependent on an unreliable partner, this is what happens.
This has nothing to do with AI.
Don't rely on software or workflows or really anything that you can't easily switch if said company decides to stop doing business with you.
If you do, it better be a strategic partnership where something like this can't happen.
In this case, their workflows should have been AI provider agnostic or had a way to continue functioning if Claude went down.
This definitely has to do with AI. Because CEOs are losing their stupid minds over it. I agree with you in principle, but let's not lose sight of the fact that this specific technology is what CEOs are drooling over. Even in my company I had to tell the owner/CEO, "What problem are you trying to solve with AI?" His response was his mouth being open with a dumb look on his face.
So no business should rely on AI (or, to your point, any software) so heavily that losing access would be detrimental to their business or workforce.
Yes, this has everything to do with AI, because this is an AI vendor locking out a customer from their ordinary workflow.
At the same time, this is a generalizable example not limited to AI, where any form of vendor lock-in on a critical business function becomes a potential point of failure when the vendor drops the customer or stops working. It's true of a cloud provider, an email provider, an ISP, any software provider that can revoke access/authority, or even non-tech vendors like a landlord or a temp agency or an electric utility.
60 employees who can’t be productive without AI?
And this is progress?
Your point is well-taken, but this is also exactly why AI reliance is dangerous. Anyone who sees this should realize the precarity of relying on products that can just be locked away from you.
Windows 11, Onedrive, Intel Management Engine, Google accounts, ...
France's government is actively leaving Windows for Linux as you read this. I'm about to follow suit, too.
Like Gmail? Google drive? Slack?
I'm not defending AI, but I can come up with >10 products that would absolutely cripple the company I work at if the provider suddenly says "Soz, terms of service violation".
Vendor reliance is dangerous. That doesn't just apply to AI. If the company in OP's message had both Claude and Gemini they'd have been okay, so the problem isn't with AI specifically - the problem is reliance on services that are critical to workflows, and providers being able to change their mind at a moment's notice.
In any case, leaving aside where the problem is, the idea that 60 employees can't use Natural Intelligence to do their jobs means there's something really wrong with that company...
kind of a difference between infrastructure for daily operations and AI though? well, there should be....
My company is pivoting hard to Claude for everything, and besides the fact that it's irritating as fuck to use, it has me worried about shenanigans like in this article. For almost 50 years, they've had a "no reliance upon 3rd-party platforms for core functions" policy, but since they hired an AI apologist to the C-suite, all that has gone out the window in a matter of months.
Got me thinking I should warm up my resume...
Don’t wait, start now. The job market is a nightmare and finding one that isn’t being consumed by incompetent C-level AI FOMO is getting harder every day. I work on life-saving medical equipment and AI is being pushed on us for things that could literally kill people if not done correctly. Why would anyone spend 30 minutes using AI and risking people’s lives when I can just write it myself in 5 or 10? Madness. Complete, society-scale madness. The people pushing AI have no fucking idea what they are doing or how engineering works. People are going to die.
This makes me so happy about my employer. I'm sysadmin for a newspaper.
We had an all-company test run 2 weeks ago to answer the question "What if we're hacked?"
Turns out we're able to produce a printed and online newspaper within a work day if NONE of our normal IT systems (hardware, software, e-mail, network) are accessible.
Everything we need has a redundancy that's kept completely physically separated from the network until the day it's needed.
A business that has a workable disaster plan? Well done.
Ironically, this is a great case study to illustrate the value of Chinese models. They've released a number that are on par with Claude's latest models under "open weight" licenses that would allow you to run them yourselves if you wanted to, or to hire some other third party to provide API access. It wouldn't matter what the original company's "usage policy" is in that case.
There are a couple of Western open models that aren't bad either, but they tend to be aimed at a smaller and simpler use case than Claude.
What models exactly? And what kind of hardware do you need to run them? Also, are there any GitHub repos that replicate Claude projects?
The one currently making the headlines is Kimi K2.6, on the benchmarks it's just short of Opus 4.7. It's a trillion-parameter model so it won't run on desktop computers, but it's something a company could run on reasonably buildable servers for their own use.
For local use, I've been finding Qwen3.6's 35B parameter model to be uncannily good. Gemma4 is also good, that's one of the Western ones. These models won't do the sort of heavy lifting that Opus can do but you don't need that heavy lifting for all tasks.
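On the hardware question: weight memory is roughly parameter count times bits per weight. A back-of-the-envelope sketch (it ignores KV cache and runtime overhead, so treat the numbers as a floor, not a vendor spec):

```python
# Rough VRAM floor for running a dense model locally.
# Assumption: weights dominate memory; KV cache and overhead are ignored.

def weight_gb(params_b, bits_per_weight):
    """Approximate weight memory in GB for params_b billion parameters."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

# A 35B-parameter model at 4-bit quantization:
print(weight_gb(35, 4))   # 17.5 GB of weights -> a 24 GB GPU can host it
# The same model at fp16:
print(weight_gb(35, 16))  # 70.0 GB -> out of reach for most desktops
```

That's why quantized builds are what make the 30B-class models practical on a single consumer GPU, while trillion-parameter models stay firmly in server territory.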
They are not as capable as Opus, and that sadly matters.
Kimi K2.6 is close to Opus. It beats Opus 4.6 on the benchmarks, so if Opus 4.6 was sufficient for your needs then Kimi K2.6 should be on par.
If you literally can't access Opus because Anthropic cut you off I suspect that matters more than a slight difference in benchmarks.
Just another form of vendor lock-in. If your business model is mostly/entirely dependent on an external party, that should be a well understood risk.
The only people winning are selling shovels
Dude, it's 2026. We don't sell shovels, we sell shovel subscriptions.
I am responsible for gathering information on AI to determine whether we should use it for our next project. The ask was to use it for a critical process task. Immediately in my head I was like "no, we are not using AI at all", but I obviously need quantifiable data. This is just another thing to add to my list of why using AI for core processes is one of the stupidest things you could ever do.
Oh my God, my Eliza 2.0 chatbot is blocked. I'm experiencing withdrawals already, my productivity is down 76.8%.
Either they didn't pay, they found an exploit, or, more likely, someone at Claude was reviewing their conversations. Take note, any business that cares about IP or confidentiality.
I'll bring two theories to the table.
a) they got caught distilling for their own models
b) they re-sold their $200/mo plans as APIs
60 employees were dead in the water, as reportedly their daily workflows rely on the AI assistant.
Is that a joke? 60 employees do not know how to do their job? This is not Anthropic's problem.
I throw any bullshit task into AI. I'm to produce a monthly report on my strategic wins and goals for the next month. I throw it in AI, don't read it, paste it in the Google doc, send it to the PM who sends it to my boss who also doesn't read it (or uses AI to read it).
Now I know how to write it but writing this report would take me a day or two if I carefully did it, or 3 mins with AI.
That's one way to save costs.