https://news.abnasia.org/blog/posts/en-microsoft-scales-back-ai-goals-because-almost-nobody-is-using-copilot-2732
This headline nailed it! Turns out, Microsoft just learned the hardest lesson in AI - distribution doesn’t beat usefulness 😳
Microsoft’s AI Copilot was supposed to be everywhere.
In Windows. In Office. In your workflow.
Turns out it’s mostly ignored.
Recent reports say Microsoft quietly cut internal Copilot sales targets by up to 50%.
Not because of vibes. Because of math.
→ Copilot: ~14% market share
→ ChatGPT: ~61%
→ Gemini: sprinting into 2nd place
And this is with Microsoft’s insane advantage:
Windows + Office + Azure + OpenAI access 🤯
If that stack can’t force adoption, maybe the problem isn’t distribution. It’s value.
Enterprises tried Copilot. Piloted it. Demoed it. Bought licenses.
Then, employees opened ChatGPT in another tab.
Because most of today’s “AI agents” are confident interns with no context.
So when Microsoft says "70% of Fortune 500 have adopted Copilot", what it really means is this:
Procurement bought it. Employees didn’t.
Most importantly, forcing AI into everything didn’t help.
People didn’t ask for:
→ AI in Paint
→ AI watching their documents
→ AI narrating PowerPoint like a hostage video
They asked for one thing: AI that actually saves time, or does something humans couldn’t do before.
Right now, Copilot does neither.
Some extra links:
https://www.youtube.com/watch?v=QF4VccxdNEg
Test Confirms Copilot Can’t Do What Microsoft’s Ad Shows - https://propakistani.pk/2025/12/20/test-confirms-copilot-cant-do-what-microsofts-ad-shows/
AI search engines fail accuracy test, study finds 60% error rate - https://www.techspot.com/news/107101-new-study-finds-ai-search-tools-60-percent.html
Nah, AI is super useful for coding. There were some early detractors, but pretty much no one avoids using it for coding now if they want a significant speed up.
No matter how clean your code is, there will be repetitive patterns that you won't want to abstract further, and AI at bare minimum speeds that up immensely.
You add things like having faster recall than manually searching for solutions a lot of the time, and AI powered conversions, and there is a very clear value proposition.
There are a lot of problems with AI implementations as they currently stand, but let's not let anger be the death of nuance.
speak for fucking yourself!
I mean, this is common sentiment.
There are a few holdouts, but very few people who code, especially on things that don't have extremely precise requirements, are avoiding this.
Now that people have linked a study that says the exact opposite, are you going to modify your position?
People who can't come up with their own arguments, so they just attempt to dogpile onto an already bad one, when the very study linked literally states it should not be used to make the argument you're making, are some of the most frustrating. Have your own thoughts, ffs. We're on "fuck_ai" and you can't even be arsed to do that?
So that would be a "no" then.
Pretty much par for the course for an AI bro.
Calling everyone who doesn't have the same hysteria-based, knee-jerk reaction as you an AI bro is just immature, counterproductive nonsense.
The fact that you cite a study which explicitly says not to use it for the thing you're using it for, but then claim it's I who has the problem, is almost comically ignorant.
It's buggy and not as fast as manual coding
I mean, you can state that, but most disagree. We're very much in a Lemmy bubble here.
Manual coding is buggy too. If your non-AI-assisted code was buggy, so will your AI-assisted code be. I think the idea that it's inherently a bug exponentializer sounds more like cope than grounded reality.
More than that, code-focused LLMs can be much more efficient given their targeted focus, and, if someone desires, can be based on permissively licensed code.
Wasn't there a recent METR study that found 20% decreased productivity with AI coding tools? Oddly enough, the people using the tools thought they were 20% faster.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
From my own experience, they can be useful until they aren't… and if you don't know what you're doing, they can output convincing but flawed or downright dangerous code or suggestions. I'm not sure if it saves me time or not. I'm not doing front-end web development anymore, so maybe the stuff I'm working on now is too obscure for the current tools?
The "not as fast" thing is confirmed by a study, which the other reply to your comment links to:
Also, vibe coding is unsuitable for junior devs, because junior devs don't have the skill level needed to debug AI code.
The study you linked that specifically says it cannot be used to confirm exactly what you are saying it confirms?