if you ever wonder how I write Pivot, it's a bit like this. The thing below is not a written text, it's a script for me to simulate spontaneity, so don't worry about the grammar or wording. But how are the ideas? And what have I missed?
(Imagine the text below with links to previous Pivots where I said a lotta this stuff.)
When some huge and stupid public AI disaster hits the news, AI pumpers will dive in to say stuff like “you have to admit, AI is here to stay.”
Well, no I don’t. Not unless you say just what you actually mean, when you say that. Like, what is the claim you’re making? Herpes is here to stay too, but you probably wouldn’t brag about it.
We’re talking about the generative AI stuff here. Chatbots. Image slop generators. That sorta thing.
What they’re really saying is: give in. AI as it is right now in the bubble is a permanent force that will reshape society in its image, so we have to give in now and do what the AI pumpers say. You know that’s what they really mean.
We get stuff like this egregious example from the Washington State school system. It starts with “AI is here to stay”, then there’s a list of AI stuff to force on the kids, all assuming this works forever just like the biggest hype in the bubble. And that’s not true! [OSPI SLIDE]
If you ask why AI’s here to stay, they'll just recite promotional talking points. So ask them some really pointy questions about details.
Remember that a lot of people are super convinced by one really impressive demo. We have computers you can just talk to naturally now and have a conversation! That's legit amazing, actually! The whole field of natural language processing is 80% solved! The other 20% is where it's a lying idiot and probably can’t be fixed? That’s a bit of a problem in practice. Generative AI is all like that, it’s impressive demos with unfixable problems.
Sometimes they’ll claim chatbots are forever because machine learning works for X-ray scans. If they say that, they don’t know enough to make a coherent claim, and you’re wasting your time.
Grifters will try to use gotchas. Photoshop has AI in it, so you should let me post image slop! Office 365 has AI in it, so if you use Word you might as well be using AI! Spell check’s a kind of AI! These are all real examples. These guys are lying weasels and the correct answer is “go away”. At the least.
Are they saying the technology will surely get better because all technology improves? [WAVE HANDS] Will the hallucinating stop? Then they need evidence for that, cos it sure looks like the tech of generative AI is stuck at the top of its S-curve at 80% useful and hasn’t made any major breakthroughs in a couple of years. [o1 GRAPH] It’s an impressive demo, but the guy saying this will have to bring actual evidence it’s gonna make it to reliable product status. And we have no reason to think so.
Are they saying that OpenAI and its friends, all setting money on fire, will be around forever? Ha, no. That is not economically possible. Look through Ed Zitron’s numbers if you need a bludgeon. [ED Z SLIDE] [Ed Zitron]
These AI companies are machines for taking money from venture capitalists and setting it on fire. The chatbots are just the excuse to do that. The companies just are not sustainable businesses. Maybe after the collapse there’ll be a company that buys the name “OpenAI” and dances around wearing it like a skin.
Are they saying there’s a market for generative AI and so it’ll keep going when the bubble pops? Sure, maybe there’ll be a market - but I’ve been saying for a while now: the prices will be 5x or 10x what they are now if it has to pay its way as a business.
Are they saying you can always run a local model at home? Sure, and about 0.0% of chatbot users do that. In 2025, the home models are painfully slow even on a high-end box. No normal people are going to do this.
I’ve seen claims that the tools will still exist. I mean sure, the transformer architecture is actually useful for stuff. But mere existence isn’t much of a claim either.
Ya know, technologies linger forever. Crypto is still around serving the all-important “crime is legal” market, but it’s radioactive for normal people. If you search for “AI is here to stay” on Twitter, you’ll see the guys who still have Bored Ape NFT avatars. Generative AI has a good chance of becoming as radioactive to the general public as crypto is. They’ll have to start calling the stuff that works “machine learning” again.
So. If someone says “AI is here to stay,” nail them down on precisely what claim they’re making. Details. Numbers. What do you mean by being here? What would failure mean? Get them to make their claim properly.
I mean, they won’t answer. They never answer. They never have a claim they can back up. They were just saying promotional mouth noises.
Now, I’ll make a prediction for you, to give you an example. When, not if, the venture capitalists and their money pipeline go home and the chatbot prices multiply by ten, the market will collapse. There will be some small providers left. It will be technically not dead yet!! But the bubble will be extremely over. The number of people running an LLM at home will still be negligible.
It’s possible there will be something left after the bubble pops. AI boosters like saying it’s JUST LIKE the dot-com bubble!!! But I haven’t really been convinced by the argument “Amazon lost money for years, so if OpenAI just sets money on fire then it must be the next Amazon.”
Will inference costs — 80%-90% of compute load — come down? Sure, they’ll come down eventually. Will it be soon enough? Well, Nvidia’s Blackwell hasn’t been a good chip generation so they’re putting out more of their old generation chips while they try to get Blackwell production up. So it won’t be very soon.
So there you go. If you wanna say “but AI is here to stay!” tell us what you mean in detail. Stick your neck out. Give your reasons.
I'm gonna do the exact opposite of this ending quote and say AI will be gone forever after this bubble (a prediction I've hammered multiple times before):
First, the AI bubble has given plenty of credence to the notion that building a humanlike AI system (let alone superintelligence) is completely impossible, something I've talked about in a previous MoreWrite. To focus on a specific wrinkle, the bubble has shown the power of imagination/creativity to be the exclusive domain of human/animal minds, with AI systems capable of producing only low-quality, uniquely AI-like garbage (commonly known as AI slop, or just slop for short).
Second, the bubble's widespread and varied harms have completely undermined any notion of "artificial intelligence" being value-neutral as a concept. The large-scale art theft/plagiarism committed to create the models behind this bubble (Perplexity, ChatGPT, CrAIyon, Suno/Udio, etcetera), the large-scale harms enabled by those models (plagiarism/copyright infringement, worker layoffs/exploitation, enshittification), and the heavy use of LLMs for explicitly fascist ends (which I've noted in a previous MoreWrite) have all lent plenty of credence to the notion of AI as a concept being inherently unethical, and given plenty of reason to treat use or support of AI as an ethical failing.
Ooh good, I'll add that generative AI will likely become as radioactive to the public as crypto is.