MoreWrite
post bits of your writing and links to stuff you’ve written here for constructive criticism.
if you post anything here try to specify what kind of feedback you would like. For example, are you looking for a critique of your assertions, creative feedback, or an unbiased editorial review?
if OP specifies what kind of feedback they'd like, please respect it. If they don't specify, don't take it as an invite to debate the semantics of what they are writing about. Honest feedback isn’t required to be nice, but don’t be an asshole.
I wrote this and I want to know what else someone would want to know, if I'm dead wrong on any of it, etc.
the channel: https://www.youtube.com/@pivottoai
How I do the Pivot to AI YouTube videos
Cat: site news
Img: a title card, or me looking horrified at something
The Pivot to AI video series is getting views and subscribers, which is nice!
It’s a couple of hours to write a post and expand it into a script, fifteen minutes to record if nothing breaks or falls over, and another hour or two to clean up the audio and assemble the video, including screenshots of sources.
The philosophy is: your content is what matters, everything else is a bonus. Put in effort, not money. We’re making punk rock here. I did fanzines in the ’80s and books in the 2010s on the same principles.
Fortunately, cheap consumer electronics is good enough in 2025. Here’s how I do videos on a budget of zero.
-- read more --
Camera: I use my phone, which is the best camera in the house. It does 1080p on the front-facing camera. This records H.264 video and 96 kbps AAC sound, which is fine for voice.
I try to do everything in a single take. Fancy is my enemy. “We’ll fix it in post” is a film-maker phrase meaning “well, that was a cock-up.” Everything you fix in post takes ten to twenty times longer than getting it right the first time.
Camera mount: anything that will mount a phone stably. The loved one and kid got me a ring light for Christmas and told me to rant on TikTok when something annoyed me. Unfortunately, ring lights are incompatible with wearing glasses - you get virtual googly eyes projected perfectly onto your lenses - but the phone holder bit still works well mounted on the desk. I also have an older phone mount for a camera tripod.
Microphone: Your video can be iffy, but your sound has to be good.
I use my Jabra Evolve 40 headset, which was designed for work Zoom calls. This is not great, but it’ll do. I turn the bass up in post-production and it sounds better.
I really want a Røde - the loved one has a Røde M3, which is all the mic you need for a remarkable range of use cases, for around £80. The M3 really needs phantom power - if you use a battery, you will forget it’s switched on and it’ll go flat - which is another £20 box. But Røde make an enormous variety of good podcasting mics of professional quality and you’re unlikely to go wrong. A Røde is the next piece of kit I’ll spend money on.
Lighting: A 9W daylight bulb above, a 9W daylight bulb to my left, and a 9W yellow bulb (“warm white”) to my right are doing the job so far. The lights to my left and right are in clip-on gooseneck mounts attached to the shelves.
Teleprompter: Elegant Teleprompter on Android. This puts a floating window over your camera app. It’s free with unobtrusive ads and the developer is very in-touch with his user base and likes to fix problems.
Sound: I edit the sound in Audacity, which is free, open source, and just works. If you know what you want to do, you can probably do it. I normalise the very quiet phone audio, do a noise reduction, pump up the bass 4.5dB, then go through the audio de-umming and removing breath noise, the latter being the main reason I want a microphone that isn’t up my nose. Then compression, then it’s ready.
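If you'd rather script the cleanup than click through Audacity, most of that chain has rough ffmpeg equivalents. This is a sketch, not my actual settings - the filter names are real ffmpeg filters, but the values here are ballpark guesses, and the de-umming and breath removal still have to be done by hand:

```shell
# Rough ffmpeg stand-in for the Audacity chain above:
#   loudnorm    - normalise the very quiet phone audio
#   afftdn      - broadband noise reduction
#   bass=g=4.5  - pump the bass up 4.5dB
#   acompressor - final compression
CHAIN="loudnorm,afftdn,bass=g=4.5,acompressor"

# Only run if ffmpeg and the raw audio are actually present:
if command -v ffmpeg >/dev/null 2>&1 && [ -f raw_phone_audio.wav ]; then
  ffmpeg -i raw_phone_audio.wav -af "$CHAIN" cleaned.wav
fi
```

The order matters: normalise before noise reduction so the denoiser has a consistent level to work with, and compress last.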
Screen shots: I take these in Firefox and edit them in GIMP. I also make the title cards in GIMP. Just make things 1280×720.
Video editing: I use OpenShot, which is a bit open-source, but it basically works and it’s free. I get the raw video, the cleaned audio, the various still images, and the theme music, and assemble the final video. Export at 720p as “MP4 (H.264 va).” I could go to 1080p, but this is a talking head show and you don’t need my nose hairs that sharp.
ffmpeg: this is the Swiss army knife of video and about as fiddly to use. I’m very into tweaking things in ffmpeg.
OpenShot had trouble with the Nature video, which kept freezing in rendering the 7-minute raw file late in the video. I ran the raw video through ffmpeg to add a key frame every ten frames, which is probably overkill, but it worked, so I’ve kept doing it. I do not recommend you do this unless you specifically have this problem. But, to generate an oversized video with no sound (because the cleaned-up audio track is separate) and a ton of extra key frames:
ffmpeg -i input.mp4 -vcodec libx264 -x264-params keyint=10:scenecut=0 -crf 14 -an output.mp4
I can also do things like stretch a clip to 1.2× length to make speech clearer:
ffmpeg -i fatima.mp4 -vf "setpts=1.2*PTS" -af "atempo=0.8333" fatima2.mp4
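The two numbers have to stay reciprocal - setpts scales the video timestamps by the factor, and atempo has to be 1/factor (1/1.2 = 0.8333) or the sound drifts out of sync with the picture. If you do this a lot, a tiny wrapper saves working out the reciprocal each time (the `stretch` name is mine, nothing official):

```shell
# Slow a clip to FACTOR times its length, keeping audio pitch:
# video timestamps get setpts=FACTOR*PTS, audio gets atempo=1/FACTOR.
# Note: ffmpeg's atempo filter only accepts values from 0.5 to 100,
# so this works for stretch factors up to 2x.
stretch() {
  factor="$1"; infile="$2"; outfile="$3"
  tempo=$(awk "BEGIN { printf \"%.4f\", 1 / $factor }")
  ffmpeg -i "$infile" -vf "setpts=${factor}*PTS" -af "atempo=${tempo}" "$outfile"
}

# stretch 1.2 fatima.mp4 fatima2.mp4   # equivalent to the command above
```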
When you finally give in to the urge for fancy with equipment, pro podcaster kit is super cheap on Alibaba and hence Amazon. Go through surplus listings and see if you’re lucky.
It's been a good minute since I took a shot at predicting how the AI bubble (and its burst) is gonna play out. This is less a dedicated post and more a compilation of the bajillion thoughts I've had about this.
- Throwing the Humanities a Bone
Right off the bat, I suspect the arts/humanities will gain a degree of begrudging respect as tech's public image takes a nosedive. My primary reason is the slop-nami and its widespread impact on the Internet at large.
On one end, you have an utter tidal wave of AI-generated slop flooding basically every corner of daily life, giving us a nonstop torrent of misinformation (political or otherwise) and inhumanly shit "artwork" which clogs the Internet with garbage and drowns out human voices wherever it can, acting as an omnipresent annoyance at best and a direct threat to people's livelihoods at worst.
On the other end, you have the AI bros responsible for/complicit in this slop-nami, who uncritically praise whatever garbage gen-AI creates, relentlessly hype up its "creative" qualities and relentlessly doomsay about the incoming Artificial General Intelligence that will wipe out humanity/solve all of humanity's problems/God-Knows-Fucking-What.
Combined, these two aspects of the bubble paint a picture of tech as an utterly insipid and artless field, full of soulless dilettantes who are incapable of making or understanding art at best and actively hostile to it at worst.
If that new image of tech takes root in the public consciousness, it's gonna make the arts/humanities look a fair bit better by comparison. Sure, the fields are still gonna have to deal with the stigma of being a "useless degree", but it's better than your degree being taken as an indictment of you as a person.
Confidence: Low. This is pure gut instinct (and probably hope) talking from watching the slop-nami first-hand - whilst it could force people to appreciate the arts a bit more, nothing's stopping the slop-nami from devaluing the arts even fucking more.
- Smells Like Fash Spirit
On another front entirely, I expect this bubble will leave the tech industry with a reputation as a Nazi bar - a reputation that will linger even after Trump and his cronies are thrown out of office.
Silicon Valley's sucking up to Trump and DOGE's AI-Powered^tm^ annihilation of the state are doing wonders on this particular front, but fascists' fascination with slop, and TESCREAL being fash to its core, are likely helping build this image as well.
Referencing an earlier comment of mine: considering the TESCREAL movement partially powering this bubble is fascist as hell, AI as a concept will likely get hit with the stench of fash as well.
Confidence: High. Unlike Prediction 1, I've got some solid evidence to suggest tech's picking up a reputation for fascism. Mainly Silicon Valley's jackbooted goose-stepping to Trump's tune.
In the spirit of our earlier "happy computer memories" thread, I'll open one for happy book memories. What's a book you read that occupies a warm-and-fuzzy spot in your memory? What book calls you back to the first time you read it, the way the smell of a bakery brings back a conversation with a friend?
As a child, I was into mystery stories and Ancient Egypt both (not to mention dinosaurs and deep-sea animals and...). So, for a gift one year I got an omnibus set of the first three Amelia Peabody novels. Then I read the rest of the series, and then new ones kept coming out. I was off at science camp one summer when He Shall Thunder in the Sky hit the bookstores. I don't think I knew of it in advance, but I snapped it up and read it in one long summer afternoon with a bottle of soda and a bag of cookies.
On Toxic Productivity
Ever since the beginning of the AI slopnami, but more specifically since the public discourse about the technology began and a large number of people - or dare I say the majority? - started to hate the plagiarism machine that Sam Altman and his friends unleashed upon the world, I have wondered: who exactly are the people that so vehemently defend this technology?
Yes, on the one hand you have big corporations. That's the obvious one. Of course OpenAI and Nvidia are happy with how things are going, because they either make money from it or hope they can milk the venture capitalists even further before they finally exit-scam ahead of the bubble's burst. My question, however, is: who are the normal people who fanboy over the latest iteration of ChatGPT or Midjourney or whatever iteration of spicy auto-completion tickles their fancy at the moment? Who, in a time when the public opinion on not just Artificial "Intelligence" but the tech sector as a whole is at an all-time low, with artists and creatives hating it with every fiber of their beings, decides to die on the hill that endless repetitive plagiarized slop is the future that's not just inevitable but desirable?
You will hear again and again that the AI crowd is just a reboot of the crypto bros from a few years ago - those people who spent unreasonable amounts of money on links to bad monkey JPEGs hosted on the Ethereum blockchain^1 - and that is probably true. But if we draw the Venn diagram here, there is a third crowd that has a surprisingly large overlap with the other two - maybe more so with AI than with crypto, but ultimately with both - and that is the infamous sphere of productivity addicts.
Now, there is a story to tell about the role of productivity in the age of hustle culture and whether it's even something we should view in a positive light in the first place, but that's not what I am trying to do here. I think everyone must decide for themselves whether the concept of productivity as it is presented by people on YouTube and the internet in general is something they like, dislike, or outright hate, and I believe that a lot of creators have good intentions when they make videos about their study techniques and note-taking approaches. I use and have used tools like Roam Research^2 myself, and when I was a student, I had my own ways of organizing notes and finding them again. Even for casual writing I use some of these tools, for example to create a personal wiki of characters for stories, and so on. And that's fine.
No, the productivity sphere on YouTube in particular, but probably also on other social media platforms I am less familiar with, has a dark side, one which I like to call Toxic Productivity. The difference between the toxic productivity people and the normal creators is that the toxic crowd takes it to such extremes that not only does everything in their lives have to be maximally productive, but they also - and that's where the toxic part comes in - look down on people who are less productive than them (however they decide to measure that).
Productivity YouTube isn't new. The trend has been ongoing for over a decade at this point and has evolved from students giving study tips to people making full-blown businesses out of it, and the latter is where the problem lies.
A few of you probably saw some parallels with another group of people: podcast bros. In no medium is toxic productivity as prominent as in the podcast sphere, I'd say. Podcasts about getting rich quick, opening a successful business, or creating your own successful brand are a dime a dozen. The parallel between these people and AI grifters isn't lost on anyone, with a TSMC executive calling Altman a "podcasting bro"^3 when he came in begging for $7 trillion [sic] to further finance his ocean-boiling money sink.
If I asked you to name one creator who personifies what I have described as toxic productivity up until this point, I am sure I would hear many different names. For me, however, the poster child of toxic productivity is Ali Abdaal^4.
Productivity on 3.5x Speed
Depending on how terminally online you are on YouTube, you might never have heard that name, and I would not blame you. In a world of people like Andrew Tate, who arguably caters to the same audience, namely those striving towards self-improvement, who want to become rich and successful, and who are gullible enough to dump money on everyone who tells them they can solve their problems with a snap of their fingers, Abdaal isn't a big fish at first glance. All things considered, Tate is more toxic than most of the others in the sphere combined, and probably more dangerous too, but he is also more obviously a scam. Tate's audience is very clearly not the average college student but lonely young men who hate women and the world, and I don't want to get into that here. Abdaal, on the other hand, can be considered the polar opposite of that. His book, aptly named "Feel Good Productivity", makes that clear. He's not here to sell you a toxic worldview like Tate. He doesn't want to make you hate women and society. No, he's a nice person and friendly and inclusive.
But let's back up for a moment: who exactly is Ali Abdaal?
On his website, he writes:
Hey, I'm Ali Abdaal. I'm an ex-doctor turned YouTuber, Podcaster, entrepreneur, and author (and I dabble with the occasional investment too).
Abdaal started out as one of the aforementioned college YouTubers who shared study tips on the platform. He was an aspiring doctor attending Cambridge University, teaching things like spaced repetition^5, a largely uncontroversial learning technique, in his videos. And if he had stayed with that type of content, I would not even be mentioning him in this article, but as is evident from his introductory sentence on the website, that's not how it went. He's an "ex-doctor turned YouTuber", and on top of that, an "entrepreneur" (I will conveniently ignore the part about "occasional investment" here, but we'll get back to that).
The career of Abdaal is a great example of the pipeline from harmless productivity tips into the realm of toxic productivity, because as he steered from study tips towards helping you maximize your productivity in every waking moment of your life, his videos became different too.
One noteworthy thing is that Abdaal was one of the first to do this, so he didn't simply grow up inside an existing cult of toxic productivity. Whether or not he's directly responsible for it, or at least largely influential, isn't something I can answer here, but it's at least something to keep in mind.
The most infamous (and since removed) example of this is "How To Watch TV Productively". You might have furrowed your brow at that title, and rightfully so. I think even Abdaal himself must have noticed that he went a little too far with that one, since he took it down or set it to private a while later, but there is still a Reddit thread discussing it^6 online, as well as a video from creator Fr0nzP.^7 which makes many of the same points I am making in this text and includes clips of it.
Watching anime and watching TV in general feels to me like kind of a waste of time. And because I worship the ultra-productivity and the only thing I care about is productivity, everything in my life has to be productive, like, you know, listening to audio books at 3.5x speed, [...]
This provides a perfect example of Abdaal's mindset. He's not concerned about studying anymore, or about helping you study; he's gone down the path of "ultra-productivity". Before I continue with the TV video, let me show you another one of his: "How I Type REALLY Fast (156 Words per Minute)"^8 - this one is still up and you can enjoy it for yourself. He opens the video by claiming that "having a ridiculously fast typing speed is one of [his] superpowers in life" (0:10) and that "anyone can become at least twice as productive if [they] just increase[d] [their] typing speed" (0:18).
Now, unless you're working as a court reporter (in which case you are probably using stenography anyway) or writing stream-of-consciousness, I argue that this statement is false, because typing faster than you think is probably not the productivity boost that Abdaal thinks it is, but even if you accept his words as true, it demonstrates again his attitude towards life. He even states that things like "going on websites" and "sending messages to friends, [and] all of that stuff becomes quicker therefore you'll become more productive" (1:45).
I think you can see a pattern here. Abdaal believes that cramming more things into every day is the key aspect of productivity. It's probably already questionable whether that's true for studying (because writing more notes in less time doesn't mean you understand the concepts, so shouldn't you study smarter, not faster?), but applying that same approach to your hobbies is just completely insane - which brings us back to the TV video.
So, how does Abdaal watch anime and TV productively, you might ask? Well, the fact that he listens to audio books at 3.5x speed should give you an idea.
[...] normally what I do is, I'll just speed-speed-speed-speed-speed-speed-speed up until it gets to an interesting point, and I'll speed it as fast as I can so I can still keep up with it.
And because he obviously can't hear what's being said when watching at 3.5x speed anymore, he's speed-reading subtitles.
I can't be the only one wondering whether he gets any enjoyment out of consuming media this way, can I? Especially because he applies this advice not to lectures or tutorial material or other videos for which this might work, but to films and series that have been created to be watched as a recreational activity. How productivity-brained must you be to judge media like this, and what does that even mean? What even is an "interesting point" for someone with a view like that? We don't have to guess, because Abdaal tells us himself that the parts he doesn't have to watch at normal speed are "when it gets to [...] building character [...] kind-of stuff".
Yes, you read that right: Abdaal thinks that character building in a work of fiction is the stuff you can speed through because it's not interesting or not important. Which makes me wonder why he's even watching any of that to begin with. Sure, there are people who don't enjoy fiction and who would rather spend their time differently, and that's fine, but this reeks of someone who feels guilty for wanting to watch anime or TV, and who needs to find a way to justify doing so by fitting it into their distorted world view in which everything has to serve a productive purpose. And even more so, it shows that Abdaal does not view recreation or relaxation as a productive activity in the first place. He thinks that if you take time out of your day to do something you enjoy that does not directly lead to some sort of tangible gain, monetary or otherwise, it is not worth doing and you're lazy.
That is the definition of toxic productivity.
Of course, this also completely invalidates the work of people who make the shows he skips through (and probably do so for a living). Fr0nzP. puts it best in his video.
Actually thinking that any artistic decision such as pauses, music, or nuances in facial expressions can be disregarded as long as you pick up the plot via subtitles is utterly stupid. (2:34--2:47)
What is Art? (Baby don't hurt me...)
If that's not enough to show you how Abdaal completely disregards art because it doesn't fit with his worship of "ultra-productivity", Fr0nzP. cites another one of his videos, "How I Read 100 Books a Year - 8 Tips of Reading More"^9, in which he shits on classic literature and, between the lines, dismisses literature students as examples of people he looks down on. He further exemplifies this in his later video "How to 'Read' 1000 Books a Year" (let that title sink in for a moment). In that video, he does admittedly make a good point: it's fine to not finish a book if you don't enjoy it, and you should not feel pressured by society or your peers to read something you don't really want to read. The rest of it, however, is a weird conglomeration of product placement (he namedrops brands left and right, mentioning how they're not sponsoring the video but are super amazing and life-changing) as well as advocacy for speed-reading and skimming - again, something that does not work well with fiction, as some of the comments underneath the video point out.
Also, if you look for a drinking game that will absolutely wreck you: take a shot every time he mentions Amazon or the Kindle in that video.
So, what books does he read and recommend then? Take a wild guess.
No, seriously, before you read on, think about the contents of this article, which is the overlap between the toxic productivity sphere and AI bros, and just try to guess one book he recommends.
Ready?
In a video titled "The Best Book I've Ever Read about Morality"^10 he sings the praises of "What We Owe The Future" by William MacAskill, but the video is basically a twelve-minute mental exercise in jerking off to Effective Altruism, because of course it is. We learn that Abdaal is not only a card-carrying member of EA but also donates 10% of his yearly income to it (or to charities which fit their criteria for being worthwhile).
There's also a shout-out to MacAskill on his Twitter (fittingly after a long series of posts where he stealth-promotes ChatGPT and how it can boost your productivity), complete with a drive-by mention of AI x-risk.
To learn more about the risks of AI and other long-term risks to humanity, check out moral philosopher @willmacaskill's excellent book What We Owe the Future. Or alternatively, check out my brief summary of the book on my YouTube channel^11
In another video, "8 Lessons I Learned From Elon Musk"^12, Abdaal fawns over Elon and how successful he is. One of the lessons in there is, ironically, that "reading is the best thing ever", mentioning how Elon felt able to found SpaceX without being an engineer or having a clue about rocket science because he had read about it. I find it quite condescending, though, to make a claim like that but put an asterisk at the end that means: but only if you read non-fiction, because otherwise you're wasting your time.
Part-Time Hustle Academy
We could leave Ali Abdaal here and focus on someone else, but I promised you above that we would return to his investment tips. Much like with his productivity-related content, he started out harmless and uncontroversial by just giving basic tips about dipping your toes into investment by checking out ETFs and not being afraid of the stock market. But, as with his other videos, his focus shifted and became stranger.
Making money is the second biggest topic on his channel, and yes, that is of course part of toxic productivity as we have established above, because everything needs to have a tangible benefit, and what benefit is more tangible than actually making bank? So, he has videos about generating income streams and making more money than your peers and, of course, Bitcoin. In his defense, he doesn't appear to be a crypto bro at least, and he does list the controversies around Bitcoin in this video and makes some wishy-washy takes about how everyone must decide for themselves whether they want to invest in it.
But what's the end-goal of all that? What if you want to be as productive as Abdaal himself? Well, good news, there's a solution for you and it's called the "Part-Time YouTuber Academy"^14.
We've condensed 7+ years of YouTube experience into programmes designed to help you on your YouTube journey.
We learned lessons the hard way, so you don't have to...
This is basically Abdaal's version of every podcasting bro's "If you want to be successful, you need to become like me!" course, and we've seen plenty of those in the last decade. So how much does this thing cost, you might ask? Well, at the time of writing the fee for Abdaal's class is $995. And if that's too expensive for you and you don't really want to make YouTube videos, he also has offers on platforms like Skillshare, like the "Productivity Masterclass", the "Notion Masterclass", "Triple Your Typing Speed" (here we go again), or "How To Cook Productively" (no, I'm not joking).^15
Well, I don't have a thousand bucks to waste, but lucky for us there are people who did, and who reviewed the course, so we can take a look at what it's actually like. YouTuber TyFrom99, in his video "Creator Courses: Selling Dreams as Products"^16, talks about the Part-Time YouTuber Academy. He also summarizes Abdaal's whole brand in a very concise way.
Ali is a YouTuber that has basically popularized what I like to call the "productivity cult". Almost every channel you see centered around the topic of productivity is influenced directly by Ali [...]. It's clear he carved the genre out almost single-handedly. (14:35)
He also mentions that the productivity sphere is a toxic space and that it's basically a "nerdified and systematized [version of] the hustle culture that people like Andrew Tate promote" (14:57). Oh gee, maybe his content isn't that different from the likes of Andrew Tate but only flavored in a different way?
Tate sells you dreams. He sells you success. He sells you being like him, which is rich (probably) and handsome (uhh, about that...) and successful with women (wait a moment!), and the only thing you need to do for that is fork over some of your cash and subscribe to his classes. He calls his grift "Hustler's University" and apparently makes millions from it.^17 Sound familiar?
Unlike Tate, however, Abdaal is upfront that nothing he teaches in his course is anything you couldn't find out by just searching the internet, so at least he's honest. He's selling you curated and condensed information that you would otherwise have to dig up yourself, or, in other words, he sells you time which you can use more productively. "We learned lessons the hard way, so you don't have to" indeed.
Fr0nzP., who also delves into the PTYA in his video^18 is less generous and says that "Ali makes over $130k each month, with 5--10 hours of effort each week", and argues "that giving the impression of productivity as a recipe of arriving at those numbers is dishonest and even borders on fraud" (12:33--12:47). Looking at some of the channels, he comes to the same conclusion as TyFrom99: namely, that most of the channels who took the class didn't see much success from it. Moreover, all the engagement these channels get seems to be from other people who took the class. Abdaal's quantity-over-quality approach (don't forget that in his opinion productivity just means doing more in less time) shows here, too.
Final Thoughts
So what's the takeaway of all of this? In the beginning I promised to make a point about people who are into AI and who defend this technology despite its obvious problems. It's exactly the people who swoon over the weird takes of Ali Abdaal, who define being productive as cramming as much activity into their day as humanly possible, who don't give a rat's ass about art and don't assign value to it, and who don't view time spent recreationally as worth their while, who feel drawn to the promises of Altman too. Yes, he is a podcasting bro indeed, because behind all his thinly-veiled technofascist TESCREAL talk he is selling you productivity too. ChatGPT can work for you, it can save you time, it can do the tasks you don't want to do!
What are these tasks, though? In the Culture series by Iain M. Banks^19, one of the great science-fiction series of modern times, which deals with AI as a major cultural factor (and which isn't understood by any AI bro who's read it), boring menial tasks are automated so that humanity, under the leadership of the benign AIs, can spend its time engaging in art and things it enjoys. Altman's future has nothing of that, because none of it is of value to him. Instead, art is automated so people can work more and make more money.
It's no wonder that big companies like Adobe subscribe to this ideology and try to force it upon their customers^20, because they are invested in Altman's bubble, but who are the small people who do? The self-proclaimed "AI artists" who shout that AI democratizes art and finally makes it accessible to the masses (never mind that there are more art tutorials on YouTube than productivity shit, and that there are few skills that need as little investment as art, because you can get started with a pencil you steal from IKEA and the back of your last unpaid electricity bill if you really want)?
It seems counterproductive to peddle AI as a small creator at this point, when more and more consumers seem to be turned off by products that use it^21, but that isn't what these people see. For them, it's like Ali Abdaal's advice that you first need to vomit out 100 videos on YouTube and only then start worrying about quality. These first 100 videos, or I guess artworks in this case, would've been part of your training at any point in time prior to 2022, but now the automatic plagiarizer can make them for you in an hour, and you can put them in your portfolio and call yourself an artist. It doesn't even matter that you don't gain any experience or skill from that, because you are being productive. Not a single artist in human history could produce that many works in that short amount of time, just as no one could watch as much anime before the invention of the fast-forward button.
"Feel Good Productivity" indeed, because doesn't it feel good to have a portfolio that keeps filling itself? To have a tool that promises to make writer's block go poof? Because that, everyone, must be the future of productivity!
Or, maybe not, because normal people (you know, those who don't wanna watch their shows on 3.5x speed but actually take time and enjoy them, or who don't speed-read novels, and who don't measure the values of their lives on how much side-hustling they can do during their lunch break at work) do not seem to view the increased workload as more productive but instead find it does quite the opposite.^22
Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that, 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it's hampering productivity and contributing to employee burnout.
Well, we can certainly see where the toxic productivity crowd sees themselves then, can't we?
Footnotes
(This is an expanded version of two of my comments [Comment A, Comment B] - go and read those if you want)
Well, Character.ai got themselves into some real deep shit recently - repeat customer Sewell Setzer shot himself, and his mother, Megan Garcia, is suing the company, its founders and Google as a result, accusing them of "anthropomorphising" their chatbots and offering “psychotherapy without a license”, among other things, and demanding a full-blown recall.
Now, I'm not a lawyer, but I can see a few aspects which give Garcia a pretty solid case:
- The site has "mental health-focused chatbots like “Therapist” and “Are You Feeling Lonely,” which Setzer interacted with", as Emma Roth noted writing for The Verge.
- Character.ai has already had multiple addiction/attachment cases like Sewell's - I found articles from Wired and news.com.au, plus a few user testimonies (Exhibit A, Exhibit B, Exhibit C) about how damn addictive the fucker is.
- As Kevin Roose notes for the NYT, "many of the leading A.I. labs have resisted building A.I. companions on ethical grounds or because they consider it too great a risk". That could be used to suggest character.ai were being particularly reckless.
- Google's researchers published a paper which warned of the potential harms chatbots can cause, noting that users can be "persuaded to take their own life" and referencing a case of exactly that for proof. (Added March 18th 2025 - thanks to Maggie Harrison Dupré for finding this.)
Which way the suit's gonna go, I don't know - my main interest is in the potential fallout.
Some Predictions
Win or lose, I suspect this lawsuit is going to sound character.ai's death knell - even if they don't get regulated out of existence, "our product killed a child" is the kind of Dasani-level PR disaster few companies can recover from, and news of this will likely prompt any would-be investors to run for the hills.
If Garcia does win the suit, it'd more than likely set a legal precedent which denies Section 230 protection to chatbots, if not AI-generated content in general. If that happens, I expect a wave of lawsuits against other chatbot apps like Replika, Kindroid and Nomi at the minimum.
As for the chatbot makers themselves, I expect they're gonna lock their shit down hard and fast to avoid ending up with a situation like this on their own hands, and I expect their users are gonna be pissed.
As for the AI industry at large, I suspect they're gonna try and paint the whole thing as a frivolous lawsuit and Garcia as denying any fault for her son's suicide, a la the "McDonald's coffee case". How well this will do, I don't know - personally, considering the AI industry's godawful reputation with the public, I expect they're gonna have some difficulty.
(This is basically an expanded version of a comment on the weekly Stubsack - I've linked it above for convenience's sake.)
This is pure gut instinct, but I’m starting to get the feeling this AI bubble’s gonna destroy the concept of artificial intelligence as we know it.
On the artistic front, there's the general tidal wave of AI-generated slop (which I've come to term "the slop-nami") which has come to drown the Internet in zero-effort garbage, interesting only when the art's utterly insane or its prompter gets publicly humiliated, and, to quote Line Goes Up, "derivative, lazy, ugly, hollow, and boring" the other 99% of the time.
(And all while the AI industry steals artists' work, destroys their livelihoods and shamelessly mocks their victims throughout.)
On the "intelligence" front, the bubble's given us public and spectacular failures of reasoning/logic like Google gluing pizza and eating onions, ChatGPT sucking at chess and briefly losing its shit, and so much more - even in the absence of formal proof LLMs can't reason, it's not hard to conclude they're far from intelligent.
All of this is, of course, happening whilst the tech industry as a whole is hyping the ever-loving FUCK out of AI, breathlessly praising its supposed creativity/intelligence/brilliance and relentlessly claiming that they're on the cusp of AGI/superintelligence/whatever-the-fuck-they're-calling-it-right-now, they just need to raise a few more billion dollars and boil a few more hundred lakes and kill a few more hundred species and enable a few more months of SEO and scams and spam and slop and soulless shameless scum-sucking shitbags senselessly shitting over everything that was good about the Internet.
The public's collective consciousness was ready for a lot of futures regarding AI - a future where it took everyone's jobs, a future where it started the apocalypse, a future where it brought about utopia, etcetera. A future where AI ruins everything by being utterly, fundamentally incompetent, like the one we're living in now?
That's a future the public was not ready for - sci-fi writers weren't playing much with the idea of "incompetent AI ruins everything" (Paranoia is the only example I know of), and the tech press wasn't gonna run stories about AI's faults until it became unignorable (like that lawyer who got in trouble for taking ChatGPT at its word).
Now, of course, the public's had plenty of time to let the reality of this current AI bubble sink in, to watch as the AI industry tries and fails to fix the unfixable hallucination issue, to watch the likes of CrAIyon and Midjourney continually fail to produce anything even remotely worth the effort of typing out a prompt, to watch AI creep into and enshittify every waking aspect of their lives as their bosses and higher-ups buy the hype hook, line and fucking sinker.
All this, I feel, has built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence^tm^), no matter how many server farms you build or oceans of water you boil.
Especially so on the creativity front - publicly rejecting AI, like what Procreate and Schoolism did, earns you an instant standing ovation, whilst openly shilling it (like PC Gamer or The Bookseller) or showcasing it (like Justine Moore, Proper Prompter or Luma Labs) gets you publicly and relentlessly lambasted. To quote Baldur Bjarnason, the “E-number additive, but for creative work” connotation of “AI” is more-or-less a permanent fixture in the public’s mind.
I don't have any pithy quote to wrap this up, but to take a shot in the dark, I expect we're gonna see a particularly long and harsh AI winter once the bubble bursts - one fueled not only by disappointment in the failures of LLMs, but by widespread public outrage at the massive damage the bubble inflicted, with AI funding facing heavy scrutiny as the public comes to treat any research into the field as done with potentially malicious intent.
This started as a summary of a random essay Robert Epstein (fuck, that's an unfortunate surname) cooked up back in 2016, and evolved into a diatribe about how the AI bubble affects how we think of human cognition.
This is probably a bit outside awful's wheelhouse, but hey, this is MoreWrite.
The TL;DR
The general article concerns two major metaphors for human intelligence:
- The information processing (IP) metaphor, which views the brain as some form of computer (implicitly a classical one, though you could probably cram a quantum computer into that metaphor too)
- The anti-representational metaphor, which views the brain as a living organism, which constantly changes in response to experiences and stimuli, and which contains jack shit in the way of any computer-like components (memory, processors, algorithms, etcetera)
Epstein's general view is, if the title didn't tip you off, firmly on the anti-rep metaphor's side, dismissing IP as "not even slightly valid" and openly arguing for dumping it straight into the dustbin of history.
His main piece of evidence for this is a basic experiment, where he has a student draw two images of dollar bills - one from memory, and one with a real dollar bill as reference - and compare the two.
Unsurprisingly, the image made with a reference blows the image from memory out of the water every time, which Epstein uses to argue against any notion of the image of a dollar bill (or anything else, for that matter) being stored in one's brain like data in a hard drive.
Instead, he argues that the student re-experienced seeing the bill when drawing it from memory - an ability they have only because their brain has changed with every dollar bill they've seen up to that point.
Another piece of evidence he brings up is a 1995 paper from Science by Michael McBeath regarding baseballers catching fly balls. Where the IP metaphor reportedly suggests the player roughly calculates the ball's flight path with estimates of several variables ("the force of the impact, the angle of the trajectory, that kind of thing"), the anti-rep metaphor (given by McBeath) simply suggests the player catches them by moving in a manner which keeps the ball, home plate and the surroundings in a constant visual relationship with each other.
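McBeath's heuristic lends itself to a toy simulation - a sketch under invented assumptions (drag-free physics, made-up launch speeds and starting position; none of the numbers come from the paper). The fielder never calculates a flight path; they just move so that the tangent of the elevation angle to the ball keeps rising at a constant rate, and the geometry walks them to the catch:

```python
G = 9.81  # gravity, m/s^2

def ball_pos(t, vx=20.0, vy=15.0):
    """Drag-free projectile launched from the origin (invented speeds)."""
    return vx * t, vy * t - 0.5 * G * t * t

def run_fielder(x_start=60.0, dt=0.01):
    """Chase the ball IP-free: never estimate forces or trajectories,
    just hold tan(elevation angle to the ball) to a constant growth rate."""
    t = dt
    xb, yb = ball_pos(t)
    x_f = x_start
    rate = yb / ((x_f - xb) * t)   # lock in the initial optical rate
    t_land = 2 * 15.0 / G          # ball returns to y = 0 (vy = 15.0)
    while t + dt < t_land:
        t += dt
        xb, yb = ball_pos(t)
        x_f = xb + yb / (rate * t)  # move so tan(theta) = rate * t holds
    return x_f
```

Whatever the starting position, holding that one visual relationship deposits the fielder at the landing point as the ball comes down - no variables estimated, no flight path computed.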
The final piece I could glean from this is a report in Scientific American about the Human Brain Project (HBP), a $1.3 billion project launched by the EU in 2013, made with the goal of simulating the entire human brain on a supercomputer. Said project went on to become a "brain wreck" less than two years in (and eight years before its 2023 deadline) - a "brain wreck" Epstein implicitly blames on the whole thing being guided by the IP metaphor.
Said "brain wreck" is a good place to cap this section off - the essay is something I recommend reading for yourself (even if I do feel its arguments aren't particularly strong), and it's not really the main focus of this little ramblefest. Anyways, onto my personal thoughts.
Some Personal Thoughts
Personally, I suspect the AI bubble's made the public a lot less receptive to the IP metaphor these days, for a few reasons:
- Artificial Idiocy
The entire bubble was sold as a path to computers with human-like, if not godlike, intelligence - artificial thinkers smarter than the best human geniuses, art generators better than the best human virtuosos, et cetera. Hell, the AIs at the centre of this bubble run on neural networks, whose design is loosely modelled on our current understanding of how the brain works.
What we instead got was Google telling us to eat rocks and put glue in pizza, chatbots hallucinating everything under the fucking sun, and art generators drowning the entire fucking internet in pure unfiltered slop, identifiable in the uniquely AI-like errors it makes. And all whilst burning through truly unholy amounts of power and receiving frankly embarrassing levels of hype in the process.
(Quick sidenote: Even a local model running on some rando's GPU is a power-hog compared to what it's trying to imitate - digging around online indicates your brain uses only about 20 watts of power to do what it does.)
With the parade of artificial stupidity the bubble's given us, I wouldn't fault anyone for coming to believe the brain isn't like a computer at all.
- Inhuman Learning
Additionally, AI bros have repeatedly and incessantly claimed that AIs are creative and that they learn like humans, usually in response to complaints about the Biblical amounts of art stolen for AI datasets.
Said claims are, of course, flat-out bullshit - last I checked, human artists only need a few references to actually produce something good and original, whilst your average LLM will produce nothing but slop no matter how many terabytes upon terabytes of data you throw at its dataset.
This all arguably falls under the "Artificial Idiocy" heading, but it felt necessary to point out - these things lack the creativity or learning capabilities of humans, and I wouldn't blame anyone for taking that to mean that brains are uniquely unlike computers.
- Eau de Tech Asshole
Given how much public resentment the AI bubble has built towards the tech industry (which I covered in my previous post), my gut instinct's telling me that the IP metaphor is also starting to be viewed in a harsher, more "tech asshole-ish" light - not merely a reductive/incorrect view of human cognition, but a sign you put tech over human lives, or don't see other people as human.
Of course, the AI industry providing a general parade of the absolute worst scumbaggery we know (with Mira Murati being an anti-artist scumbag and Sam Altman being a general creep as the biggest examples) is probably helping that along, alongside all the active attempts by AI bros to mimic real artists (exhibit A, exhibit B).
(Gonna expand on a comment I whipped out yesterday - feel free to read it for more context)
At this point, it's already well known AI bros are crawling up everyone's ass and scraping whatever shit they can find - robots.txt, honesty and basic decency be damned.
The good news is that services have started popping up to actively cockblock AI bros' digital smash-and-grabs - Cloudflare made waves when they began offering blocking services for their customers, but Spawning AI's recently put out a beta for an auto-blocking service of their own called Kudurru.
(Sidenote: Pretty clever of them to call it Kudurru.)
I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to actively feed complete garbage to scrapers instead, whether by planting decoy nonsense on webpages to mislead them or by trying to prompt-inject the shit out of the AIs themselves.
The main advantage I can see is subtlety - it'll be obvious to AI corps if their scrapers are given a 403 Forbidden and told to fuck off, but the chance of them noticing that their scrapers are getting fed complete bullshit isn't that high - especially considering AI bros aren't the brightest bulbs in the shed.
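As a sketch of what that could look like: the user-agent substrings below belong to real, documented AI crawlers, but everything else here (the word salad, the serving logic) is invented for illustration, not any existing tool:

```python
import random

# Substrings of real, documented AI-crawler user agents;
# this list is illustrative, not exhaustive.
AI_SCRAPER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

REAL_PAGE = "<p>The actual article text.</p>"

# Hypothetical decoy vocabulary - plausible-looking junk for the scraper.
WORD_SALAD = ("synergy paradigm artisanal quantum blockchain "
              "disruption onboarding holistic leverage").split()

def is_ai_scraper(user_agent):
    """Crude check: does the user agent match a known AI crawler?"""
    return any(sig in user_agent for sig in AI_SCRAPER_SIGNATURES)

def serve(user_agent, seed=None):
    """Humans get the real page; scrapers get word salad - with a 200 OK
    rather than a 403, so nothing tips them off they've been spotted."""
    if not is_ai_scraper(user_agent):
        return REAL_PAGE
    rng = random.Random(seed)
    junk = " ".join(rng.choice(WORD_SALAD) for _ in range(60))
    return "<p>" + junk + "</p>"
```

The whole point of the design is that the scraper's request succeeds normally - the poisoning only shows up later, in the dataset.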
Arguably, AI art generators are already getting sabotaged this way to a strong extent - Glaze and Nightshade aside, ChatGPT et al's slop-nami has provided a lot of opportunities for AI-generated garbage (text, music, art, etcetera) to get scraped and poison AI datasets in the process.
How effective this will be against the "summarise this shit for me" chatbots which inspired this high-length shitpost I'm not 100% sure, but between one proven case of prompt injection and AI's dogshit security record, I expect effectiveness will be pretty high.
This is just a draft, best refrain from linking. (I hope we'll get this up tomorrow or Monday. edit: probably this week? edit 2: it's up!!) The [bracketed] stuff is links to cites.
Please critique!
A vision came to us in a dream — and certainly not from any nameable person — on the current state of the venture capital fueled AI and machine learning industry. We asked around and several in the field concurred.
AIs are famous for “hallucinating” made-up answers with wrong facts. The hallucinations are not decreasing. In fact, the hallucinations are getting worse.
If you know how large language models work, you will understand that all output from an LLM is a “hallucination” — it’s generated from the latent space and the training data. But if your input contains mostly facts, then the output has a better chance of not being nonsense.
Unfortunately, the VC-funded AI industry runs on the promise of replacing humans with a very large shell script. If the output is just generated nonsense, that’s a problem. There is a slight panic among AI company leadership about this.
Even more unfortunately, the AI industry has run out of untainted training data. So they’re seriously considering doing the stupidest thing possible: training AIs on the output of other AIs. This is already known to make the models collapse into gibberish. [WSJ, archive]
There is enough money floating around in tech VC to fuel this nonsense for another couple of years — there are hundreds of billions of dollars (family offices, sovereign wealth funds) desperate to find an investment. If ever there was an argument for swingeing taxation followed by massive government spending programs, this would be it.
Ed Zitron gives it three more quarters (nine months). The gossip concurs with Ed on this being likely to last for another three quarters. There should be at least one more wave of massive overhiring. [Ed Zitron]
The current workaround is to hire fresh Ph.Ds to fix the hallucinations and try to underpay them on the promise of future wealth. If you have a degree with machine learning in it, gouge them for every penny you can while the gouging is good.
AI is holding up the S&P 500. This means that when the AI VC bubble pops, tech will drop. Whenever the NASDAQ catches a cold, bitcoin catches COVID — so expect crypto to go through the floor in turn.