BlueMonday1984

joined 2 years ago

"Techbro syndrome" would be a perfect name for it, honestly.

The Audio Mods are doing God's work keeping the portal slop-free. It's good to know there's at least one place where human-made work is still valued.

[–] BlueMonday1984@awful.systems 9 points 6 days ago (1 children)

Call of Duty: Black Ops 7 has caused backlash for Activision after AI slop Calling Cards were discovered in-game.

Beyond causing large-scale backlash on Reddit, Twitter and basically everywhere else, it's also gotten called out by US Congressman Ro Khanna (also on Twitter).

[–] BlueMonday1984@awful.systems 3 points 2 weeks ago

I also learned from reading this that Bitwarden bought into AI. They don't appear to have let vulnerability extruders ruin their code as of this writing, but any willingness to entertain the fascism machines is enough for me to consider jumping ship.

[–] BlueMonday1984@awful.systems 1 points 2 weeks ago (4 children)

It's also one of the authors of "Attention Is All You Need", one of the founding texts of the AI ideology.

[–] BlueMonday1984@awful.systems 6 points 2 weeks ago

I’m gonna say it: The entire “artificial intelligence”/“machine learning” research field is corrupt. They have institutionally accepted the bullshit fountain as a tool. It doesn’t matter if they’re only using chatbots as a “pilot program”; they’ve bought into the ideology. They’ve granted fashtech a seat at the bar and forced all the other customers to shake its hand.

Abso-fucking-lutely. Oxford's latest "research paper" isn't marketing - it's propaganda. Propaganda for bullshit fountains, and for the ideology which endorses them.

[–] BlueMonday1984@awful.systems 6 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Another deep-dive into DHH's decline has popped up online: DHH and Omarchy: Midlife crisis:

[–] BlueMonday1984@awful.systems 3 points 2 weeks ago

What’s a government backstop, and does it happen often? It sounds like they’re asking for a preemptive bail-out.

Zitron's stated multiple times that a bailout isn't coming, but I'm not ruling it out myself - AI has proven highly useful as a propaganda tool and an accountability sink, and the oligarchs in office have good reason to keep it alive.

[–] BlueMonday1984@awful.systems 9 points 2 weeks ago

I feel slightly better about my Pepsi addiction now.

The Coca-Cola Company is desperately trying to talk up this mediocre demo as the best demo ever. That’s how AI works now — AI companies don’t give you an impressive demo that can’t be turned into a product, they give you a garbage demo and loudly insist it’s actually super cool.

Considering AI supporters are too artistically blind to tell quality work from slop, I'm gonna chalk that up to them genuinely believing it's the best thing since sliced bread.

Times are tough, the real economy where people live is way down, the recession is biting, and the normal folk know the ones promoting AI want them out of a job. If you push AI, you are the enemy of ordinary people. And the ordinary people know it.

Damn right, David. Here's to hoping the ordinary people don't forget who the AI pushers were once winter sets in.

[–] BlueMonday1984@awful.systems 2 points 2 weeks ago

i think you need to be a little bit more specific unless sounding a little like an unhinged cleric from memritv is what you’re going for

I'll admit to taking your previous comment too literally here - I tend to assume people are completely serious unless I can clearly tell otherwise.

but yeah nah i don’t think it’s gonna last this way, people want to go back to just doing their jobs like it used to be, and i think it may be that bubble burst wipes out companies that subsidized and provided cheap genai, so that promptfondlers hammering image generators won’t be as much of a problem. propaganda use and scams will remain i guess

Scams and propaganda will absolutely remain a problem going forward - LLMs are tailor-made to flood the zone with shit (good news for propagandists), and AI will hand scammers plenty of useful tools for deception.

[–] BlueMonday1984@awful.systems -1 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Considering we've already got a burgeoning Luddite movement that's been kicked into high gear by the AI bubble, I'd personally like to see an outgrowth of that movement be what ultimately kicks it off.

There were already some signs of this back in August, when anti-AI protesters vandalised cars and left "Butlerian Jihad" leaflets outside a pro-AI business meetup in Portland.

Alternatively, I can see the Jihad kicking off as part of an environmentalist movement - to directly quote Baldur Bjarnason:

[AI has] turned the tech industry from a potential political ally to environmentalism to an outright adversary. Water consumption of individual queries is irrelevant because now companies like Google and Microsoft are explicitly lined up against the fight against climate disaster. For that alone the tech should be burned to the ground.

I wouldn't rule out an artist-led movement being how the Jihad starts, either - between the AI industry "directly promising to destroy their industry, their work, and their communities" (to quote Baldur again), and the open and unrelenting contempt AI boosters have shown for art and artists, artists in general have plenty of reason to see AI as an existential threat to their craft and/or a show of hatred for who they are.

[–] BlueMonday1984@awful.systems 5 points 2 weeks ago (5 children)

Part of me wants to see Google actually try this and get publicly humiliated by their nonexistent understanding of physics; part of me dreads that it'll dump even more fucking junk into space.

 

(This is an expanded version of a comment I made, which I've linked above.)

Well, seems the tech industry’s prepared to pivot to quantum if and when AI finally dies and goes away forever. If and when the hucksters get around to inflating the quantum bubble, I expect they’re gonna find themselves facing some degree of public resistance - probably not to the extent of what AI received, but still enough to give the hucksters some trouble.

The Encryption Issue

One of quantum’s big selling points is its purported ability to break the encryption algorithms in use today - for a couple of examples, Shor’s algorithm can reportedly double-tap public-key cryptography schemes such as RSA, and Grover’s algorithm promises to supercharge brute-force attacks on symmetric-key cryptography.
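To make the Grover half of that concrete: Grover's algorithm searches an unsorted space of N keys in roughly √N quantum queries, which effectively halves a symmetric key's bit-strength. A back-of-envelope sketch in Python (illustrative arithmetic only, not cryptanalysis):

```python
# Back-of-envelope: Grover's algorithm finds a needle among N haystacks
# in ~sqrt(N) quantum queries, so an n-bit symmetric key offers only
# ~n/2 bits of security against an idealised quantum adversary.
import math

def grover_effective_bits(key_bits: int) -> int:
    """Effective security level of an n-bit key under Grover's algorithm."""
    return key_bits // 2

def grover_queries(key_bits: int) -> float:
    """Approximate number of Grover iterations needed to find the key."""
    return math.sqrt(2 ** key_bits)

for bits in (128, 256):
    print(f"AES-{bits}: ~{grover_effective_bits(bits)}-bit effective security, "
          f"~2^{round(math.log2(grover_queries(bits)))} quantum queries")
```

This is also why post-quantum guidance tends to treat AES-256 as the safe symmetric choice: even halved, 128 bits of effective security remains out of reach, whereas Shor's algorithm breaks RSA outright rather than merely weakening it.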

Given this, I fully expect its supposed encryption-breaking abilities to stoke outcry and resistance from privacy rights groups. Even as a hypothetical, such power falling into government hands all but guarantees Nineteen Eighty-Four levels of mass surveillance and invasion of privacy.

Additionally, I expect post-quantum encryption will earn a lot of attention during the bubble as well, to pre-emptively undermine such attempts at mass surveillance.

Environmental Concerns

Much like with AI, info on how much power quantum computing requires is pretty scarce (though that’s because quantum computers more-or-less don’t exist yet, not because anyone is actively hiding/juicing the numbers the way the AI corps do).

The only concrete number I could find came from IEEE Spectrum, which puts the power consumption of the D-Wave 2X (from 2015) at “slightly less than 25 kilowatts”, with practically all the power going to the refrigeration unit keeping it within a hair’s breadth of absolute zero, and the processor itself using “a tiny fraction of a microwatt”.
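For a sense of scale on that 25-kilowatt figure, a quick bit of arithmetic (the ~10,500 kWh/year household figure is a rough US EIA ballpark, used here purely for comparison):

```python
# Quick arithmetic on the IEEE Spectrum figure: a D-Wave 2X drawing
# ~25 kW continuously (nearly all of it refrigeration) over a year.
HOURS_PER_YEAR = 24 * 365          # ignoring leap years
power_kw = 25
annual_kwh = power_kw * HOURS_PER_YEAR
print(f"~{annual_kwh:,} kWh/year")

# For scale: an average US household uses roughly 10,500 kWh/year
# (approximate EIA figure), so one such machine draws about as much
# as ~21 households.
print(f"~{annual_kwh / 10_500:.0f} households' worth")
```

Hefty for a single machine, but trivial next to an AI datacentre campus - which is exactly why the comparison may not save quantum from the environmental backlash below.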

Given the minimal amount of info, and the AI bubble still being fresh in the public’s mind, I expect quantum systems will face resistance from environmental groups. Between the obscene power/water consumption of AI datacentres, the shitload of pollution said datacentres cause in places like Memphis, and the industry’s attempts to increase said consumption whenever possible, any notion that tech cares about the environment is dead in the (polluted) water, and attempts to sell the tech as energy efficient/environmentally friendly will likely fall on deaf ears.

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

 

It’s been a couple of weeks since my last set of predictions on the AI winter. I’ve found myself making a couple more.

Mental Health Crises

With four known suicides (Adam Raine, Sewell Setzer, Sophie Rottenberg and an unnamed Belgian man), a recent murder-suicide, and involuntary commitments caused by AI psychosis, there’s solid evidence to show that using AI is a fast track to psychological ruin.

On top of that, AI usage is deeply addictive, combining a psychic’s con with a gambling addiction to produce what amounts to digital cocaine, leaving its users hopelessly addicted to it, if not utterly dependent on it to function (such cases often being referred to as “sloppers”).

If and when the chatbots they rely on are shut down, I expect a major outbreak of mental health crises among sloppers and true believers, as they find themselves unable to handle day-to-day life without a personal sycophant/”assistant”/”””therapist””” on hand at all times. For psychiatrists/therapists, I expect they will find a steady supply of new clients during the winter, as the death of the chatbot sends addicted promptfondlers spiralling.

Skills Gaps Galore

One of the more common claims from promptfondlers and boosters when confronted is “you won’t be replaced by AI, but by a human using AI”.

With how AI prevents juniors from developing their skills, makes seniors worse at their jobs, damages productivity whilst creating a mirage of it, and damages its users’ critical thinking and mental acuity, all signs point to the exact opposite being the case. Those who embrace and use AI will be left behind, their skills rotting away, whilst their AI-rejecting peers remain as skilled as before the bubble - if not more so, thanks to spending time and energy on actually useful skills rather than shit like “prompt engineering” or “vibe coding”.

Once the winter sets in and the chatbots disappear, the gulf between these two groups is going to become much wider, as promptfondlers’ crutches are forcibly taken away from them and their “skills” in using the de-skilling machine are rendered useless. As a consequence, I expect promptfondlers will be fired en masse and struggle to find work during the winter, as their inability to work without a money-burning chatbot turns them into a drag on a company’s bottom line.

 

Recently, I read a short article from Iris Meredith about rethinking how we teach programming. It's a pretty solid piece of work all around, and it's got me thinking about how to further build on her ideas.

This contains a quick overview of her newsletter to get you up to speed, but I recommend reading it for yourself.

The Problem

As is rather obvious to most of us, the software industry is in a dire spot - Meredith summed it up better than I can:

Software engineers tend to be detached, demotivated and unwilling to care much about the work they're doing beyond their paycheck. Code quality is poor on the whole, made worse by the current spate of vibe coding and whatever other febrile ideas come out of Sam Altman's brain. Much of the software that we write is either useless or actively hurts people. And the talented, creative people that we most need in the industry are pushed to the margins of it.

As for the cause, Iris points to the "teach the mystic incantations" style used in many programming courses, which ignores teaching students how to see through an engineer’s eyes (so to speak), and teaching them the ethics of care necessary to write good code (roughly 90% of what goes into software engineering). As Iris notes:

This tends to lead, as you might expect, to a lot of new engineers being confused, demotivated and struggling to write good code or work effectively in a software environment. [...] It also means, in the end, that a lot of people who'd be brilliant software engineers just bounce off the field completely, and that a lot of people who find no joy in anything and just want a big salary wind up in the field, never realising that they have no liking or aptitude for it.

Meredith’s Idea

Meredith’s solution, in brief, is threefold.

First, she recommends starting people off with HTML as their first language, giving students the tools they need to make something they want and care about (a personal website, in this case), and providing a solid bedrock for learning fundamental programming skills.

Second, she recommends using “static site generators with templating engines” as an intermediate step between HTML/CSS and full-blown programming, to provide students an intuitive method of understanding basic concepts such as loops, conditionals, data structures and variables.

(As another awful member points out, they provide an easy introduction to performance considerations/profiling by being blazing fast compared to all-too common JS monoliths online, and provide a good starting point for introducing modularity as well.)
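To illustrate what those templating engines actually teach, here's a minimal hand-rolled sketch in Python - a toy, not any real SSG's API - showing how variables, a loop, and a conditional expand into plain HTML at build time:

```python
# Toy sketch of what a static-site templating engine does at build time:
# substitute variables, expand a loop, apply a conditional. Real engines
# (Jinja2, Liquid, etc.) do this with proper template parsers; this is
# purely illustrative.

def render_post_list(site_title, posts):
    """Expand a post index into static HTML: a variable, a loop, a conditional."""
    lines = [f"<h1>{site_title}</h1>", "<ul>"]
    for post in posts:            # the "loop" construct
        if post.get("draft"):     # the "conditional" construct
            continue              # drafts are skipped at build time
        lines.append(f'  <li><a href="{post["url"]}">{post["title"]}</a></li>')
    lines.append("</ul>")
    return "\n".join(lines)

posts = [
    {"title": "Hello, world", "url": "/hello.html"},
    {"title": "WIP rant", "url": "/wip.html", "draft": True},
]
print(render_post_list("My First Site", posts))
```

The point for students: the output is plain HTML they already understand, so loops, conditionals, data structures and variables all arrive as "ways to avoid writing the same tag twice" rather than as abstract incantations.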

Third, and finally, she recommends having students publish their work online right from the start, to give them reason to care about their work as early as possible and give them the earliest possible opportunity to learn about the software development life cycle.

A Complementary Idea

Meredith’s suggested approach to software education is pretty solid on all fronts - it gets students invested in their programming work, and gives them the tools needed to make and maintain high-quality code.

If I were to expand on this a bit, I think the obvious addition would be to provide an arts education to complement Iris’ proposed webdev-based approach.

As an explicit means of self-expression, an arts education would wonderfully complement the expressive elements of software Meredith wishes to highlight. With the focus on webdev, developing students’ art skills would expand their ability to customise their websites to their liking, letting them make something truly unique to themselves.

The skills students learn through the arts would also complement what they learn directly in programming. The critical eye that art critique grants them will come in handy for code review, the creative muscles they build through art will enhance their problem-solving abilities, and so on.

Beyond that, I expect the complementary arts will do a good job attracting creatives to the field, whilst pushing away “people who find no joy in anything and just want a big salary”, which Meredith notes are common in the field. Historically, “learn to code” types have viewed the arts as a “useless” degree, so they’ll near-certainly turn their noses up at having to learn it alongside something more “useful”, leaving the door open for more creatives to join up.

A More Outlandish Idea

For a more outlandish idea, the long-defunct, yet well-beloved multimedia platform Adobe Flash could prove surprisingly useful for a programming education, especially with the complementary arts education I suggested before.

Being effectively an IDE and an animation program combined into one, Flash offers a means of developing and testing a student’s skills in art and programming simultaneously, and provides an easy showcase of how the two can complement each other.

Deploying Flash to a personal website wouldn’t be hard for students either, as the Ruffle emulator allows Flash content to play without having to install Flash Player. (Rather helpful, given most platforms don’t accept Flash content these days :P)

 

Another excellent piece from Iris Meredith - strongly recommend reading if you want an idea of how to un-fuck software as a field.

 


 

Well, it seems the AI bubble’s nearing its end - the Financial Times has reported a recent dive in tech stocks, the mass media has fully soured on AI, and there’s murmurs that the hucksters are pivoting to quantum.

By my guess, this quantum bubble is going to fail to get off the ground - as I see it, the AI bubble has heavily crippled the tech industry’s ability to create or sustain new bubbles, for two main reasons.

No Social License

For the 2000s and much of the 2010s, tech enjoyed a robust social license to operate - even if they weren’t loved per se (e.g. Apple), they were still pretty widely accepted throughout society, and resistance to them was pretty much nonexistent.

Whilst it was starting to fall apart with the “techlash” of the 2020s, the AI bubble has taken what social license tech has had left and put it through the shredder.

Environmental catastrophe, art theft and plagiarism, destruction of livelihoods and corporate abuse, misinformation and enabling fascism, all of this (and so much more) has eviscerated acceptance of the tech industry as it currently stands, inspiring widespread resistance and revulsion against AI, and the tech industry at large.

For the quantum bubble, I expect it will face similar resistance/mockery right out of the gate, with the wider public refusing to entertain whatever spurious claims the hucksters make, and fighting any attempts by the hucksters to force quantum into their lives.

(For a more specific prediction, quantum’s alleged encryption-breaking abilities will likely inspire backlash, being taken as evidence the hucksters are fighting against Internet privacy.)

No Hypergrowth Markets

As Baldur Bjarnason has noted about tech industry valuations:

“Over the past few decades, tech companies have been priced based on their unprecedented massive year-on-year growth that has kept relatively steady through crises and bubble pops. As the thinking goes, if you have two companies—one tech, one not—with the same earnings, the tech company should have a higher value because its earnings are likely to grow faster than the not-tech company. In a regular year, the growth has been much faster.”

For a while, this has held - even as the hypergrowth markets dried up and tech rapidly enshittified near the end of the ‘10s, the gravy train has managed to keep rolling for tech.

That gravy train is set to slam right into a brick wall, however - between the obscenely high costs of building and running LLMs (both upfront and ongoing) and the virtually nonexistent revenues those LLMs have provided (except for Nvidia, which has made a killing in the shovel-selling business), the AI bubble has burned billions upon billions of dollars on a product which is practically incapable of making a profit, and heavily embrittled the entire economy in the process.

Once the bubble finally bursts, it’ll gut the wider economy and much of the tech industry, savaging valuations across the board and killing off tech’s hypergrowth story in the process.

For the quantum bubble, this will significantly complicate attempts to raise investor/venture capital, as the finance industry comes to view tech not as an easy and endless source of growth, but as either a mature, stable industry which won’t provide the runaway returns they’re looking for, or as an absolute money pit of an industry, one trapped deep in a malaise era and capable only of wiping out whatever money you put into it.

(As a quick addendum, it's my 25th birthday tomorrow - I finished this over the course of four hours and planned to release it tomorrow, but decided to post it tonight.)

 


 

(This is a mega-expanded version of a stubsack comment: https://awful.systems/comment/8327535)

Multiple times before on awful.systems, I’ve claimed the AI bubble would provide the humanities some degree of begrudging respect, at the expense of STEM’s public image taking a nosedive.

In the process of writing this mini-essay, it’s become clear that I was predicting the humanities would cannibalise tech’s public image, rather than STEM’s - I had just failed to recognise that tech had made itself utterly synonymous with STEM up until now.

Still, I’ve made this claim, I might as well try to back it up.

High Paying, No More?

One of the major things propping up tech/STEM's public image is the notion that it's higher-paying than a humanities degree - that “learning to code” will earn you a high-paying job and financial stability, whilst taking any kind of “useless” arts degree will end with you working some form of low-wage employment (e.g. as a barista).

Between the complete clusterfuck that is the job market, the Trump administration’s war on American science, the use of AI to kill jobs left and right (whilst enshittifying what remains) and the ongoing layoffs ravaging the entire tech industry, the idea that any degree will earn you a stable job has been pretty thoroughly undermined.

And with coding getting the brunt of all of this, thanks to an oversaturated market and the AI bubble hitting tech particularly hard, any notion of tech being an easy road to riches is pretty much dead and buried.

Not Lookin’ So Smart

Another thing propping up tech/STEM’s image was the view of it being more “logical/rational” than the humanities - that it dealt with “objective” matters, compared to the highly-subjective humanities, that it was “apolitical” compared to the deeply-political humanities, that kinda stuff.

On that front, the AI bubble has become tech’s equivalent to the Sokal hoax, deeply undermining any and all notions of rationality tech had built up over the past few decades.

Artistically speaking, the large-scale art theft committed to create gen-AI, the vapidity and soullessness of the AI slop it produces, the AI bros’ failure to recognise this soullessness (Fig. 1, Fig. 2), and their actions regarding the effects of gen-AI (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) have deeply undermined tech’s ability to speak on matters of art, with the industry at large viewed as incapable of understanding art at best, and as hostile to art and artists at worst.

On a more general front, AI’s failures of reasoning (formal and informal, comedic and horrific), plus the tech industry’s refusal to recognise or acknowledge these failures (instead relentlessly hyping up AI’s supposed capabilities, making spurious claims about Incoming Superintelligence™ and doomsaying about how spicy autocomplete might kill us all), have put tech’s “rationality” into serious question, painting the industry at large as out-of-touch with reality and unconcerned with solving actual problems.

For the humanities generally, this bubble is going to make them look relatively grounded and reasonable by comparison, whilst for the arts specifically, they’ll likely be able to point to the slop-nami when their usefulness is questioned.

(Reports of AI usage causing metaphorical and literal brainrot likely aren’t helping, either, as they provide the public an obvious explanation for tech’s disconnection from reality.)

Eau de Fash

Tech has long had to deal with a “debate bro both sides free speech libertarianism” stench, as Soyweiser has noted, but between Silicon Valley’s willing collaboration with the Trump administration and fascists’ adoration of AI and AI slop, that stench has evolved into an unignorable Eau de Fash covering the entire industry.

As a consequence, I expect tech at large will be viewed as a Nazi bar writ large, with tech workers as a group seen as willing accomplices to fascism, if not outright fascists themselves. As for tech degrees, I expect they’ll be viewed as leaving their holders unequipped to resist fascism, if not outright vulnerable to fascist rhetoric.

Predicting the Job Market

(Disclaimer: This is not financial advice, this is just a shot in the dark from some dipshit with a laptop. I take no credit for whatever financial success my readers earn.)

With tech’s public cachet and “high-paying” reputation going out the window, plus the job market for tech collapsing, I expect a major drop-off in students taking up tech-related degrees, with a smaller drop-off for STEM degrees in general. By my guess, we aren’t gonna see another “learn to code” push for at least a decade. If and when another push starts, it’ll probably take on a completely different form than what we’ve seen before.

Exactly which professions will benefit from the tech crash, I don’t know - I’m not a Superpredictor™, I’m just some dipshit with a laptop. By my guess, professions which can exploit the fallout of AI to their benefit will have the best shot of becoming the next “lucrative cash cows”, so to speak.

For therapists/psychiatrists, the rise of AI psychosis and related mental health crises will likely give them a steady source of clients for the foreseeable future - whether that be because new clients have realised chatbot usage is ruining them, or because people are being involuntarily committed after losing touch with reality.

For those in writing related jobs, they may find lucrative work cleaning up attempts to sidestep them with AI slop, squeezing hefty premiums from desperate clients who find themselves lacking leverage over them.

For programmers (most likely senior programmers, juniors are still likely screwed), the rise of “vibe coding” has created mountains of technical debt and unmaintainable code that will need to be torn down - for those who manage to find themselves a job, they’ll probably make good money tearing those mountains down. For cybercriminals, the aforementioned “vibe coding”, plus the inherently insecure nature of chatbots/agents, will likely give them a lot of low-hanging fruit to go after.

As for degrees, those which can fill skills gaps the bubble has created/widened should benefit the most.

English/Creative Writing looks like an obvious winner - ChatGPT has fried a lot of people’s writing skills, so holding one of those degrees (ideally with a writing portfolio) can help convince an employer you don’t need spicy autocomplete to write for you.

Psychology/psychiatry will likely benefit quite a bit as well - both of those can directly assist in landing you a job as a therapist, which I’ve predicted will become much more lucrative in the coming years.

EDIT: Slightly expanded my prediction about programmers.
