BlueMonday1984

joined 2 years ago
[–] BlueMonday1984@awful.systems 12 points 1 month ago (6 children)

The Framework thread caused by the company's fash turn is still going even after eight full days.

Lotta lowlights to pick from, but the guy openly praising DHH for driving Basecamp straight off a cliff is particularly sneer-worthy:

[–] BlueMonday1984@awful.systems 8 points 1 month ago (1 children)

They posted this recent article written by Peter Coffin

Oh, hey, that's the "Plagiarism is AWESOME, And Here's Why" guy, who tut-tutted HBomberguy for erasing plagiarist shithead James Somerton from existence and went to bat for JK Rowling. Okay, yeah, dump this guy's shit in the fucking bin.

I was pretty strongly anti-copyright back when I was younger, but after seeing the plague of art theft and grave robbing the NFT fad brought (documented heavily by @NFTTheft on Twitter), and especially after the AI bubble triggered an onslaught of art theft, cultural vandalism and open hostility to artists, I have come around to strongly supporting it.

I may have some serious complaints about the current state of copyright (as basically everyone does), but it's clear that copyright is absolutely necessary to protect artists (rich and poor) from those who exploit the labour of others.

[–] BlueMonday1984@awful.systems 4 points 1 month ago

Guy's been doing well for himself since the Escapist imploded in 2023 - he's doing video reviews and video essays over on Second Wind, under the names Fully Ramblomatic and Semi Ramblomatic, respectively.

(As for the Escapist, it got sold off to a "private investor" and turned into a gambling content mill in 2025)

[–] BlueMonday1984@awful.systems 9 points 1 month ago

New Baldur Bjarnason: The inevitability of anger, on the impending reckoning for AI and tech influencers' attempts to avoid it, plus how social media shapes public discourse.

[–] BlueMonday1984@awful.systems 3 points 1 month ago

I know full well you're being sarcastic, but my answer is an emphatic "NO". I feel like I'm gonna need a lobotomy to get this hypothetical out of my head now.

[–] BlueMonday1984@awful.systems 4 points 1 month ago (4 children)

New Ed Zitron, giving exact numbers for how much money Cursor and Anthropic have lit on fire and continuing to shed light on the AI industry's ability to incinerate revenue.

[–] BlueMonday1984@awful.systems 7 points 1 month ago (4 children)

Should an AI copy of you help decide if you live or die?

To lightly paraphrase Yahtzee Croshaw:

Short answer: No. Long answer: No, and go fuck yourselves, you ignorant hype-mongering cockbags.

This is the second time this rancid idea has been put forward, and it's just as morally bankrupt as the first.

[–] BlueMonday1984@awful.systems 3 points 1 month ago* (last edited 1 month ago)

How do you even get to the point that you think that’s something you want to advertise?

Man's spent several years and shitloads of cash destroying his public image (and probably his brain) via slop bots, so I suspect he's getting desperate to prove his LLM booster turn wasn't a career-ruining blunder.

(He's also probably lost the ability to tell good work from bad work - that's a universal quality among slop advocates, as Gerard has pointed out on multiple occasions.)

[–] BlueMonday1984@awful.systems 2 points 1 month ago* (last edited 1 month ago) (1 children)

I don’t think there’s literally any non-shitty tech left with Framework turning fash.

Doing some digging, it seems GNOME's still non-shitty - they've reportedly refused sponsorship money from Framework, much to the whining of multiple people online (post is in Russian).

Doesn't change the fact that Framework's dealt a big blow to right-to-repair by doing this, but it's something.

EDIT: Just gonna add in something I gotta get off my chest:

Even from a "ruthless capitalist" perspective, Framework's fash turn is pretty baffling to me. They positioned themselves as beacons of right-to-repair, as good guys in tech trying to "fix consumer electronics, one category at a time" - their shit was overtly political from the fucking start. People weren't buying them to get the fastest laptops or the best value for money; they bought them because they believed in their stated mission. Anyone with business sense would've known shilling a fascist's personal Linux "distro" would present a severe risk to Framework's brand.

Exactly how Nirav got blindsided by this shit, I genuinely don't understand. Considering his response to the backlash involved "aPoLiTiCaL" "bIg TeNt" blather and publicly farming compassion from Twitter fash, it's probably because he's an outright fascist himself and assumed everyone else around him shared his utterly rancid views.

[–] BlueMonday1984@awful.systems 2 points 1 month ago

"Don't rely on random oracles and spirits when running a military campaign, you fool, you moron." - Sun Tzu, The Art of War (paraphrased)

[–] BlueMonday1984@awful.systems 4 points 1 month ago

Words of wisdom from Baldur Bjarnason (mostly repeated from his Basecamp post-mortem):

We know we’re reaching the late stages of a bubble when we start to see multiple “people in tech don’t really believe in all of this, honest, we just act like it because we think we have to, we’re a silent majority you see”, but the truth is that what you believe in private doesn’t matter. All that matters is that you’ve been acting like a true believer and you are what you do.

In work and politics, it genuinely doesn’t matter what you were thinking when you actively aided and abetted in shitting on people’s work, built systems that helped fascists, ruined the education system and pretty much all of media. What matters, and what you should be judged on, is what you did.

Considering a recent example where AI called someone a terrorist for opposing genocide, it's something that definitely bears repeating.

[–] BlueMonday1984@awful.systems 6 points 1 month ago (1 children)

If I had to pick a particular tidbit, I'd go with the indifference/disrespect Bone Thugs-n-Harmony got in their surprise set there - in any other context, that shit would have had people going fucking wild.

As Patrick himself notes, it's a perfect visual metaphor for the bitcoin bros' relation to art and culture, or more accurately the utter black hole of vapidity and soullessness that defines the community's tastes.

 

(This is basically an expanded version of a comment on the weekly Stubsack - I've linked it above for convenience's sake.)

This is pure gut instinct, but I’m starting to get the feeling this AI bubble’s gonna destroy the concept of artificial intelligence as we know it.

On the artistic front, there's the general tidal wave of AI-generated slop (which I've come to term "the slop-nami") drowning the Internet in zero-effort garbage - interesting only when the art's utterly insane or its prompter gets publicly humiliated, and, to quote Line Goes Up, "derivative, lazy, ugly, hollow, and boring" the other 99% of the time.

(And all while the AI industry steals artists' work, destroys their livelihoods and shamelessly mocks their victims throughout.)

On the "intelligence" front, the bubble's given us public and spectacular failures of reasoning/logic like Google gluing pizza and eating onions, ChatGPT sucking at chess and briefly losing its shit, and so much more - even in the absence of formal proof LLMs can't reason, its not hard to conclude they're far from intelligent.

All of this is, of course, happening whilst the tech industry as a whole is hyping the ever-loving FUCK out of AI, breathlessly praising its supposed creativity/intelligence/brilliance and relentlessly claiming that they're on the cusp of AGI/superintelligence/whatever-the-fuck-they're-calling-it-right-now, they just need to raise a few more billion dollars and boil a few more hundred lakes and kill a few more hundred species and enable a few more months of SEO and scams and spam and slop and soulless shameless scum-sucking shitbags senselessly shitting over everything that was good about the Internet.


The public's collective consciousness was ready for a lot of futures regarding AI - a future where it took everyone's jobs, a future where it started the apocalypse, a future where it brought about utopia, etcetera. A future where AI ruins everything by being utterly, fundamentally incompetent, like the one we're living in now?

That's a future the public was not ready for - sci-fi writers weren't playing much with the idea of "incompetent AI ruins everything" (Paranoia is the only example I know of), and the tech press wasn't gonna run stories about AI's faults until it became unignorable (like that lawyer who got in trouble for taking ChatGPT at its word).

Now, of course, the public's had plenty of time to let the reality of this current AI bubble sink in, to watch as the AI industry tries and fails to fix the unfixable hallucination issue, to watch the likes of CrAIyon and Midjourney continually fail to produce anything even remotely worth the effort of typing out a prompt, to watch AI creep into and enshittify every waking aspect of their lives as their bosses and higher-ups buy the hype hook, line and fucking sinker.


All this, I feel, has built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence^tm^), no matter how many server farms you build or oceans of water you boil.

Especially so on the creativity front - publicly rejecting AI, like what Procreate and Schoolism did, earns you an instant standing ovation, whilst openly shilling it (like PC Gamer or The Bookseller) or showcasing it (like Justine Moore, Proper Prompter or Luma Labs) gets you publicly and relentlessly lambasted. To quote Baldur Bjarnason, the “E-number additive, but for creative work” connotation of “AI” is more-or-less a permanent fixture in the public’s mind.

I don't have any pithy quote to wrap this up, but to take a shot in the dark, I expect we're gonna see a particularly long and harsh AI winter once the bubble bursts - one fueled not only by disappointment in the failures of LLMs, but by widespread public outrage at the massive damage the bubble inflicted, with AI funding facing heavy scrutiny as the public comes to treat any research into the field as done with potentially malicious intent.

 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

 

None of what I write in this newsletter is about sowing doubt or "hating," but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intelligence boom — which would be better described as a generative AI boom — is (as I've said before) unsustainable, and will ultimately collapse. I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.

Can't blame Zitron for being pretty downbeat in this - given the AI bubble's size and side-effects, it's easy to see how its bursting could have some cataclysmic effects.

(Shameless self-promo: I ended up writing a bit about the potential aftermath as well)

 


This started as a summary of a random essay Robert Epstein (fuck, that's an unfortunate surname) cooked up back in 2016, and evolved into a diatribe about how the AI bubble affects how we think of human cognition.

This is probably a bit outside awful's wheelhouse, but hey, this is MoreWrite.

The TL;DR

The general article concerns two major metaphors for human intelligence:

  • The information processing (IP) metaphor, which views the brain as some form of computer (implicitly a classical one, though you could probably cram a quantum computer into that metaphor too)
  • The anti-representational metaphor, which views the brain as a living organism, which constantly changes in response to experiences and stimuli, and which contains jack shit in the way of any computer-like components (memory, processors, algorithms, etcetera)

Epstein's general view is, if the title didn't tip you off, firmly on the anti-rep metaphor's side, dismissing IP as "not even slightly valid" and openly arguing for dumping it straight into the dustbin of history.

His main piece of evidence for this is a basic experiment, where he has a student draw two images of dollar bills - one from memory, and one with a real dollar bill as reference - and compare the two.

Unsurprisingly, the image made with a reference blows the image from memory out of the water every time, which Epstein uses to argue against any notion of the image of a dollar bill (or anything else, for that matter) being stored in one's brain like data in a hard drive.

Instead, he argues that the student was re-experiencing the sight of the bill when drawing it from memory - an ability they had only because their brain had changed over the many sightings of dollar bills leading up to that point.

Another piece of evidence he brings up is a 1995 paper from Science by Michael McBeath regarding baseballers catching fly balls. Where the IP metaphor reportedly suggests the player roughly calculates the ball's flight path with estimates of several variables ("the force of the impact, the angle of the trajectory, that kind of thing"), the anti-rep metaphor (given by McBeath) simply suggests the player catches them by moving in a manner which keeps the ball, home plate and the surroundings in a constant visual relationship with each other.
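(If you want to poke at this yourself, the heuristic is surprisingly easy to sanity-check numerically. Below is a toy Python sketch - closer to Chapman's classic version of the strategy than McBeath's full 2D model - showing that, for an idealised drag-free fly ball, the tangent of the ball's elevation angle climbs at a constant rate precisely when you're standing where it will land. All the numbers are invented for illustration.)

```python
# Toy check of the "constant visual relationship" idea: for a drag-free
# fly ball, the tangent of the ball's elevation angle (as seen by the
# fielder) climbs at a *constant* rate if and only if the fielder is
# standing where the ball will land. All numbers are invented.
g, vx, vy = 9.8, 15.0, 20.0    # gravity, ball's horizontal/vertical launch speed
T = 2 * vy / g                 # total flight time
landing_x = vx * T             # where the ball comes down (~61.2 m)

def tan_elevation(fielder_x: float, t: float) -> float:
    """Tangent of the angle at which a stationary fielder sees the ball."""
    y = vy * t - 0.5 * g * t * t
    return y / (fielder_x - vx * t)

for fielder_x, label in [(landing_x, "at the landing spot"),
                         (landing_x + 15, "15 m too deep")]:
    # rate of climb of tan(angle) over three successive one-second windows
    rates = [tan_elevation(fielder_x, t + 1) - tan_elevation(fielder_x, t)
             for t in (0.5, 1.5, 2.5)]
    print(label, "->", [f"{r:.3f}" for r in rates], "per second")
```

Standing at the landing spot, the three rates come out identical; standing too deep, they sag and eventually go negative (the ball visibly starts dropping in your view, so you run in). The point being: a fielder can find the landing spot just by nudging themselves until that climb rate stops drifting - no trajectory calculations, no stored variables, just a maintained visual relationship.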

The final piece I could glean from this is a report in Scientific American about the Human Brain Project (HBP), a $1.3 billion project launched by the EU in 2013, made with the goal of simulating the entire human brain on a supercomputer. Said project went on to become a "brain wreck" less than two years in (and eight years before its 2023 deadline) - a "brain wreck" Epstein implicitly blames on the whole thing being guided by the IP metaphor.

Said "brain wreck" is a good place to cap this section off - the essay is something I recommend reading for yourself (even if I do feel its arguments aren't particularly strong), and its not really the main focus of this little ramblefest. Anyways, onto my personal thoughts.

Some Personal Thoughts

Personally, I suspect the AI bubble's made the public a lot less receptive to the IP metaphor these days, for a few reasons:

  1. Artificial Idiocy

The entire bubble was sold as a path to computers with human-like, if not godlike intelligence - artificial thinkers smarter than the best human geniuses, art generators better than the best human virtuosos, et cetera. Hell, the AIs at the centre of this bubble are running on neural networks, whose functioning is based on our current understanding of how the brain works. [Missed this incomplete sentence first time around :P]

What we instead got was Google telling us to eat rocks and put glue in pizza, chatbots hallucinating everything under the fucking sun, and art generators drowning the entire fucking internet in pure unfiltered slop, identifiable in the uniquely AI-like errors it makes. And all whilst burning through truly unholy amounts of power and receiving frankly embarrassing levels of hype in the process.

(Quick sidenote: Even a local model running on some rando's GPU is a power-hog compared to what it's trying to imitate - digging around online indicates your brain uses only about 20 watts of power to do what it does.)

With the parade of artificial stupidity the bubble's given us, I wouldn't fault anyone for coming to believe the brain isn't like a computer at all.

  2. Inhuman Learning

Additionally, AI bros have repeatedly and incessantly claimed that AIs are creative and that they learn like humans, usually in response to complaints about the Biblical amounts of art stolen for AI datasets.

Said claims are, of course, flat-out bullshit - last I checked, human artists only need a few references to actually produce something good and original, whilst your average LLM will produce nothing but slop no matter how many terabytes upon terabytes of data you throw at its dataset.

This all arguably falls under the "Artificial Idiocy" heading, but it felt necessary to point out - these things lack the creativity or learning capabilities of humans, and I wouldn't blame anyone for taking that to mean that brains are uniquely unlike computers.

  3. Eau de Tech Asshole

Given how much public resentment the AI bubble has built towards the tech industry (which I covered in my previous post), my gut instinct's telling me that the IP metaphor is also starting to be viewed in a harsher, more "tech asshole-ish" light - not merely a reductive/incorrect view of human cognition, but a sign you put tech over human lives, or don't see other people as human.

Of course, AI providing a general parade of the absolute worst scumbaggery we know (with Mira Murati being an anti-artist scumbag and Sam Altman being a general creep as the biggest examples) is probably feeding that perception, alongside all the active attempts by AI bros to mimic real artists (exhibit A, exhibit B).

 

(Gonna expand on a comment I whipped out yesterday - feel free to read it for more context)


At this point, it's already well known AI bros are crawling up everyone's ass and scraping whatever shit they can find - robots.txt, honesty and basic decency be damned.

The good news is that services have started popping up to actively cockblock AI bros' digital smash-and-grabs - Cloudflare made waves when they began offering blocking services for their customers, and Spawning AI's recently put out a beta for an auto-blocking service of their own called Kudurru.

(Sidenote: Pretty clever of them to call it Kudurru.)

I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to actively feed complete garbage to scrapers instead, whether by lacing webpages with junk text to mislead them or by prompt injecting the shit out of the AIs themselves.

The main advantage I can see is subtlety - it'll be obvious to AI corps if their scrapers are given a 403 Forbidden and told to fuck off, but the chance of them noticing that their scrapers are getting fed complete bullshit isn't that high - especially considering AI bros aren't the brightest bulbs in the shed.
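(For illustration's sake, here's roughly what the "feed them garbage" route could look like - a minimal Python/Flask sketch with a hand-rolled list of AI-crawler user-agent strings. The UA list and the junk generator are my own stand-ins, and actual services like Kudurru are presumably far more sophisticated; this is a sketch of the idea, not of their implementation.)

```python
import random
from flask import Flask, request

app = Flask(__name__)

# User-agent substrings associated with AI crawlers - an illustrative,
# non-exhaustive list; a real deployment would keep this current.
AI_SCRAPER_UAS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider", "PerplexityBot")

FILLER = ["pizza", "glue", "rocks", "synergy", "blockchain", "paradigm"]

def garbage_page() -> str:
    """Build a plausible-looking page of pure nonsense (prompt-injection
    strings could be mixed in here too) for any scraper that asks."""
    junk = " ".join(random.choices(FILLER, k=500))
    return f"<html><body><p>{junk}</p></body></html>"

@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_SCRAPER_UAS):
        # Crucially, still a 200 OK - no 403 to tip the scraper off
        return garbage_page()
    return "<html><body><p>The actual page, for actual humans.</p></body></html>"

if __name__ == "__main__":
    app.run(port=8080)
```

The obvious catch is that this only fools scrapers that announce themselves honestly via User-Agent; the ones spoofing a normal browser would need behavioural or IP-range fingerprinting instead, which is presumably where the commercial services earn their keep.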

Arguably, AI art generators are already getting sabotaged this way to a strong extent - Glaze and Nightshade aside, ChatGPT et al's slop-nami has provided a lot of opportunities for AI-generated garbage (text, music, art, etcetera) to get scraped and poison AI datasets in the process.

How effective this will be against the "summarise this shit for me" chatbots which inspired this overlong shitpost, I'm not 100% sure, but between one proven case of prompt injection and AI's dogshit security record, I expect effectiveness will be pretty high.

 

I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.

Most of the time I hear “AI” mentioned on Icelandic mainstream media or from people I know outside of tech, it’s being used as to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio short hand for uninventive, clichéd, and formulaic.

babe wake up the butlerian jihad is coming
