this post was submitted on 03 Nov 2025
23 points (100.0% liked)

TechTakes

2296 readers
44 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] o7___o7@awful.systems 24 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

Mozilla destroys the 20-year-old volunteer community that handled Japanese localization and replaces it with a chatbot. It compounds this by deleting years of work with zero warning. Adding insult to insult, Mozilla then rolls a critical failure on "reading the room."

Would you be interested to hop on a call with us to talk about this further?

https://support.mozilla.org/en-US/forums/contributors/717446

[–] smiletolerantly@awful.systems 16 points 2 weeks ago

Oh what the fuck why can Mozilla not just STOP. Just... STOP. Honestly sick of this shit.

[–] a_certain_individual@lemmy.world 22 points 2 weeks ago (1 children)

Boss at new job just told me we’re going all-in on AI and I need to take a core role in the project

They want to give LLMs access to our wildly insecure mass of SQL servers filled with numeric data

Security a non factor

😂🔫

[–] CinnasVerses@awful.systems 16 points 2 weeks ago (1 children)

Sounds like the thing to do is to say yes boss, get Baldur Bjarnason's book on business risks and talk to legal, then discover some concerns that just need the boss' sign-off in writing.

[–] rook@awful.systems 18 points 1 week ago (9 children)

KeepassXC (my password manager of choice) are “experimenting” with ai code assistants 🫩

https://www.reddit.com/r/KeePass/comments/1lnvw6q/comment/n0jg8ae/

I'm a KeePassXC maintainer. The Copilot PRs are a test drive to speed up the development process. For now, it's just a playground and most of the PRs are simple fixes for existing issues with very limited reach. None of the PRs are merged without being reviewed, tested, and, if necessary, amended by a human developer. This is how it is now and how it will continue to be should we choose to go on with this. We prefer to be transparent about the use of AI, so we chose to go the PR route. We could have also done it locally and nobody would ever know. That's probably how most projects work these days. We might publish a blog article soon with some more details.

The trace of petulance in the response… “we could have done it secretly, that’s how most projects do it” is not the kind of attitude I’m happy to see attached to a security critical piece of software.

[–] dgerard@awful.systems 10 points 1 week ago

KeepArseNX

lead dev: Jia Tan

[–] BigMuffN69@awful.systems 18 points 2 weeks ago (2 children)

So, today in AI hype, we are going back to chess engines!

Ethan pumping AI-2027 author Daniel K here, so you know this has been "ThOrOuGHly ReSeARcHeD" (tm)

Taking it at face value, I thought this was quite shocking! Beating a super GM with queen odds seems impossible for the best engines that I know of!! But the first * here is that the chart presented is not in classical format. Still, QRR odds beating 1600 players seems very strange, even if weird time odds shenanigans are happening. So I tried this myself and to my surprise, I went 3-0 against Lc0 in different odds QRR, QR, QN, which now means according to this absolutely laughable chart that I am comparable to a 2200+ player!

(Spoiler: I am very much NOT a 2200 player... or a 2000 player... or a 1600 player)
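For reference, the standard Elo model those rating claims implicitly lean on predicts a player's expected score from the rating gap alone. A minimal sketch (this is the textbook Elo formula, not anything from the LW post):

```python
def elo_expected_score(rating_a: int, rating_b: int) -> float:
    """Expected score (win = 1, draw = 0.5) for player A vs player B
    under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 600-point gap (roughly a 1600 vs a 2200 player) gives the weaker
# side about a 3% expected score -- which is why "I went 3-0, so the
# chart says I'm 2200+" is a sign the chart is broken, not the player.
print(round(elo_expected_score(1600, 2200), 3))  # → 0.031
```

Of course, odds games violate the model's core assumption (both players starting from the same position), which is exactly why extrapolating Elo numbers from them is a chart crime.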

And to my complete lack of surprise, this chart crime originated in a LW post, with the creator commenting here w/ "pls do not share this without context, I think the data might be flawed" due to small sample size at higher elos and also the fact that people are probably playing until they get their first win and then stopping.

Luckily absolute garbage methodologies will not stop Daniel K from sharing the latest in Chess engine news.

But wait, why are LWers obsessed with the latest Chess engine results? Ofc its because they want to make some point about AI escaping human control even if humans start with a material advantage. We are going back to Legacy Yud posting with this one my friends. Applying RL to chess is a straight shot to applying RL to skynet to checkmate humanity. You have been warned!

LW link below if anyone wants to stare into the abyss.

https://www.lesswrong.com/posts/eQvNBwaxyqQ5GAdyx/some-data-from-leelapieceodds

[–] lagrangeinterpolator@awful.systems 14 points 2 weeks ago (1 children)

One of the core beliefs of rationalism is that Intelligence™ is the sole determinant of outcomes, overriding resource imbalances, structural factors, or even just plain old luck. For example, since Elon Musk is so rich, that must be because he is very Intelligent™, despite all of the demonstrably idiotic things he has said over the years. So, even in an artificial scenario like chess, they cannot accept the fact that no amount of Intelligence™ can make up for a large material imbalance between the players.

There was a sneer two years ago about this exact question. I can't blame the rationalists though. The concept of using external sources outside of their bubble is quite unfamiliar to them.

[–] swlabr@awful.systems 10 points 2 weeks ago (2 children)

two years ago

🪦👨🏼➡️👴🏼

since Elon Musk is so rich, that must be because he is very Intelligent™

Will never be able to understand why these mfs don’t see this as the unga bunga stupid ass caveman belief that it is.

[–] mirrorwitch@awful.systems 10 points 2 weeks ago

cos it implies that my overvalued salary as an IT monkey for parasite companies of no social value is not because I sold my soul to capital owners, it's because I've always been a special little boy who got gold stars in school

[–] scruiser@awful.systems 11 points 2 weeks ago

I was wondering why Eliezer picked chess of all things in his latest "parable". Even among the lesswrong community, chess playing as a useful analogy for general intelligence has been picked apart. But seeing that this is recent half-assed lesswrong research, that would explain the renewed interest in it.

[–] sailor_sega_saturn@awful.systems 16 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

NotAwfulTech and AwfulTech converged with some ffmpeg drama on twitter over the past few days starting here and still ongoing. This is about an AI generated security report by Google's "Big Sleep" (with no corresponding Google authored fix, AI or otherwise). Hackernews discussed it here. Looking at ffmpeg's security page there have been around 24 bigsleep reports fixed.

ffmpeg pointed out a lot of stuff along the lines of:

  • They are volunteers
  • They have not enough money
  • Certain companies that do use ffmpeg and file security reports also have a lot of money
  • Certain ffmpeg developers are willing to enter consulting roles for companies in exchange for money
  • Their product has no warranty
  • Reviewing LLM generated security bugs royally sucks
  • They're really just in this for the video codecs moreso than treating every single Use-After-Free bug as a drop-everything emergency
  • Making the first 20 frames of certain Rebel Assault videos slightly more accurate is awesome
  • Think it could be more secure? Patches welcome.
  • They did fix the security report
  • They do take security reports seriously
  • You should not run ffmpeg "in production" if you don't know what you're doing.

All very reasonable points but with the reactions to their tweets you'd think they had proposed killing puppies or something.

A lot of people seem to forget this part of open source software licenses:

BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW

Or that venerable old C code will have memory safety issues for that matter.

It's weird that people are freaking out about some UAFs in a C library. This should really be dealt with in enterprise environments via sandboxing / filesystem containers / aslr / control flow integrity / non-executable memory enforcement / only compiling the codecs you need... and oh gee a lot of those improvements could be upstreamed!
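The "only compiling the codecs you need" point is concrete: ffmpeg's build system supports whitelisting components. A hedged sketch (the flags below are real `./configure` options; the exact decoder/demuxer/parser names you enable depend on what your deployment actually ingests):

```shell
# Build an ffmpeg that can decode only H.264 from local MP4/MOV files,
# nothing else. Starting from --disable-everything shrinks the attack
# surface: a codec that was never compiled in can't be exploited.
./configure \
  --disable-everything \
  --disable-network \
  --enable-protocol=file \
  --enable-demuxer=mov \
  --enable-decoder=h264 \
  --enable-parser=h264
make -j"$(nproc)"
```

Pair that with an OS-level sandbox (seccomp, containers, no filesystem access beyond the input) and a UAF in some 90s game codec stops being a drop-everything emergency.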

[–] swlabr@awful.systems 12 points 2 weeks ago (3 children)

For a moment there I was worried that ffmpeg had turned fash.

Anyway, amazing job ffmpeg, great responses. No notes

[–] saucerwizard@awful.systems 15 points 2 weeks ago

Watching another rationalist type on twitter become addicted to meth. You guys weren’t joking.

(no idea who - just going by the subtweets).

[–] nfultz@awful.systems 14 points 1 week ago (4 children)

“I think the AI slop is great. I think culturally, it’s a good thing that it happened, because one of the things that drove people to start really caring about artists again in 2024 was the AI slop. I think everything happens for a reason,” she said in a recent interview with Time. “Most of the album is sort of about me being a bit of a Diogenes about the ills of modernity while still celebrating them.”

https://www.salon.com/2025/11/07/grimes-ushers-in-a-new-era-of-internet-infestation/

JFC what world does she live in

[–] CinnasVerses@awful.systems 13 points 1 week ago* (last edited 1 week ago) (2 children)

Grimes was married to Elon Musk and performs at events for 'heretical truth-tellers' sponsored by Peter Thiel

[–] TinyTimmyTokyo@awful.systems 11 points 1 week ago

When she's not attending the weddings of people like Curtis Yarvin.

[–] gerikson@awful.systems 14 points 2 weeks ago (7 children)

Big Yud posts another "banger"[1], and for once the target audience isn't impressed:

https://www.lesswrong.com/posts/3q8uu2k6AfaLAupvL/the-tale-of-the-top-tier-intellect#comments

I skimmed it. It's terrible. It's a long-winded parable about some middling chess player who's convinced he's actually good, and a Socratic strawman in the form of a young woman who needles him.

Contains such Austen-esque gems as this:

If you had measured the speed at which the resulting gossip had propagated across Skewers, Washington -- measured it very carefully, and with sufficiently fine instrumentation -- it might have been found to travel faster than the speed of light in vacuum.

In the end, both strawmen are killed by AI-controlled mosquito drones, leaving everyone else feeling relieved.

Commenters seem miffed that Yud isn't cleaning up his act and writing more coherently so as to warn the world of Big Bad AI, but apparently he just can't help himself.


[1] if by banger you mean a long, tedious turd. 42 minute read!

[–] zogwarg@awful.systems 16 points 2 weeks ago (1 children)

Some juicy extracts:

Soon enough then the appointed day came to pass, that Mr. Assi began playing some of the town's players, defeating them all without exception. Mr. Assi did sometimes let some of the youngest children take a piece or two, of his, and get very excited about that, but he did not go so far as to let them win. It wasn't even so much that Mr. Assi had his pride, although he did, but that he also had his honesty; Mr. Assi would have felt bad about deceiving anyone in that way, even a child, almost as if children were people.

Yud: "Woe is me, a child who was lied to!"

Tessa sighed performatively. "It really is a classic midwit trap, Mr. Humman, to be smart enough to spout out words about possible complications, until you've counterargued any truth you don't want to hear. But not smart enough to know how to think through those complications, and see how the unpleasant truth is true anyways, after all the realistic details are taken into account." [...] "Why, of course it's the same," said Mr. Humman. "You'd know that for yourself, if you were a top-tier chess-player. The thing you're not realizing, young lady, is that no matter how many fancy words you use, they won't be as complicated as real reality, which is infinitely complicated. And therefore, all these things you are saying, which are less than infinitely complicated, must be wrong."

Your flaw, dear Yud, isn't that your thoughts cannot out-compete the complexity of reality; it's that yours is a new complexity untethered from the original. To you, retorts to your wild sci-fi speculations are just minor complications brought by midwits; you very often get the science critically wrong, but expect to still be taken seriously! (One might say you share a lot with Humman, misquoting and misapplying "econ 101".)

"Look, Mr. Humman. You may not be the best chess-player in the world, but you are above average. [... Blah blah IQ blah blah ...] You ought to be smart enough to understand this idea."

Funnily enough, the very best chess players, like Nakamura or Carlsen, will readily call themselves dumbasses outside of chess.

"Well, by coincidence, that is sort of the topic of the book I'm reading now," said Tessa. "It's about Artificial Intelligence -- artificial super-intelligence, rather. The authors say that if anyone on Earth builds anything like that, everyone everywhere will die. All at the same time, they obviously mean. And that book is a few years old, now! I'm a little worried about all the things the news is saying, about AI and AI companies, and I think everyone else should be a little worried too."

Of course this is a meandering plug for his book!

"The authors don't mean it as a joke, and I don't think everyone dying is actually funny," said the woman, allowing just enough emotion into her voice to make it clear that the early death of her and her family and everyone she knew was not a socially acceptable thing to find funny. "Why is it obviously wrong?"

They aren't laughing at everyone dying, they're laughing at you. I would be more charitable with you if the religion you cultivate were not so dangerous; most of your anguish is self-inflicted.

"So there's no sense in which you're smarter than a squirrel?" she said. "Because by default, any vaguely plausible sequence of words that sounds it can prove that machine superintelligence can't possibly be smarter than a human, will prove too much, and will also argue that a human can't be smarter than a squirrel."

Importantly, you often portray ASI as being able to manipulate humans into doing any number of random shit, and you have an unhealthy association of intelligence with manipulation. I'm quite certain I couldn't get a squirrel to do anything I wanted.

"You're not worried about how an ASI [...] beyond what humans have in the way of vision and hearing and spatial visualization of 3D rotating shapes.

Is that... an incel shape-rotator reference?

[–] Soyweiser@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

If you had measured the speed at which the resulting gossip had propagated across Skewers, Washington – measured it very carefully, and with sufficiently fine instrumentation – it might have been found to travel faster than the speed of light in vacuum.

How do you write like this? How do you pick a normal joking observation and then add more words to make it worse?

[–] sinedpick@awful.systems 13 points 2 weeks ago (1 children)

First comment: "the world is bottlenecked by people who just don't get the simple and obvious fact that we should sort everyone by IQ and decide their future with it"

No, the world is bottlenecked by idiots who treat everything as an optimization problem.

[–] jmjm@mstdn.social 13 points 2 weeks ago

@sinedpick @awful.systems @gerikson @awful.systems

The world is hamstrung by people who only believe there is one kind of intelligence, it can be measured linearly, and it is the sole determinant of human value.

The Venn diagram of these people and closet eugenicists looks like a circle if you squint at it.

[–] lagrangeinterpolator@awful.systems 12 points 2 weeks ago (1 children)

The dumb strawman protagonist is called "Mr. Humman" and the ASI villain is called "Mr. Assi". I don't think any parody writer trying to make fun of rationalist writing could come up with something this bad.

The funniest comment is the one pointing out how Eliezer screws up so many basic facts about chess that even an amateur player can see all the problems. Now, if only the commenter looked around a little further and realized that Eliezer is bullshitting about everything else as well.

[–] swlabr@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

42 minute read

Maybe if you're a scrub. 19 minutes baby!!! And that included the minute or so that I thought about copypasting it into a text editor so I could highlight portions to sneer at. Best part of this story is that it is chess themed and takes place in "Skewers", Washington, vs. "Forks", Washington, as made famous by Twilight.

Anyway, what a pile of shit. I choose not to read Yud's stuff most of the time, but I felt that I might do this one. What do you get if you mix smashboards, goofus and gallant strips, that copypasta about needing a high IQ to like rick and morty, and the worst aspects of woody allen? This!

My summary:

Part 1. A chess player, "Mr. Humman", plays a match against "Mr. Assi" and loses. He has a conversation with a romantic interest, "Socratessa", or Tessa for short, about whether or not you can say if someone is better than another in chess. Often cited examples of other players are "Mr. Chimzee" and "Mr. Neumann".

Both "Humman" and "Socratessa" are strawmen. "Socratessa" is described as thus:

One of the less polite young ladies of the town, whom some might have called a troll,

Humman, of course, talks down to her, like so:

"Oh, my dear young lady," Mr. Humman said, quite kindly as was his habit when talking to pretty women potentially inside his self-assessed strike zone

I hate to give credit to Yud here for anything, so here's what I'll say: This characterisation of Humman is so douchey that it's completely transparent that Yud doesn't want you to like this guy. Yud's methodology was to have Humman make strawman-level arguments and portray him as kind of a creep. However, I think what actually happened is that Yud has accidentally replicated arguments/johns you might hear from a smash scrub about why they are not a scrub, but are actually a good player, just with a veneer of chess. So I don't like this character, but not because of Yud's intent.

Socratessa (Tessa for short) is, as gerikson points out, a Socratic strawman. That's it. It's unclear why Yud describes her as either a troll or pretty. He should have just said she was gallant.* She argues that Elo ratings exist and are good enough at predicting whether one player will beat another. Of course, Humman disagrees, and as the goofus, must be wrong.*

The story should end here, as it has fulfilled its mission as an obvious analog to Yud's whole thing about whether or not you can measure intelligence or say someone is smarter than another.

Part 2. Humman and Socratessa argue about whether or not you can measure intelligence or say someone is smarter than another.

E: if you were wondering, yes, there is eugenics in the story.

E2: forgot to tie up some allusions, specifically the g&g of it all. Marked added sentences with a *.

[–] zogwarg@awful.systems 13 points 1 week ago* (last edited 1 week ago) (2 children)

Some changes to Advent of Code this year: it will only have 12 days of puzzles, and will no longer have a global leaderboard, according to the FAQ:

Why did the number of days per event change?

It takes a ton of my free time every year to run Advent of Code, and building the puzzles accounts for the majority of that time. After keeping a consistent schedule for ten years(!), I needed a change. The puzzles still start on December 1st so that the day numbers make sense (Day 1 = Dec 1), and puzzles come out every day (ending mid-December).

Scaling it down a bit rather than completely burning out is nice, I think.

What happened to the global leaderboard?

The global leaderboard was one of the largest sources of stress for me, for the infrastructure, and for many users. People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks. Many people incorrectly concluded that they were somehow worse programmers because their own times didn't compare. What started as a fun feature in 2015 became an ever-growing problem, and so, after ten years of Advent of Code, I removed the global leaderboard. (However, I've made it so you can share a read-only view of your private leaderboard. Please don't use this feature or data to create a "new" global leaderboard.)

While trying to get a fast time on a private leaderboard, may I use AI / watch streamers / check the solution threads / ask a friend for help / etc?

If you are a member of any private leaderboards, you should ask the people that run them what their expectations are of their members. If you don't agree with those expectations, you should find a new private leaderboard or start your own! Private leaderboards might have rules like maximum runtime, allowed programming language, what time you can first open the puzzle, what tools you can use, or whether you have to wear a silly hat while working.

Probably the most positive change here. It's a bit of a shame we can't have nice things, but there's no real way to police stuff like people using AI for leaderboard times. Keeping only the private leaderboards, for smaller groups of people that can set their own expectations, is unfortunately the only pragmatic thing to do.

Should I use AI to solve Advent of Code puzzles?

No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

It's nice to know the creator (Eric Wastl) has a good head on his shoulders.

I feel like the private leaderboards are also more in keeping with the spirit of the thing. You can't really have a friendly competition with a legion of complete strangers that you have no interaction with outside of comparing final times. Even when there's nothing on the line the consequences for cheating or being a dick are nonexistent, whereas in a private group you have to deal with all your friends knowing you're an asshole going forward.

[–] o7___o7@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

A redditor posted the latest Pivot to AI propaganda to r/betteroffline, where it currently has around 560 votes. This upset and confused a great many prompt enthusiasts in the comments, which goes to show that a kicked dog yelps.

https://old.reddit.com/r/BetterOffline/comments/1onwcdq/using_generative_ai_youre_prompting_with_hitler/

[–] swlabr@awful.systems 11 points 2 weeks ago (2 children)

ITT: new synonym for promptfondler: “brain cuck”

[–] BigMuffN69@awful.systems 10 points 2 weeks ago

Pls dont kick dogs 😭

[–] gerikson@awful.systems 13 points 2 weeks ago (4 children)

Thoughts / notes on Nostr? A local on a tech site is pushing it semi-hard, and I just remember it being mentioned in the same breath as Bluesky back in the day. It ticks a lot of techfash boxes - decentralized, "uncensorable", has Bitcoin's stupid Lightning protocol built in.

[–] mawhrin@awful.systems 15 points 2 weeks ago (3 children)

nostr neatly covers all obsessions of dorsey. it's literally fash-tech (original dev, fiatjaf, is a right-wing nutjob; and current development is driven by alex gleason of the truth dot social fame), deliberately designed to be impossible to moderate (“censorship-resilient”); the place is full of fascists, promptfondlers and crypto dudes.

[–] fullsquare@awful.systems 12 points 2 weeks ago (1 children)

exploding-heads, openly trumpist lemmy instance, fucked off there when admin got bored of baiting normal people, make of that what you will

[–] swlabr@awful.systems 11 points 2 weeks ago

Jack Dorsey seems to like throwing money at it:

Jack Dorsey, the co-founder of Twitter, has endorsed and financially supported the development of Nostr by donating approximately $250,000 worth of Bitcoin to the developers of the project in 2023, as well as a $10 million cash donation to a Nostr development collective in 2025.

(source: wiki)

[–] swlabr@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago)

More wiki drama: Jimbo tries to both sides the gaza genocide

E: just for clarity. Jimbo is the canon nickname of founder Jimmy Wales.

And just to describe a little more of what has happened, as far as I can tell: Wales is reportedly being interviewed about Wikipedia (probably due to the Grokipedia stuff). He was asked in a "high profile media interview" (his words, see first link) about the Gaza genocide article, and said that it "fails to meet our high standards and needs immediate attention". Part of that attention is that they've locked the article, and Jimbo has joined the talk page. His argument probably boils down to this comment he left:

Let's start with this quote from WP:NPOV: "Avoid stating seriously contested assertions as facts. If different reliable sources make conflicting assertions about a matter, treat these assertions as opinions rather than facts, and do not present them as direct statements." Surely you aren't going to argue that the core assertion of the article is not seriously contested?

The "core assertion" is contained in the lede:

The Gaza genocide is the ongoing, intentional, and systematic destruction of the Palestinian people in the Gaza Strip carried out by Israel during the Gaza war.

i.e. that there is a genocide happening at all.

Gizmodo article, in case this comment sucks in some way and you wanted to read a different report.

[–] rook@awful.systems 11 points 1 week ago (5 children)

It’s everyone’s favourite alternate browser developer back again, lamenting how mean some tech folk are and how cruelly they threaten and oppress certain groups of people.

Which groups? Oh, you know the ones 😉

(spoiler) A screenshot of a twitter post by Andreas Kling, reading:

In recent years l've attended multiple software conference talks that had unrelated extreme political rhetoric in slides, such as "fuck [name]" and "punch [group]".

Whenever this happened, some of the audience would clap and cheer, l'd roll my eyes, and the talk would get back on topic.

Fast-forward to today, and look at how many people in our industry are openly celebrating the murder of someone they decided was a "nazi" and "fascist". Turns out these people were more serious than I thought.

As someone who's repeatedly been called a "nazi" and "fascist" myself for disagreements with far-left ideology, I know how easily those labels get thrown around. And honestly, this is making me seriously reconsider which conferences I attend.

There's a hateful rot within our industry. It shouldn't be socially acceptable to cheer for murder. We need to do more than roll our eyes.

Source: https://goblin.band/notes/aeui8zv7rw80c08v

[–] sc_griffith@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (11 children)

apologies for just linking to my own bsky post but I'm lazy: https://bsky.app/profile/scgriffith.bsky.social/post/3m4qjnkeyls23

tl;dr I've gotten a bit suspicious that "AI users will be genocided" posts on reddit are a nazi op

[–] swlabr@awful.systems 10 points 2 weeks ago

not outside of the fascist playbook to claim that they are the real victims. The example that comes to mind is the myth of white genocide, but also literally any fascist rhetoric is like that.

It’s well trodden ground to say that genAI usage and support for genAI resonates with populist/reactionary/fascist themes in that it inherently devalues and dehumanises, and it promotes anti-intellectualism. If you can be replaced by AI, what worth do you have? And why think if the AI can do it for you?

So, of course this stuff is being echoed in spaces where the majority are ignorant of the nazi tilt. They can’t and don’t understand fascism on a structural level; they can only identify it when it’s trains and gas chambers.

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago (1 children)

Checked back on the smoldering dumpster fire that is Framework today.

Linux Community Ambassadors Tommi and Fraxinas have jumped ship, sneering the company's fash turn on the way out.

[–] o7___o7@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

There's a Charles Stross novel from 2018 where cultists take over the US government and begin a project to build enough computational capacity to summon horrors from beyond space-time (in space). It's called The Labyrinth Index and it's very good!

So anyway, this happened:

https://www.wsj.com/tech/ai/openai-isnt-yet-working-toward-an-ipo-cfo-says-58037472

Also, this:

https://bsky.app/profile/edzitron.com/post/3m4wrv2xak22x

[–] mirrorwitch@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Does anyone know who (presumably) Tor from the "Tor's Cabinet of Curiosities" Youtube channel is, and what's up with his ideological commitments? Somebody recommended me this video on some Wikipedia grifter, and I was enjoying it until suddenly (ca. 23:20) he name-drops Scott Alexander as “a writer whom I’m a big fan of”. I thought, should somebody tell him. Then I looked him up and the guy has an entire video on subtypes of rationalists, so he knows, and chose to present as a fan anyway. Huh. However, as far as a cursory glance goes, the channel doesn't seem to bat for, you know, "human biodiversity". (I haven't watched the rat video because I don't want to ruin my week)

[–] sc_griffith@awful.systems 10 points 2 weeks ago (4 children)

wild article about content scraping nonprofit common crawl

https://www.theatlantic.com/technology/2025/11/common-crawl-ai-training-data/684567/?gift=iWa_iB9lkw4UuiWbIbrWGQv84IP0_-K67yuVC013Fx4

tl;dr they've been faking deleting data upon request (in ways that I find very funny) and their head is noxious even for a tech bro

also is it just me or does SV have a particular gift for perverting the nonprofit concept

[–] fullsquare@awful.systems 10 points 2 weeks ago (3 children)

in terms of zitron fallout, there used to be a comment section at his blog, it's not there anymore

[–] fullsquare@awful.systems 10 points 2 weeks ago

fyi over the last couple of days firefox added perplexity as a search engine, must have come in as an update
