swlabr

joined 2 years ago
[–] swlabr@awful.systems 4 points 22 hours ago

Hey, at least it’s efficiently making number 2 on the side while spitting out user-prompted number 2s.

[–] swlabr@awful.systems 7 points 1 day ago

Derpadoid Burpateria

[–] swlabr@awful.systems 8 points 1 day ago (3 children)

Wow, that highlighting really emphasises the insidious, nefarious behaviour. This is only a hop, skip, and jump away from, what was it again? Rhomboid? Rheumatoid bactothefuture?

[–] swlabr@awful.systems 3 points 1 day ago* (last edited 1 day ago)

I read a review (the one hosted on the Ebert site) and it seems like this just falls into one of the patterns we’ve already seen when other people not steeped in the X-risk miasma engage with it. As in, what should be a documentary about how the AI industry is a bubble and that all the AI CEOs are grifters, deluded, or both, is instead a “somehow I managed to fall for Yud’s whole thing and am now spreading the word” type deal. Big sigh!

[–] swlabr@awful.systems 3 points 1 day ago

Nonono if it’s US backed then it’s capitalist and free market and good don’t you see /s

[–] swlabr@awful.systems 4 points 1 day ago (1 children)

Not engaging in debate club remains winning

[–] swlabr@awful.systems 5 points 1 day ago (1 children)

I feel like I nailed my guess

[–] swlabr@awful.systems 5 points 2 days ago

Reads like bad blaseball fanfic

[–] swlabr@awful.systems 6 points 2 days ago (5 children)

Pure speculation: my guess is that an “apocaloptimist” is just someone fully bought into all of the rationalist AI delulu. Specifically:

  • AGI is possible
  • AGI will solve all our current problems
  • A future where AGI ends humanity is possible/probable

and they add the extra belief, steeped in the grand tradition of liberal optimism, that we will solve the alignment problem and everything will be OK. Again, just guessing here.

 

Full doc title: “The AI Doc: Or How I Became an Apocaloptimist”

Per wiki:

The AI Doc: Or How I Became an Apocaloptimist is a 2026 American documentary film directed by Daniel Roher and Charlie Tyrell. It is produced by the Academy Award-winning teams behind Everything Everywhere All at Once (Daniel Kwan and Jonathan Wang) and Navalny (Shane Boris and Diane Becker).

What to say here? This is a doc being produced by the producer and one of the directors of Everything Everywhere All At Once, who notably have been making efforts to, uh, negotiate? I guess? with AI companies vis-à-vis making movies. Anyway, the title is a piece of shit, and this trailer makes it look like this is just critihype, the movie. I guess we’ll hear more about it in the coming month.

Really interesting that they frame this as brought about by thinking about the director’s child, given Yud’s recent comments about how one should raise a daughter if one held certain beliefs about AI.

[–] swlabr@awful.systems 9 points 1 week ago

obligatory: If Books Could Kill did an ep on his big book "Sapiens": https://www.buzzsprout.com/2040953/episodes/18220972-sapiens

[–] swlabr@awful.systems 17 points 1 week ago

fuck this tweet and fuck yud

[–] swlabr@awful.systems 13 points 1 week ago (1 children)

It's his alt for people who want more yud spam, hence "all the yud." From his twitter bio:

This is my serious low-volume account. Follow @allTheYud for the rest.

 

Thought this essay had some interesting things to say. It speaks directly to the existence of tech takes overall, specifically those coming from the “oligarch-intellectuals”. Tried to quote some things to give an overview:

There is a certain disorienting thrill in witnessing, over the past few years, the profusion of bold, often baffling, occasionally horrifying ideas pouring from the ranks of America’s tech elite.

To write off these founders and executives as mere showmen—more “public offering” than “public intellectual”—would be a misreading. For one, they manufacture ideas with assembly-line efficiency: their blog posts, podcasts, and Substacks arrive with the subtlety of freight trains. And their “hot takes,” despite vulgar packaging, are often grounded in distinct philosophical traditions. Thus, what appears as intellectual fast food – the ultra-processed thought-nuggets deep fried in venture capital – often conceals wholesome ingredients sourced from a gourmet pantry of quite some sophistication.

Today, it’s increasingly clear that it’s the tech oligarchs — not their algorithmically-steered platforms—who present the greater danger. Their arsenal combines three deadly implements: plutocratic gravity (fortunes so vast they distort reality’s basic physics), oracular authority (their technological visions treated as inevitable prophecy), and platform sovereignty (ownership of the digital intersections where society’s conversation unfolds). Musk’s takeover of Twitter (now X), Andreessen’s strategic investments into Substack, Peter Thiel’s courting of Rumble, the conservative YouTube: they’ve colonized both the medium and the message, the system and the lifeworld.

Edit: this was linked closer to its original publish date here

 

Peep the signatories lol.

Edit: based on some of the messages left, I think many, if not most, of these signatories are just generally opposed to AI usage (good) rather than the basilisk of it all. But yeah, there are some good names in this.

 

Hi folks, another shitty story from the slop-pocalypse ((AI-)slopalypse?).

Archive link

Article from Billboard, archive

NB: I think this story is bullshit. I imagine some parts are true, but there's no concrete source given for the "$3 million" figure. So it's my speculation that this story is hype cooked up by Suno (the AI company enabling all this) and thrown at publishers for an easy headline. Also, the human behind this has their name spelled differently in the two articles, so clearly some quality journalism is happening.

 

Originally posted to the stubsack, but it makes more sense as a top-level post.

 

(Archive)

Tickled pink that BI has decided to platform the AI safety chuds. OFC, the more probable reason of “more dosh” gets mentioned, but most of the article is about how Anthropic is more receptive to addressing AI safety and alignment.

 

Burns said the driving force behind the Runway deal was to allow filmmakers to “make movies and television shows we’d otherwise never make. We can’t make it for $100 million, but we’d make it for $50 million because of AI… We’re banging around the art of the possible. Let’s try some stuff, see what sticks.”

read: "I huffed my own farts and passed out. This gave me a dream where we made a film via promptfondling. I decided that I'll make a press release with made up numbers based on that dream."

As reported by New York Magazine: “With a library as large as Lionsgate’s, they could use Runway to repackage and resell what the studio already owned, adjusting tone, format and rating to generate a softer cut for a younger audience or convert a live-action film into a cartoon.”

read: "There's no need to do requels like disney does. The serfs will gobble the slop and they'll like it. After all, why risk creating new jobs or any creative output when we could just melt the ice caps instead?"

As for another example of how the studio can use AI, Burns said to consider this scenario: “We have this movie we’re trying to decide whether to green-light. There’s a 10-second shot — 10,000 soldiers on a hillside with a bunch of horses in a snowstorm.” Using Runway’s AI technology, the studio can avoid a pricey film shoot that would cost millions and take a few days, and instead use AI to create the shot for about $10,000.

read: "Here's a bottle of my farts. Smell it. Feeling dizzy? Good. Now imagine a scenario where you're looking at your bank account, and instead of number go down, number go up. Isn't that nice? Have another whiff."

 

Take that, Saltman! Bet you never thought it was possible!

 

Original Title: Elizabeth Holmes’s Partner Has a New Blood-Testing Start-Up

Billy Evans has two children with the Theranos founder, who is in prison for fraud. He’s now trying to raise money for a testing company that promises “human health optimization.”

Original link: https://www.nytimes.com/2025/05/10/business/elizabeth-holmes-partner-blood-testing-startup.html

 

Original NYT title: Billionaire Airbnb Co-Founder Is Said to Take Role in Musk’s Government Initiative

 

Original link

OFC if there were any real sense or justice in the world, LLMs would be banned outright.
