Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] [email protected] 11 points 3 days ago (1 children)

I feel like some of the doomers are already setting things up to pivot when their biggest recent prophecy (AI 2027) fails:

From here:

(My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.)

It starts with some rationalist jargon to say the author agrees with AI 2027, just shifted one year later...

AI 2027 knows this. Their scenario is unrealistically smooth. If they added a couple weird, impactful events, it would be more realistic in its weirdness, but of course it would be simultaneously less realistic in that those particular events are unlikely to occur. This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control at the end of 2027, but the median narrative is probably around 2030 or 2031.

Further walking the timeline back, adding qualifiers and exceptions that the authors of AI 2027 somehow didn't explain before. Also, the reason AI 2027 didn't mention Trump blowing up the timeline by doing insane shit is that Scott (and maybe some of the other authors, idk) likes glazing Trump.

I expect the bottlenecks to pinch harder, and for 4x algorithmic progress to be an overestimate...

No shit, that is what every software engineer blogging about LLMs (even the credulous ones) says, even allowing that LLMs will get better at raw code writing! Maybe this author is more in touch with reality than most lesswrongers...

...but not by much.

Nope, they still have insane expectations.

Most of my disagreements are quibbles

Then why did you bother writing this? Anyway, I feel like this author has set themselves up to claim credit when it's December 2027 and none of AI 2027's predictions have come true. They'll exaggerate their "quibbles" into successful predictions of problems in the AI 2027 timeline, while overlooking the extent to which they agreed.

I'll give this author +10 bayes points for noticing Trump does unpredictable batshit stuff, and -100 for not realizing the real reason Scott didn't call any of that out in AI 2027.

[–] [email protected] 15 points 3 days ago* (last edited 3 days ago) (3 children)

I'll give this author +10 bayes points for noticing Trump does unpredictable batshit stuff

+10 bayes points

Has someone on LW already proposed a BayesCoin, or have I just figured out how to steal lunch money from all the rationalists at once?

[–] [email protected] 5 points 2 days ago (1 children)

it's a fucking Harry Potter reference

[–] [email protected] 6 points 2 days ago (1 children)

HP fanfic but house points are crypto and the chocolate frog cards are NFTs tracked on a magical blockchain

[–] [email protected] 5 points 2 days ago* (last edited 2 days ago)

Non Fungible Toads

Edit: this is a non sequitur, but my wife just shared this with me and it is delightful

[–] [email protected] 9 points 2 days ago

Without looking it up I can tell you that coin already exists and its value has already crashed

[–] [email protected] 9 points 3 days ago

With a name like that and lesswrong to springboard its popularity, BayesCoin should be good for at least one cycle of pump-and-dump/rug-pull.

Do some actual programming work (or at least write a "white paper") on tying it into a prediction market on the blockchain and you've got rationalist catnip. They should be all over it; you could do a few cycles of pumping and dumping before the final rug pull.
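And honestly, the "actual programming work" is about twenty lines. Here's a minimal sketch of Hanson's logarithmic market scoring rule (LMSR), the standard automated market maker behind most prediction markets and exactly the thing a BayesCoin white paper would pad out to forty pages. Function names, the liquidity parameter, and the example market are my own illustration; none of this comes from any actual BayesCoin.

```python
import math

B = 100.0  # liquidity parameter: higher = flatter prices, deeper market

def cost(q, b=B):
    # Hanson's LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
    return b * math.log(sum(math.exp(x / b) for x in q))

def price(q, i, b=B):
    # Instantaneous price of outcome i (a softmax over outstanding shares),
    # which doubles as the market's implied probability
    total = sum(math.exp(x / b) for x in q)
    return math.exp(q[i] / b) / total

def buy(q, i, shares, b=B):
    # A trader pays the change in the cost function caused by their trade
    after = list(q)
    after[i] += shares
    return cost(after, b) - cost(q, b)

# Two-outcome market: [AI 2027 wrong, AI 2027 right]
q = [0.0, 0.0]
print(price(q, 1))    # 0.5 -- market starts agnostic
print(buy(q, 1, 50))  # ~28.1 -- cost of pumping "right" with 50 shares
```

Bonus punchline for the tokenomics section: the LMSR market maker's worst-case loss is b·ln(n) over n outcomes, i.e. whoever seeds the market is guaranteed to subsidize the traders even before the rug pull.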