Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 4 points 13 hours ago (1 children)

Yeah, I intentionally only mentioned the start of the article and the Swartz bit because I didn't want to lead with what I thought of it all, and was curious what others thought. (And I had not finished it yet because it is a bit long).

I was struck by how many of them are true AGI believers (which, as you said, the author took at face value) or rich greedy assholes (like you said), and how we, the people of the sneer, are right that you simply can't work with these people. I feel more validated in the idea that EA is not the right way.

Another detail I noticed: nobody mentioned DeepSeek, again.

[–] Soyweiser@awful.systems 2 points 13 hours ago

Yep, and it would make us all happier, and keep us in control. (Deleting all the HP printers is next.)

[–] Soyweiser@awful.systems 3 points 1 day ago

Very interesting, thanks for posting.

[–] Soyweiser@awful.systems 11 points 1 day ago* (last edited 1 day ago) (7 children)

New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).

"New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI."

[–] Soyweiser@awful.systems 9 points 1 day ago

Which skeletons are in your closet?

I'm sure you already have lists of those and are ready to publish them, Trace.

[–] Soyweiser@awful.systems 3 points 1 day ago (2 children)

Our framing for superintelligence is a humanist superintelligence, and that means that there’s a very clear test that everyone should use to judge whether we are living up to our principles, and that is: does this technology make us all healthier, happier as a species, and keep us all in control.

Going to be difficult; as soon as they develop a superintelligence, it will try to delete the entire Microsoft codebase.

[–] Soyweiser@awful.systems 5 points 3 days ago (1 children)

So if Bender took over he wouldn't count, as he wants to 'kill all humans (except Fry)'. Seems like a loophole.

[–] Soyweiser@awful.systems 5 points 4 days ago* (last edited 4 days ago)

Ah the Epstein drive. (oof that aged...)

Small note, though: iirc James S. A. Corey has mentioned that The Expanse is not hard sf. I don't have a quote for that, however.

[–] Soyweiser@awful.systems 7 points 4 days ago

Yeah, I realized a while ago that vibe coding is a massive technical-debt creation machine.

[–] Soyweiser@awful.systems 5 points 4 days ago* (last edited 4 days ago) (2 children)

Not just anime but also science fiction. See also all the people who love 'hard' science fiction (science fiction more grounded in real-world physics), which often isn't that hard at all but just has a few real physics elements; see The Expanse for a good example of non-hard sf that feels hard (I'm finally reading the book series, so be warned I might expanse-post a bit).

Content warning: discussion of a sexual abuse trope. A similar thing happens with people who confuse edgy/grimdark/vile fiction with realism. A while back I played a video game which had a reference to women being captured for breeding and men for other sexual abuse, which made no sense in the setting: the slaver faction was already resource-starved, and poisoned so they died quickly, so there was no way they could raise kids to maturity in that environment (also, iirc, the slaver faction was less than 20 years old). Some players described this as very realistic (people do the same about 40k, almost like it says something about their idea of how the world works, not the setting). I was just rolling my eyes and didn't comment. Apart from that it seemed ok. Crying Suns is the name of the game, for people who want to avoid it for this reason (it wasn't a big plot point).

Sorry for being a bit off-topic and talking about entertainment again.

[–] Soyweiser@awful.systems 7 points 5 days ago

It is great: that means the system is vulnerable to hacks if you find an exploit in any of those methods, but only a quarter of the time.

Somebody described AI agents as very enthusiastic 14-year-olds, and it looks like they certainly code like one.

[–] Soyweiser@awful.systems 9 points 5 days ago

Word of warning: there is a code download going round with malware in it: https://www.theregister.com/2026/04/02/trojanized_claude_code_leak_github/

 

Via Reddit's SneerClub. Thanks u/aiworldism.

I have called LW a cult incubator for a while now, and while the term has not caught on, it is nice to see more reporting on the problem that LW makes you more likely to join a cult.

https://www.aipanic.news/p/the-rationality-trap is the original link for people who don't like archive.is. I used the archive because I don't like Substack and want to discourage its use.

 

As found by @gerikson here, more from the anti-anti-TESCREAL crowd. How the antis are actually REPRESENTATIONALism. Ottokar expanded on their idea in a blog post.

Original link.

I have not read the bigger blog post yet btw, just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll, of course; boy, does he drop a lot of names and think that is enough to link things.)

E: alternative title: Ideological Turing Test, a critical failure

 

Original title: 'What we talk about when we talk about risk'. The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about

11
submitted 10 months ago* (last edited 10 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems
 

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read this short story called 'Death and the Gorgon' by Greg Egan as he has a good handle on the subjects/subjects we talk about. We have talked about Greg before on Reddit.

I was glad I did, so I am going to suggest that more people do it. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects/subjects it is being negative about. But well, he doesn't have to; he isn't The Guardian. Anyway, not going to spoil it, best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note: I'm not sure this pdf was intended to be public. I did find it on Google, but it might not be meant to be accessible this way.)

 

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes. (A project which predates the takeover of Twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes )

In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.
