Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 2 points 6 hours ago

That reminds me: remember there is an Xbox boycott going on, for all the gamers out there. (Saw that after the boycott was started, both Steam and Humble pushed Xbox game sales; the timing of which is very iffy.)

[–] Soyweiser@awful.systems 5 points 13 hours ago* (last edited 11 hours ago)

Deleted my earlier message, sorry, I called Scott out for not doing things he had in fact done. Even if the whole mods 'restricting her messages only now, after she went after Scott' thing is quite iffy. (LW people, write normally challenge: failed. "One upfront caveat. I am speaking about “Kat Woods” the public figure, not the person. If you read something here and think, “That’s not a true/nice statement about Kat Woods”, you should know that I would instead like you to think “That’s not a true/nice statement about the public persona Kat Woods, the real human with complex goals who I'm sure is actually really cool if I ever met her, appears to be cultivating.”" The idea is good, but this reads like a bit of a sovcit-style text and could have been replaced with 'I mean this not as an attack on her personally, I'm just doubting the effectiveness of her spammy posting style'.) (E: I do agree with them however, not the 'we should check if this is effective' part, but more that the posting style is low-effort, annoying, boring, dated, a bit cringe, etc.)

Also: Scott: 'Mods mods mods, kat spill my jice help hel help help'

[–] Soyweiser@awful.systems 4 points 13 hours ago (3 children)

I was very tempted to go 'don't think it is more than one Nobel guy, which is not great because of Nobel disease anyway. I could link to rationalwiki here, but that has come under threat because the people whose content you enjoy, Scott, started a lawsuit against them', but think that might be a bit culturewarry, and I also try not to react at the places we point towards, as that just leads to harassment-like behaviour. Also, Penrose is a Nobel prize winner who is against AGI stuff.

[–] Soyweiser@awful.systems 6 points 15 hours ago (3 children)

That is the one I was thinking of. The way the comments are phrased makes it seem like there are a lot of winners who are doomers. Guess Hinton is a one-man brigade.

[–] Soyweiser@awful.systems 10 points 15 hours ago (10 children)

Yeah, the financial illiteracy is quite high, on top of the rest. But don't worry, AI Nobel prize winners say it is possible!

(Are there multiple AI Nobel prize winners who are AI doomers?)

[–] Soyweiser@awful.systems 6 points 15 hours ago* (last edited 15 hours ago)

That gives me a 'you broke reddit'. jackrobertsofficial is also empty for me (and empty in an incognito window, so I'm not blocked). I got the feeling that might be what was going on. Even if I had a hard time finding his old work, as the news articles he links on his own site were dead.

E: tried on my phone and it appears. wtf, no wait. It is promoted; my adblockers just nuked it haha, my bad.

[–] Soyweiser@awful.systems 5 points 17 hours ago* (last edited 15 hours ago) (2 children)

Seems it was deleted. But due to reddit being reddit, I noticed it pointed towards the 'Swat Man: Volume 1 Kindle Edition' amazon link. (Which I have not reproduced here.)

E: ah nevermind aggressive adblockers deleted it on my end.

[–] Soyweiser@awful.systems 15 points 17 hours ago* (last edited 15 hours ago) (12 children)

and that’s how we should view the eventual AGI-LLMs, like wittle Elons that don’t need sleep.

Wonder how many people stopped being AI doomers after this. I use the same argument against AI doom.

E: the guy doing the most basic 'It really is easier to imagine the end of the world than the end of capitalism.' bit in the comments and having somebody just explode at him for 'not being able to imagine it properly' is a bit amusing. I know how it feels to have a massive, hard-to-control reaction over stuff like that, but oof, what are you doing man. And that poor anti-capitalist guy is in for a rude awakening when he discovers what kind of place r/ssc is.

E2: Scott is now going 'this clip is taken out of context!', not that the context improves it. (He claims he was explaining what others believe, not what he believes, but if that is so, why is he so aggressively defending the stance? Hope this Scott guy doesn't have a history of lying about his real beliefs.)

[–] Soyweiser@awful.systems 9 points 1 day ago* (last edited 1 day ago)

The amount of testing they would have needed to do just to get to that prompt. Wait, that gets added as a baseline constant cost on top of the energy cost of running the model: 3 x 12 x 2 x Y additional constant costs, assuming the prompt doesn't need to be updated every time the model is updated! (I'm starting to reference my own comments here.)

Claude NEVER repeats or translates song lyrics and politely refuses any request regarding reproduction, repetition, sharing, or translation of song lyrics.

New trick, everything online is a song lyric.

[–] Soyweiser@awful.systems 8 points 1 day ago (2 children)

More of a notedump than a sneer. I have been saying every now and then that there was research and stuff showing that LLMs require exponentially more effort for linear improvements. This post by Iris van Rooij (Professor of Computational Cognitive Science) mentions something like that (I said something slightly different, but the intractability proof/Ingenia theorem might be useful to look into): https://bsky.app/profile/irisvanrooij.bsky.social/post/3lpe5uuvlhk2c

[–] Soyweiser@awful.systems 3 points 1 day ago* (last edited 1 day ago)

The 'energy usage of a single chatgpt query' thing gets esp dubious when added to the 'bunch of older models under a trenchcoat' stuff. And that the plan is to check the output of an LLM by having a second LLM check it. Sure, the individual 3.0 model might only be 3 whatevers, but a real query uses a dozen of them, twice. (Being a bit vague with the numbers here as I have no access to any of those.)

E: also not compatible with Altman's story that thanking chatgpt costs millions. Which brings up another issue: a single query is part of a conversation, so now the 3 x 12 x 2 gets multiplied even more.
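The back-of-envelope multiplication above can be sketched as a tiny calculation. To be clear, every number here is a placeholder ("whatevers") taken from the comment, not a real measurement, and the multiplier names are my own labels:

```python
# Sketch of the stacked-multiplier argument: the advertised per-model
# energy cost gets multiplied by every hidden factor in a real query.
# All values are hypothetical placeholders, not measurements.

def query_cost(base_cost: float,
               models_per_query: int = 12,  # 'bunch of older models under a trenchcoat'
               check_passes: int = 2,       # a second LLM checking the first LLM's output
               context_turns: int = 1) -> float:
    """Energy cost of one user query under the stacked-multiplier model."""
    return base_cost * models_per_query * check_passes * context_turns

# A 'cheap' model at 3 whatevers per pass: 3 x 12 x 2 = 72 per query.
single = query_cost(base_cost=3.0)

# And a query sits inside a conversation, so earlier turns multiply it further.
conversational = query_cost(base_cost=3.0, context_turns=10)

print(single, conversational)  # 72.0 720.0
```

The point of the sketch is just that the headline "3 whatevers" figure is the smallest factor in the product, not the total.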

[–] Soyweiser@awful.systems 9 points 2 days ago* (last edited 2 days ago)

AI is part of Idiocracy, the automatic layoffs machine, for example. And I do not think we need more utopian movies like Idiocracy.

11
submitted 6 days ago* (last edited 6 days ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems
 

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read this short story called 'Death and the Gorgon' by Greg Egan, as he has a good handle on the subjects we talk about. We have talked about Greg before on Reddit.

I was glad I did, so going to suggest that more people do it. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects it is being negative about. But well, he doesn't have to, he isn't the Guardian. Anyway, not going to spoil it, best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note I'm not sure this pdf was intended to be public. I did find it on Google, but it might not be meant to be accessible this way.)

 

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes. (A project which predates the takeover of twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes )

In reaction, Musk admits he never read HPMOR, and he suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.
