Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 5 points 1 month ago (4 children)

I was very tempted to go 'don't think it is more than one Nobel guy, which is not great because of Nobel disease anyway. I could link to RationalWiki here, but that has come under threat because Scott, whose content you enjoy, started a lawsuit against them', but I think that might be a bit culture-warry, and I also try not to react at the places we point towards, as that just leads to harassment-like behaviour. Also, Penrose is a Nobel prize winner who is against AGI stuff.

[–] Soyweiser@awful.systems 7 points 1 month ago (3 children)

That is the one I was thinking of; the way the comments are phrased makes it seem like there are a lot of winners who are doomers. Guess Hinton is a one-man brigade.

[–] Soyweiser@awful.systems 12 points 1 month ago (11 children)

Yeah, the financial illiteracy is quite high, on top of the rest. But don't worry, AI Nobel prize winners say it is possible!

(Are there multiple AI Nobel prize winners who are AI doomers?)

[–] Soyweiser@awful.systems 6 points 1 month ago* (last edited 1 month ago)

That gives me a 'you broke reddit' error. jackrobertsofficial is also empty for me (and empty if I use an incognito window, so I'm not blocked). I got the feeling that might be what was going on, even if I had a hard time finding his old work, as the news articles he links on his own site were dead.

E: tried on my phone and it appears, wtf... no wait. It is promoted; my adblockers just nuked it haha, my bad.

[–] Soyweiser@awful.systems 5 points 1 month ago* (last edited 1 month ago) (2 children)

Seems it was deleted. But due to reddit being reddit, I noticed it pointed towards the 'Swat Man: Volume 1 Kindle Edition' Amazon link. (Which I have not reproduced here.)

E: ah, never mind, aggressive adblockers deleted it on my end.

[–] Soyweiser@awful.systems 16 points 1 month ago* (last edited 1 month ago) (18 children)

and that’s how we should view the eventual AGI-LLMs, like wittle Elons that don’t need sleep.

Wonder how many people stopped being AI doomers after this. I use the same argument against AI doom.

E: the guy doing the most basic 'It really is easier to imagine the end of the world than the end of capitalism.' bit in the comments, and having somebody just explode at him for 'not being able to imagine it properly', is a bit amusing. I know how it feels to have a massive, hard-to-control reaction to stuff like that, but oof, what are you doing man. And that poor anti-capitalist guy is in for a rude awakening when he discovers what kind of place r/ssc is.

E2: Scott is now going 'this clip is taken out of context!', not that the context improves it. (He claims he was explaining what others believe, not what he believes, but if that is so, why is he so aggressively defending the stance? Hope this Scott guy doesn't have a history of lying about his real beliefs.)

[–] Soyweiser@awful.systems 9 points 1 month ago* (last edited 1 month ago)

Think of the amount of testing they would have needed to do just to get to that prompt. Wait, that gets added as a baseline constant cost on top of the energy cost of running the model: 3 x 12 x 2 x Y additional constant costs, assuming the prompt doesn't need to be updated every time the model is updated! (I'm starting to reference my own comments here.)

Claude NEVER repeats or translates song lyrics and politely refuses any request regarding reproduction, repetition, sharing, or translation of song lyrics.

New trick, everything online is a song lyric.

[–] Soyweiser@awful.systems 9 points 1 month ago (2 children)

More of a notedump than a sneer. I have been saying every now and then that there was research showing that LLMs require exponentially more effort for linear improvements. This post by Iris van Rooij (Professor of Computational Cognitive Science) mentions something like that (I said something different, but the intractability proof/Ingenia theorem might be useful to look into): https://bsky.app/profile/irisvanrooij.bsky.social/post/3lpe5uuvlhk2c

[–] Soyweiser@awful.systems 3 points 1 month ago* (last edited 1 month ago)

The 'energy usage of a single chatgpt query' thing gets esp dubious when added to the 'bunch of older models under a trenchcoat' stuff. And that the plan is to check the output of an LLM by having a second LLM check it. Sure, the individual 3.0 model might only be 3 whatevers, but a real query uses a dozen of them, twice. (Being a bit vague with the numbers here as I have no access to any of those.)

E: also not compatible with Altman's story that thanking chatgpt costs millions. Which brings up another issue: a single query is part of a conversation, so now the 3 x 12 x 2 gets multiplied even more.
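The 3 x 12 x 2 arithmetic above can be sketched in a few lines. To be clear: every number here is hypothetical (the '3 whatevers', the dozen submodels, the double-checking pass, the conversation length are all illustrative guesses from the comment, not measurements); this only shows how the per-call estimate compounds multiplicatively.

```python
# Back-of-envelope sketch of how a per-model-call energy estimate compounds.
# All constants are made-up illustrative values, not measurements.

ENERGY_PER_MODEL_CALL = 3.0  # the "3 whatevers" for one model pass
SUBMODELS_PER_QUERY = 12     # "a dozen of them" under the trenchcoat
CHECK_PASSES = 2             # output checked by a second LLM, so everything runs twice

def energy_per_query() -> float:
    """One user query: every submodel runs, then the whole thing runs again as a check."""
    return ENERGY_PER_MODEL_CALL * SUBMODELS_PER_QUERY * CHECK_PASSES

def energy_per_conversation(turns: int) -> float:
    """A conversation multiplies the per-query cost by the number of turns."""
    return energy_per_query() * turns

print(energy_per_query())           # 3 * 12 * 2 = 72.0
print(energy_per_conversation(10))  # 72 * 10 = 720.0
```

The point of the sketch is that any headline per-query figure based on a single model pass undercounts by the product of these factors, and a 'thank you' message at the end of a long conversation still pays the full per-query cost.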

[–] Soyweiser@awful.systems 9 points 1 month ago* (last edited 1 month ago)

AI is part of Idiocracy, the automatic layoffs machine, for example. And I do not think we need more utopian movies like Idiocracy.

[–] Soyweiser@awful.systems 8 points 1 month ago

Too late, I'm already simulating everybody in this thread in my mind.

[–] Soyweiser@awful.systems 9 points 1 month ago* (last edited 1 month ago) (4 children)

Uber but for virtue signalling (*).

(I joke, because other remarks I want to make will get me in trouble).

*: I know this term is very RW-coded, but I don't think it is that bad, esp when you mean it like 'an empty gesture with a very low cost that does nothing except signal that the person is virtuous.' Not actually doing more than a very small minimum should be part of the definition imho. Stuff like selling stickers saying you are pro some minority group, but with only 0.05% of each sale going to a cause actually helping that group. (Or the rich guy's charity which employs half his family/friends, or Mr Beast, or the rightwing debate bro threatening a leftwinger with a fight 'for charity' (this also signals their RW virtue to their RW audience (trollin' and fightin')).)
