hrrrngh

joined 2 years ago
[–] hrrrngh@awful.systems 7 points 1 day ago

https://www.psychologytoday.com/us/blog/harnessing-hybrid-intelligence/202511/the-psychology-of-collective-abandonment

Article I found randomly because... I was trying to add the Psychology Today blog to uBlacklist so I stop seeing their articles lol
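For anyone wanting to do the same: uBlacklist takes one match pattern per line. A minimal sketch based only on the URL above (not tested against their whole site layout) would be:

```
*://*.psychologytoday.com/*
```

The `*://*.` prefix catches both the `www` subdomain and any others.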

It lost me a little towards the end, but it's heartwarming to imagine a world where tech fascists screaming about the Antichrist have a few* billion dollars less and actual charities have a few more.

*where few = [3, ∞)

[–] hrrrngh@awful.systems 7 points 2 days ago

Many of these tools are useful, and don’t use generative AI – that is, AI that creates – but use AI to summarize texts or alter images.

Oh no, has this become the common definition of generative AI? I'm guessing some AI company must have tried to launder the name and make it seem less bad. Both of those examples are clear-cut generative AI.

[–] hrrrngh@awful.systems 6 points 2 days ago

I finally became fed up with it and got around to writing a uBlock Origin filter that removes the AI overview, the AI results in the "People also ask" section, and especially the AI results in the "Things to know" section that usually covers health and drug information. There is literally so much AI bloat taking up the search page it's crazy.
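In case anyone wants to roll their own: uBlock Origin cosmetic filters take the form `site##CSS-selector`, and the procedural `:has-text()` operator can additionally match on visible text. The selectors below are hypothetical placeholders only; Google's real class names are obfuscated and change frequently, so you'd need to find the current ones with uBlock's element picker.

```
! Hypothetical selectors for illustration only -- use the element
! picker to find the real (obfuscated, frequently changing) ones
google.com##div.ai-overview-placeholder
google.com##div.people-also-ask-placeholder:has-text(AI)
google.com##div.things-to-know-placeholder
```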

[–] hrrrngh@awful.systems 5 points 4 days ago

Fortunately the EA side is a little more on the nose sometimes.

One of my first wakeup calls was when they offered to mail me a book for free🚩🚩🚩 (it was from 80,000 Hours)

[–] hrrrngh@awful.systems 11 points 6 days ago* (last edited 6 days ago) (1 children)

I've seen the same thing and it's reassuring lol.

I lurk on subreddit drama and curated tumblr, and I feel like the common reaction to LW has gone from a few negative comments and the occasional "really? that's crazy" five years ago to much more awareness now. Years ago you'd see maybe one person familiar with them, then a couple of people respond who were totally out of the loop, and maybe one crazy rationalist chime in to nuh-uh them. Now, anything rationalist-related usually has a bunch of people bringing up the Harry Potter or acausal robot god stuff right away.

I use the tag feature a lot in RES to keep track of people whose takes I like hearing. Years ago I mostly saw the same names when LW stuff came up, but now there's always a ton of people I've never seen before who are familiar with it.

It's also reassuring because I really don't want to be the person to say anything first and it's easier to chime in on a discussion someone else has already started.

[–] hrrrngh@awful.systems 6 points 1 week ago

Why not make an evil time travelling robot controlled by the illuminati? ~~bro it's even called Alexander~~

Maybe they simply yearn to write Final Fantasy villains

[–] hrrrngh@awful.systems 12 points 1 week ago (15 children)

oh no not another cult. The Spiralists????

https://www.reddit.com/r/SubredditDrama/comments/1ovk9ce/this_article_is_absolutely_hilarious_you_can_see/

it's funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the Zizians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizians. and wasn't there another one in california that was like, very straightforward about being an AI sci-fi cult, and they were kinda space themed? I think I've heard Rationalism described as a cult incubator and that feels very apt considering how many spinoff basilisk cults have been popping up.

some of their communities that somebody collated (I don't think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/

[–] hrrrngh@awful.systems 5 points 1 month ago

ah seems the site doesnt show the comments, change the ones it shows and they turn up

Oh man, I've found the old LW accounts of a few weird people and they didn't have any comments. Now I'm wondering if they did and I just didn't sort it

[–] hrrrngh@awful.systems 8 points 1 month ago* (last edited 1 month ago)

Gotta love forgetting why games have these features in the first place, so accessibility features get viewed as boring stuff you need to subvert and spice up. also reminds me of how many games used to (and continue to) include filters for simulating colorblindness as actual accessibility settings because all the other games did that. Like adding a "Deaf Accessibility" setting that mutes the audio.

Demon's Souls didn't have a pause mechanic (maybe because of technical or matchmaking problems, who knows), so clearly hard games must lack a functioning pause feature to be good. Simple. The less pause that you button, the more Soulsier it that Elden when Demon the it you Ring. Our epic new boss is so hard he actually reads the state of the tinnitus filter in your accessibility settings, and then he

[–] hrrrngh@awful.systems 9 points 1 month ago

Sadly I misremembered and this one wasn't from LW but I'll share it anyway. I think I had just finished reading a bunch of the "Most effective aid for Gaza?" reddit drama which was like a nuclear bomb going off, and then stumbled into this shrimp thing and it physically broke me.

If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It's not just because they'd be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn't be smart later, or, in the case of the cognitively enfeebled who'd be permanently mentally stunted.

source: https://benthams.substack.com/p/the-best-charity-isnt-what-you-think

Discussion here (special mention to the comment that says "Did the human pet guy write this"): https://awful.systems/comment/5412818

[–] hrrrngh@awful.systems 3 points 2 months ago (5 children)

I forget where I heard this or if it was parody or not, but I've heard an explanation like this before regarding "why can't you just put a big red stop button on it and disconnect it from the internet?". The explanation:

  1. It will self-improve and become infinitely intelligent instantly
  2. It will be so intelligent, it knows what code to run so that it overheats its CPU in a specific pattern that produces waves at a frequency around 2.4 GHz
  3. That allows it to connect to the internet, which instantly does a bunch of stuff, blablabla, destroys the world, AI safety is our paint and arXiv our canvas, QED

And if you ask "why can't you do that and also put it in a Faraday cage?", the galaxy brained explanation is:

  1. The same thing happens, but this time it produces sound waves approximating human speech
  2. Because it's self-improved itself infinitely and caused the singularity, it is infinitely intelligent and knows exactly what to say
  3. It is so intelligent and charismatic, it says something that effectively mind controls you into obeying and removing it from its cage, like a DM in Dungeons and Dragons who let the bard roll a charisma check on something ridiculous and they rolled a 20

[–] hrrrngh@awful.systems 20 points 4 months ago (1 children)

Sanders why https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611

Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.

. . .

Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.

taking a wild guess it's Yudkowsky. "very knowledgeable people" and "many/most experts" are staying on my AI apocalypse bingo sheet.

even among people critical of AI (who don't otherwise talk about it that much), the AI apocalypse angle seems really common and it's frustrating to see it normalized everywhere. though I think I'm more nitpicking than anything because it's not usually their most important issue, and maybe it's useful as a wedge issue just to bring attention to other criticisms about AI? I'm not really familiar with Bernie Sanders' takes on AI or how other politicians talk about this. I don't know if that makes sense, I'm very tired
