lurker

joined 1 month ago
[–] lurker@awful.systems 9 points 1 day ago* (last edited 1 day ago)

Hooray… still thinking about how apparently the OpenAI and Anthropic PR teams were impressed with this doc, which is a pretty clear indicator of what it’s going to be like

The reviews on IMDb have dropped a bit (from 8.5 down to 7.5), but that’ll probably change when the full thing drops

Also, is it just me, or does the trailer have a weirdly lopsided views-to-likes ratio? 5.6 million views with only 6.7K likes, which is barely 0.1%. Is that the norm for documentaries?

[–] lurker@awful.systems 2 points 2 days ago

I mean, there is a lot of crazy bullshit in there, so I don’t blame anyone for getting derailed

[–] lurker@awful.systems 24 points 2 days ago (1 children)

One of the solutions proposed — I am not kidding — is “writing scripts to automate repetitive tasks.” It’s really funny imagining a software engineer being like “woah … like automating the boring stuff, you might say?”

If I'm getting this right, they're going to cut the cost of automating everything... by automating more things?

[–] lurker@awful.systems 5 points 2 days ago

As for lawsuits that actually are happening: OpenAI is getting sued by dictionaries

[–] lurker@awful.systems 7 points 2 days ago* (last edited 2 days ago) (1 children)

Microsoft MIGHT be suing OpenAI

Strong might, since nothing is set in stone yet. There have been talks in which Microsoft threatened to sue, but that's it so far

[–] lurker@awful.systems 5 points 2 days ago (2 children)

Reading the article again, that definitely feels like the angle the author was going for

[–] lurker@awful.systems 13 points 3 days ago* (last edited 3 days ago) (5 children)

The Founder of Anthropic Says He Wants to Protect Humanity From AI. Just Don't Ask How. Another long article about the AI craze, and in particular Anthropic. A snippet that stood out to me:

"Reviewing my interview transcripts one night, I discover I’d left my recorder running when I excused myself to use the bathroom at Anthropic. On the tape, Kyle Fish, the AI researcher, and Danielle Ghiglieri, my tattooed guide, are laughing about some visitors to their headquarters the day before, what sounds like a documentary or TV crew.

“I sit right next to Trenton,” Fish says. “I went back and told him, ‘Dude, you really did something to those guys with your sunscreen stuff yesterday.’ He thought it was hilarious.”

They’re both cracking up.

Ghiglieri says Fish, too, had convincingly come off as a “different species of human,” adding: “They were very enamored with you.”

They’re inclined to cooperate with whatever project these people proposed, she says, and make everybody a star. I hadn’t heard Trenton’s sunscreen spiel yet. Only later, over lunch, would he tell me that he stopped protecting himself against skin cancer because AI was going to end the world in five years.

Crazy to me how people can so confidently predict AI doomsday, and then just keep working at an AI company

[–] lurker@awful.systems 9 points 1 week ago

I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027, and so want to soft-nationalize the frontier companies to control the coming AGI.

That is absolutely the reason, or at least part of it. See: Pete Hegseth Got His Happy Meal and how AGI-is-nigh doomers own-goaled themselves

[–] lurker@awful.systems 5 points 1 week ago* (last edited 1 week ago)

Reading comments because I was bored, and had the misfortune to stumble upon this horribly formatted piece of work, allegedly written by Claude

[–] lurker@awful.systems 15 points 1 week ago (1 children)

I mean, after the Epstein Files, you have to be either deliberately ignorant or incredibly dense not to realise the rich get off easily

[–] lurker@awful.systems 14 points 1 week ago* (last edited 1 week ago) (4 children)

the Pentagon's CTO has AI psychosis now. sighhhhhhhhh

The whole argument can just be countered with: “If the Pentagon believes Claude is sentient and a danger to the military, then why make a deal with OpenAI to use ChatGPT, another LLM similar to Claude? Wouldn’t that also be at risk of becoming sentient? And why are Pete Hegseth and Donald Trump planning to force Anthropic to comply after 6 months if they believe Claude shouldn’t be in the military?? Why did you ask Anthropic to let you use Claude for mass surveillance and autonomous weapons if you believed it was sentient and a danger??”

It just reeks of bullshit. "uhm actually we made Anthropic a supply chain risk because Claude is actually very dangerous and not because we're doing banana republic shit to anyone who disagrees with us. we are a very responsible and safe government. please dont impeach trump."

 

Originally posted in the Stubsack, but decided to make it its own post because why not

 

This was already posted on Reddit SneerClub, but I decided to crosspost it here so you guys wouldn’t miss out on Yudkowsky calling himself a genre-savvy character, and taking what appears to be a shot at the Zizians

 

Originally posted in the thread for sneers not worth a whole post, then I changed my mind and decided it is worth a whole post, because it is pretty damn important

Posted on r/HPMOR roughly one day ago

Full transcript:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations without causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.

 

I searched for “eugenics” on yud’s xcancel (I will never use Twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. Yud, genuinely, what are you talking about?
