SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago

The Future of Sovereign AI

We still don’t know just how important and disruptive artificial intelligence will be, but one thing seems clear: the power of AI should not remain cordoned off by centralized companies. Our panelists—Cody Wilson of Defense Distributed, Native Planet’s ~mopfel-winrux, Tlon’s Lukas Buhler, along with @mogmachine from Bittensor and David Capone from Harmless AI—are the perfect team to explore the possibilities unlocked by more sovereign, decentralized, and open AI.

[A bitcoiner, an ancap, a 3-D gun printer, an alt-righter, the founder of Hatreon and a convicted kiddie fucker walk into a bar. The barman picks up a baseball bat and says "get the fuck out of my bar, Cody."]

Cancelling the Culture Industry

In a world of moral totalitarianism, sometimes freedom looks like a short story about sex tourism in the Philippines. In this panel, author Sam Frank hosts MRB editor-in-chief Noah Kumin, romance writer Delicious Tacos, sex detective Magdalene Taylor, and frog champion Lomez of Passage Press. Join them for a freewheeling discussion of saying whatever they want while evading the digital hall monitors.

[not being able to live within five hundred feet of a school is a small price to pay for true freedom]

Securing Urbit

How do we make Urbit secure? And what does a secure Urbit look like? The great promise of Urbit has always been that it can provide a sovereign computing platform for the individual—a means by which to do everything you would want to do on a computer without giving up your data. For that dream to be fulfilled, Urbit should be as secure as your crypto hardware wallet—perhaps more so. Moderated by Rikard Hjort, Urbit experts Logan Allen and Joe Bryan discuss with Urbit fan and cybersecurity expert Ryan Lackey.

[as secure as a crypto hardware wallet, you say]

Rebooting the Arts

The culture war is over—Culture lost. Now it’s a race to build a new one. Media whisperer Ryan Lambert leads a conversation with Play Nice founder/impresario Hadrian Belove, trend forecaster Sean Monahan, and controversial art-doc collective Kirac. They discuss how to win the culture race and create a new arts ecosystem out of the rubble.

[the answer is to get Peter Thiel to try to magic up Dimes Square out of nothing, isn't it?]

How to Fund a New World

Cosimo de Medici persuaded Benvenuto Cellini, the Florentine sculptor, to enter his service by writing him a letter which concluded, 'Come, I will choke you with gold.' Join UF Director of Markets Andrew Kim as he discusses how to get more gold onto Urbit with Jake Brukhman of Coinfund, Jae Yang of Tacen, @BacktheBunny from RabbitX and Evan Fisher of Portal VC.

[the answer's still Thiel, isn't it?]


Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes (a project that predates Musk's takeover of Twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes ).

In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.


First, let me say that what broke me from the herd at LessWrong was specifically the calls for AI pauses. Somehow 'rationalists' are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they believe they need to commit any violent act necessary to stop AI from being developed.

The flaw here is that there are 8 billion people alive right now, and we don't actually know what the future holds. There are ways better AI could help the people living now, possibly saving their lives, and Eliezer Yudkowsky is essentially saying "fuck em". This could only be worth it if you somehow knew trillions of people were going to exist, had a low future discount rate, and so on. That reasoning seems deeply flawed, and it seems to be one of the points made here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can't solve - robotics, continuous learning, module reuse - the things needed to reach a general level of capability and for AI to do many, but not all, human jobs - are near-future problems. I can link DeepMind papers on all of these, published in 2022 or 2023.

And if AI can be general and control robots, and making robots is a task human technicians and other workers can do, then a form of Singularity is possible: robots building robots. Maybe not the breathless utopia of Ray Kurzweil, but a fuckton of robots.

So I was wondering what the people here generally think. There are "boomer" forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as hypesters who collect 300k to edit JavaScript and drive Teslas*.

I also have noticed that the whole rationalist schtick of "what is your probability" seems like asking for "joint probabilities", aka smoke a joint and give a probability.

Here's my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robotics, do you believe a form of Singularity will happen? This means hard exponential growth in the number of robots, scaling past all industry on Earth today by at least one order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

  4. Do you think a mass transition, where most human jobs we have now are replaced by AI systems, will happen before 2040?

  5. Is AI system design an issue? I hate to say "alignment", because I think that's hopeless wankery by non-software-engineers, but given these will be robot-controlling advanced decision-making systems, will they require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

*"epistemic status": I, uh, do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas.


It will not surprise you at all to find that they protest just a tad too much.

See also: https://www.lesswrong.com/posts/ZjXtjRQaD2b4PAser/a-hill-of-validity-in-defense-of-meaning
