corbin

joined 2 years ago
[–] corbin@awful.systems -1 points 6 hours ago

Linear no-threshold isn't under attack, but under review. The game-theoretic conclusions haven't changed: limit overall exposure, radiation is harmful, more radiation means more harm. The practical consequences of tweaking the model concern e.g. evacuation zones in case of emergency; excess deaths from radiation exposure are balanced against deaths caused by evacuation, so the choice of model determines the exact shape of evacuation zones. (I suspect that you know this but it's worth clarifying for folks who aren't doing literature reviews.)
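To make the trade-off concrete, here's a minimal sketch with entirely made-up risk coefficients; real radiological protection uses population-specific dose reconstruction, not toy constants like these:

```python
# Toy comparison of dose-response models for an evacuation decision.
# All numbers are illustrative, not real radiological risk coefficients.

def excess_deaths_lnt(dose_msv: float, population: int) -> float:
    """Linear no-threshold: risk scales linearly with dose, no safe floor."""
    RISK_PER_MSV = 5e-5  # hypothetical lifetime fatality risk per mSv
    return dose_msv * RISK_PER_MSV * population

def excess_deaths_threshold(dose_msv: float, population: int,
                            threshold_msv: float = 100.0) -> float:
    """Threshold model: doses below the threshold cause no excess deaths."""
    effective = max(0.0, dose_msv - threshold_msv)
    return effective * 5e-5 * population

def should_evacuate(dose_msv, population, evacuation_deaths, model):
    """Evacuate only if radiation deaths avoided exceed evacuation deaths."""
    return model(dose_msv, population) > evacuation_deaths

# At 50 mSv, the LNT model says evacuate this town; a 100 mSv threshold
# model says stay put. Same physics, different zone boundaries.
town = 20_000
print(should_evacuate(50, town, 40, excess_deaths_lnt))        # True
print(should_evacuate(50, town, 40, excess_deaths_threshold))  # False
```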

 

A straightforward product review of two AI therapists. Things start bad and quickly get worse. Choice quip:

Oh, so now I'm being gaslit by a frakking Tamagotchi.

[–] corbin@awful.systems 6 points 1 day ago (1 children)

I don’t have any experience writing physics simulators myself…

I think that this is your best path forward. Go simulate some rigid-body physics. Simulate genetics with genetic algorithms. Simulate chemistry with Petri nets. Simulate quantum computing. Simulate randomness with random-number generators. You'll learn a lot about the limitations that arise at each step as we idealize the real world into equations that are simple enough to compute. Fundamentally, you're proposing that Boltzmann brains are plausible, and the standard physics retort (quoting Carroll 2017, Why Boltzmann brains are bad) is that they "are cognitively unstable: they cannot simultaneously be true and justifiably believed."
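To show how low the barrier to entry is, here's a minimal genetic-algorithm sketch of the sort I mean; every constant in it (genome size, mutation rate, truncation selection) is an arbitrary choice, which is exactly the kind of idealization worth noticing:

```python
import random

# A minimal genetic algorithm: evolve a bitstring toward all-ones.
# Even this toy exposes the idealizations: a fixed fitness function,
# a bounded genome, and a PRNG standing in for chance.

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 200

def fitness(genome):
    return sum(genome)  # count of 1-bits

def mutate(genome, rate=0.02):
    return [b ^ (random.random() < rate) for b in genome]  # flip bits rarely

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:
        break
    parents = pop[:POP_SIZE // 2]  # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
print(gen, fitness(pop[0]))
```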

A lesser path would be to keep going with consciousness and neuroscience. In that case, go read Hofstadter 2007, I Am a Strange Loop, to understand what it could possibly mean for a pattern to be substrate-independent.

If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.

No, you're likely to suffer the ELIZA Effect. Previously, on Awful, I've explained what's going on in terms of memes. If you want to read a sci-fi story instead, I'd recommend Watts' Blindsight. You are overrating the phenomenon of intelligence.

[–] corbin@awful.systems 6 points 2 days ago

I'm going to be a little indirect and poetic here.

In Turing’s view, if a computer were to pass the Turing Test, the calculations it carried out in doing so would still constitute thought even if carried out by a clerk on a sheet of paper with no knowledge of how a teletype machine would translate them into text, or even by a distributed mass of clerks working in isolation from each other so that nothing resembling a thinking entity even exists.

Yes. In Smullyan's view, the acoustic patterns in the air would still constitute birdsong even if whistled by a human with no beak, or even by a vibrating electromagnetically-driven membrane which is located far from the data that it is playing back, so that nothing resembling a bird even exists. Or, in Aristoteles' view, the syntactic relationship between sentences would still constitute syllogism even if attributed to a long-dead philosopher, or even verified by a distributed mass of mechanical provers so that no single prover ever localizes the entirety of the modus ponens. In all cases, the pattern is the representation; the arrangement which generates the pattern is merely a substrate.

Consider the notion that thought is a biological process. It’s true that, if all of the atoms and cells comprising the organism can be mathematically modeled, a Turing Machine would then be able to simulate them. But it doesn’t follow from this that the Turing Machine would then generate thought. Consider the analogy of digestion. Sure, a Turing Machine could model every single molecule of a steak and calculate the precise ways in which it would move through and be broken down by a human digestive system. But all this could ever accomplish would be running a simulation of eating the steak. If you put an actual ribeye in front of a computer there is no amount of computational power that would allow the computer to actually eat and digest it.

Putting an actual ribeye in front of a human, there is no amount of computational power that would allow the human to actually eat and digest it, either. The act of eating can't be provoked merely by thought; there must be some sort of mechanical linkage between thoughts and the relevant parts of the body. Turing & Champernowne invented a program that plays chess; they were also known (apocryphally, apparently) to play "run-around-the-house chess" or "Turing chess", which involved standing up and jogging a lap in-between chess moves. The ability to play Turing chess is cognitively embodied, but the ability to play chess is merely the ability to represent and manipulate certain patterns.

At the end of the day what defines art is the existence of intention behind it — the fact that some consciousness experienced thoughts that it subsequently tried to communicate. Without that there’s simply lines on paper, splotches of color, and noise. At the risk of tautology, meaning exists because people mean things.

Art is about the expression of memes within a medium; it is cultural propagation. Memes are not thoughts, though; the fact that some consciousness experienced and communicated memes is not a product of thought but a product of memetic evolution. The only other thing that art can carry is what carries it: the patterns which emerge from the encoding of the memes upon the medium.

[–] corbin@awful.systems 12 points 3 days ago (2 children)

He very much wants you to know that he knows that the Zizians are trans-coded and that he's okay with that, he's cool, he welcomes trans folks into Rationalism, he's totally an ally, etc. How does he phrase that, exactly?

That cult began among, and recruited from, a vulnerable subclass of a class of people who had earlier found tolerance and shelter in what calls itself the 'rationalist' community. I am not explicitly naming that class of people because the vast supermajority of them have not joined murder cults, and what other people do should not be their problem.

I mean, yes in the abstract, but would it really be so hard to say that MIRI supports trans rights? What other people do, when those other people form a majority of a hateful society, is very much a problem for the trans community! So much for status signaling.

[–] corbin@awful.systems 16 points 4 days ago (9 children)

This is a list of apostates. The idea is not to actually detail the folks who do the most damage to the cult's reputation, but to attack the few folks who were once members and left because they were no longer interested in being part of a cult. These attacks are usually motivated by emotion as much as by a desire to maintain control over the rest of the cult; in all cases, the sentiment is that the apostate dared to defy leadership. Usually, attacks on apostates are backed up by some sort of enforcement mechanism, from calls for stochastic terrorism to accusations of criminality; here, there's not actually a call to do anything external, possibly because Habryka realizes that the optics are bad but more likely because Habryka doesn't really have much power beyond those places where he's already an administrator. (That said, I would encourage everybody to become aware of, say, CoS's Fair Game policy or Noisy Investigation policy to get an idea of what kinds of attacks could occur.)

There are several prominent names that aren't here. I'd guess that Habryka hasn't been meditating over this list for a long time; it's just the first few people that came to mind when he wrote this note. This is somewhat reassuring, as it suggests that he doesn't fully understand how cultural critiques of LW affect the perception of LW more broadly; he doesn't realize how many people e.g. Breadtube reaches. Also, he doesn't understand that folks like SBF and Yarvin do immense reputational damage to rationalist-adjacent projects, although he seems to understand that the main issue with Zizians is not that they are Cringe but that they have been accused of multiple violent felonies.

Not many sneers to choose from, but I think one commenter gets it right:

In other groups with which I'm familiar, you would kick out people you think are actually a danger or you think might do something that brings your group into disrepute. But otherwise, I think it's a sign of being a cult if you kick people out for not going along with the group dogma.

[–] corbin@awful.systems 3 points 5 days ago

Previously, on Awful, I wrote up what I understand to be their core belief structure. It's too bad that we're not calling them the Cyclone Emoji cult.

 

The answer is no. Seth explains why not, using neuroscience and medical knowledge as a starting point. My heart was warmed when Seth asked whether anybody present believed that current generative systems are conscious and nobody in the room clapped.

Perhaps the most interesting takeaway for me was learning that — at least in terms of what we know about neuroscience — the classic thought experiment of the neuron-replacing parasite, which incrementally replaces a brain with some non-brain substrate without interrupting any computations, is biologically infeasible. This doesn't surprise me but I hadn't heard it explained so directly before.

Seth has been quoted previously, on Awful, for his critique of the current AI hype. This talk is largely in line with his other public statements.

Note that the final ten minutes of the video are a cross-examination of Seth's position by somebody else. This is simply part of presenting before a group of philosophers; they want to critique and ask questions.

 

A complete dissection of the history of the David Woodard editing scandal as told by an Oregonian Wikipedian. The video is sectioned into multiple miniature documentaries about various bastards and can be watched piece-by-piece. Too long to watch? Read the link above.

too long, didn't watch, didn't read, summarize anyway

David Woodard is an ethnonationalist white supremacist whose artistic career has led to an intersection with a remarkable slice of cult leaders and serial killers throughout the past half-century. Each featured bastard has some sort of relationship to Woodard, revealing an entire facet of American Nazism which runs in parallel to Christian TREACLES, passed down through psychedelia, occult mysticism, and non-Christian cults of capitalism.

[–] corbin@awful.systems 9 points 3 weeks ago (1 children)

Hey now, at least the bowl of salvia has a theme, predictable effects, immersive sensations, and the ability to make people feel emotions.

[–] corbin@awful.systems 2 points 4 weeks ago (1 children)

Thanks! You're getting better with your insults; that's a big step up from your trite classics like "sweet summer child". As long as you're here and not reading, let's now read from my third link:

As a former musician, I know that there is no way to train a modern musician, or any other modern artist, without heavy amounts of copyright infringement. Copying pages at the library, copying CDs for practice, taking photos of sculptures and paintings, examining architectural blueprints of real buildings. The system simultaneously expects us to be well-cultured, and to not own our culture. I suggest that, of those two, the former is important and the latter is yet another attempt to coerce and control people via subversion of the public domain.

Maybe you're a little busy with your Biblical work-or-starve mindset, but I encourage you to think about why we even have copyright if it must be flouted in order to become a skilled artist. It's worth knowing that musicians don't expect to make a living from our craft; we expect to work a day job too.

[–] corbin@awful.systems 4 points 4 weeks ago (8 children)

Previously, on Awful:

[Copyright i]s not for you who love to make art and prize it for its cultural impact and expressive power, but for folks who want to trade art for money.

Quoting Eben Moglen's Anarchism Triumphant, an extended sneer against copyright:

I wanted to point out something else: that our world consists increasingly of nothing but large numbers (also known as bitstreams), and that - for reasons having nothing to do with emergent properties of the numbers themselves - the legal system is presently committed to treating similar numbers radically differently. No one can tell, simply by looking at a number that is 100 million digits long, whether that number is subject to patent, copyright, or trade secret protection, or indeed whether it is "owned" by anyone at all. So the legal system we have - blessed as we are by its consequences if we are copyright teachers, Congressmen, Gucci-gulchers or Big Rupert himself - is compelled to treat indistinguishable things in unlike ways.

Or more politely, previously, on Lobsters:

Another big problem is that it's not at all clear whether information, in the information-theoretic sense, is a medium through which expressive works can be created; that is, it's not clear whether bits qualify for copyright. Certainly, all around the world, legal systems have assumed that bits are a medium. But perhaps bits have no color. Perhaps homomorphic encryption implies that color is unmeasurable. It is well-accepted even to legal scholars that abstract systems and mathematics aren't patentable, although the application of this to computers clearly shows that the legal folks involved don't understand information theory well enough.

Were we anti-copyright leftists really so invisible before, or have you been assuming that No True Leftist would be anti-copyright?

[–] corbin@awful.systems 10 points 4 weeks ago

Closely related is a thought I had after responding to yet another paper that says hallucinations can be fixed:

I'm starting to suspect that mathematics is not an emergent skill of language models. Formally, given a fixed set of hard mathematical questions, it doesn't appear that increasing training data necessarily improves the model's ability to generate valid proofs answering those questions. There could be a sharp divide between memetically-trained models which only know cultural concepts and models like Gödel machines or genetic evolution which easily generate proofs but have no cultural awareness whatsoever.
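For contrast, here's a toy illustration of the other side of that divide: a brute-force enumerator that forward-chains modus ponens over an arbitrary axiom set I made up. It generates every derivable theorem tirelessly, with zero awareness of which ones anybody finds interesting:

```python
# A toy theorem enumerator: forward-chains modus ponens over a small
# propositional axiom set. Axioms here are arbitrary examples; an
# implication a -> b is encoded as the pair (a, b).

AXIOMS = {
    "p",
    ("p", "q"),         # p -> q
    ("q", "r"),         # q -> r
    ("r", ("p", "s")),  # r -> (p -> s)
}

def close_under_modus_ponens(theorems):
    changed = True
    while changed:
        changed = False
        for t in list(theorems):
            if isinstance(t, tuple):          # t is an implication (a, b)
                antecedent, consequent = t
                if antecedent in theorems and consequent not in theorems:
                    theorems.add(consequent)  # derive b from a and a -> b
                    changed = True
    return theorems

print(close_under_modus_ponens(set(AXIOMS)))
# Derives q, r, then p -> s, then s -- every consequence, interesting or not.
```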

[–] corbin@awful.systems 4 points 4 weeks ago

"Not Winston Smith?" So, O'Brien?

 

Cross-posting a good overview of how propaganda and public relations intersect with social media. Thanks @Soatok@pawb.social for writing this up!

 

Tired of going to Scott "Other" Aaronson's blog to find out what's currently known about the busy beaver game? I maintain the Busy Beaver Gauge, a community website that summarizes the known numbers in Busy Beaver research.

I started this site last year because I was worried that Other Scott was excluding some research and not doing a great job of sharing links and history. For example, when it comes to Turing machines implementing the Goldbach conjecture, Other Scott gives O'Rear's 2016 result but not the other two confirmed improvements in the same year, nor the recent 2024 work by Leng.

Concretely, here's what I offer that Other Scott doesn't:

  • A clear definition of which problems are useful to study
  • Other languages besides Turing machines: binary lambda calculus and brainfuck
  • A plan for how to expand the Gauge as a living book: more problems, more languages and machines
  • The content itself is available on GitHub for contributions and reuse under CC-BY-NC-SA
  • All tables are machine-computed when possible to reduce the risk of handwritten typos in (large) numbers
  • Fearless interlinking with community wikis and exporting of knowledge rather than a complexity-zoo-style silo
  • Acknowledgement that e.g. Firoozbakht is part of the mathematical community

I accept PRs, although most folks ping me on IRC (korvo on Libera Chat, try #esolangs) and I'm fairly decent at keeping up on the news once it escapes Discord. Also, you (yes, you!) can probably learn how to write programs that attempt to solve these problems, and I'll credit you if your attempt is short or novel.
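If you want a taste before writing anything clever, here's a minimal Turing-machine runner; the table encoding is my own ad-hoc choice, not the Gauge's submission format:

```python
# Minimal Turing-machine runner for experimenting with busy-beaver
# candidates. Treat this as a starting point, not a submission.

def run(table, max_steps=10_000_000):
    """table: {(state, symbol): (write, move, next_state)}; halt state 'H'."""
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())  # steps taken, ones left on the tape

# The BB(2) champion: halts after 6 steps with 4 ones on the tape.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2))  # (6, 4)
```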

 

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

 

This is a rough excerpt from a quintet of essays I've intended to write for a few years and am just now getting around to drafting. Let me know if more from this series would be okay to share; the full topic is:

Power Relations

  1. Category of Responsibilities
  2. The Reputation Problem
  3. Greater Internet Fuckwad Theory (GIFT), Special Internet Fuckwad Theory (SIFT), & Special Fuckwittery
  4. System 3 & Unified Fuckwittery
  5. Algorithmic Courtesy

This would clarify and expand upon ideas that I've stated here and also on Lobsters (Reputation Problem, System 3 (this post!)). The main idea is to understand how folks exchange power and responsibilities.

As always, I did not use any generative language-modeling tools. I did use vim's spell-checker.


Humans are not rational actors according to any economic theory of the past few centuries. Rather than admit that economics might be flawed, psychologists have explored a series of models wherein humans have at least two modes of thinking: a natural mode and an economically-rational mode. The latest of these is the amorphous concept of System 1 and System 2; System 1 is an older system that humans share with a wide clade of distant relatives and System 2 is a more recently-developed system that evolved for humans specifically. This position does not agree with evolutionary theories of the human brain and should be viewed with extreme skepticism.

When pressed, adherents will quickly retreat to a simpler position. They will argue that there are two modes of physical signaling. First, there are external stimuli, including light, food, hormones, and the traditional senses. For example, a lack of nutrition in blood and a preparedness of the intestines for food will trigger a release of the hormone ghrelin from the stomach, prompting the vagus nerve to incorporate a signal of hunger into the brain's conceptual sensorium. Thus, when somebody says that they are hungry, they are reporting a System 1 process. Some elements of System 1 are validated by this setup, particularly the claims that System 1 is autonomous, automatic, uninterruptible, and tied to organs which evolved before the neocortex. System 2 is everything else, particularly rumination and introspection; by excluded middle, System 2 is also how most ordinary cognitive processes would be classified.

We can do better than that. After all, if System 2 is supposed to host all of the economic rationality, then why do people spend so much time thinking and still come to irrational conclusions? Also, in popular-science accounts of System 1, why aren't emotions and actions completely aligned with hormones and sensory input? Perhaps there is a third system whose processes are confused with System 1 and System 2 somehow.

So, let's consider System 3. Reasoning in System 3 is driven by memes: units of cultural expression which derive semantics via chunking and associative composition. This is not how System 1 works, given that operant conditioning works in non-humans but priming doesn't reliably replicate. The contrast with System 2 is more nebulous since System 2 does not have a clear boundary, but a central idea is that System 2 is not about the associations between chunks as much as the computation encoded by the processing of the chunks. A System 2 process applies axioms, rules, and reasoning; a System 3 process is strictly associative.

I'm giving away my best example here because I want you to be convinced. First, consider this scenario: a car crash has just happened outside! Bodies are piled up! We're still pulling bodies from the wreckage. Fifty-seven people are confirmed dead and over two hundred are injured. Stop and think: how does System 1 react to this? What emotions are activated? How does System 2 react to this? What conclusions might be drawn? What questions might be asked to clarify understanding?

Now, let's learn about System 3. Click, please!

Update to the scenario: we have a complete tally of casualties. We have two hundred eleven injuries and sixty-nine dead.

When reading that sentence, many Anglophones and Francophones carry an ancient meme, first attested in the 1700s, which causes them to react in a way that isn't congruent with their previous expressions of System 1 and System 2, despite the scenario not really changing much at all. A particular syntactic detail was memetically associated to another hunk of syntax. They will also shrug off the experience rather than considering the possibility that they might be memetically influenced. This is the experience of System 3: automatic, associative, and fast like System 1; but quickly rationalizing, smoothed by left-brain interpretation, and conjugated for the context at hand like System 2.

An important class of System 3 memes are the thought-terminating clichés (TTCs), which interrupt social contexts with a rhetorical escape that provides easy victory. Another important class are various moral rules, from those governing interpersonal relations to those computing arithmetic. A sufficiently rich memeplex can permanently ensnare a person's mind by replacing their reasoning tools; since people have trouble distinguishing between System 2 and System 3, they have trouble distinguishing between genuine syllogism and TTCs which support pseudo-logical reasoning.

We can also refine System 1 further. When we talk of training a human, we ought to distinguish between repetitive muscle movements and operant conditioning, even though both concepts are founded upon "fire together, wire together." In the former, we are creating so-called "muscle memory" by entraining neurons to rapidly simulate System 2 movements; by following the principle "slow is smooth, smooth is fast", System 2 can chunk its outputs to muscles in a way analogous to the chunking of inputs in the visual cortex, and wire those inputs and outputs together too, coordinating the eye and hand. A particularly crisp example is given by the arcuate fasciculus connecting Broca's area and Wernicke's area, coordinating the decoding and encoding of speech. In contrast, in the latter, we are creating a "conditioned response" or "post-hypnotic suggestion" by attaching System 2 memory recall to System 1 signals, such that when the signal activates, the attached memory will also activate. Over long periods of time, such responses can wire System 1 to System 1, creating many cross-organ behaviors which are mediated by the nervous system.

This is enough to explain what I think is justifiably called "unified fuckwittery," but first I need to make one aside. Folks get creeped out by neuroscience. That's okay! You don't need to think about brains much here. The main point that I want to rigorously make and defend is that there are roughly three reasons that somebody can lose their temper, break their focus, or generally take themselves out of a situation, losing the colloquial "flow state." I'm going to call this situation "tilt" and the human suffering it is "tilted." The three ways of being tilted are to have an emotional response to a change in body chemistry (System 1), to act emotional as a conclusion of some inner reasoning (System 2), or to act out a recently-activated meme which happens to appear like an emotional response (System 3). No more brain talk.

I'm making a second aside for a persistent cultural issue that probably is not going away. Three-quarters of a century ago, philosophers and computer scientists asked about the "Turing test": can a computer program imitate a human so well that another human cannot distinguish between humans and imitations? About a half-century ago, the answer arrived as the surprising "ELIZA effect": relatively simple computer programs can not only imitate humans well enough to pass a Turing test, but humans prefer the imitations to each other. Put in more biological terms, such programs are "supernormal stimuli"; they appear "more human than human." Also, because such programs only have a finite history, they can only generate long interactions in real time by being "memoryless" or "Markov", which means that the upcoming parts of an interaction are wholly determined by a probability distribution over the prior parts, each of which is associated to a possible future. Since programs don't have System 1 or System 2, and these programs only emit learned associations, I think it's fair to characterize them as simulating System 3 at best. On one hand, this is somewhat worrying: humans not only cannot tell the difference between a human and System 3 alone, but prefer System 3 alone. On the other hand, I could see a silver lining once humans start to understand how much of their surrounding civilization is an associative fiction. We'll return to this later.
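For the curious, here's what "Markov" means at its barest: an order-1 word chain where the next token is sampled from a distribution over observed continuations of the current one. The corpus is a stand-in; any text works:

```python
import random
from collections import defaultdict

# Sketch of the "memoryless" point: an order-1 Markov chain over words.
# The next word depends only on the current word -- a probability
# distribution over observed continuations, nothing more.

corpus = "the cat sat on the mat and the cat ran off the mat".split()

chain = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    chain[current].append(following)  # duplicates encode the distribution

def babble(start="the", length=10):
    word, out = start, [start]
    for _ in range(length - 1):
        word = random.choice(chain[word])  # sample the next word from history
        out.append(word)
    return " ".join(out)

print(babble())  # e.g. "the cat ran off the mat and the cat sat"
```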

 

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't seem to know what SCP is, and I think he might be having a psychotic episode: he appears to take seriously the possibility that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

 

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

In today's episode, Yud tries to predict the future of computer science.

view more: next ›