lagrangeinterpolator

joined 11 months ago

Unfortunately, our problem right now is not Donna the below-average Democrat but Donald the fascist. And when it comes to fascists I do not ask if they are above or below average.

[–] lagrangeinterpolator@awful.systems 15 points 5 days ago* (last edited 5 days ago) (3 children)

The fire code thing really is an excellent example of LessWrong Brain. Fire truck drivers insist on needlessly large trucks (no citation) which makes roads 30% wider than they would otherwise be (no citation) which has "probably" "non-trivially" contributed to larger cars (no citation) leading to enough additional road fatalities to cancel out the lives saved by stricter fire codes (no citation).

The LessWrong Brain argument starts with a deliberately contrarian conclusion and proves it with a Rube Goldberg chain of logical syllogisms. Of course, citations are strictly optional, and they are free to misinterpret them as they see fit. The only real standard of each claim is "looks good to me", but you are supposed to be impressed that they managed to string a dozen of them together to reveal some shocking, deep truth of the world that nobody else knows about. The AI 2027 nonsense is an infamous example of this.

He uses the word "fermi" which is cult jargon based on Fermi estimation, a.k.a. guessing shit with back-of-the-envelope calculations. Not exactly what you want if you want to convince people to reform fire codes, especially if you have zero citations for anything.

I guess people just aren't rational enough, and the only reason the fire codes are so irrational is because people are emotional about fire codes. Firefighters are apparently revered as heroes, when it is the LWers who should be the heroes. After all, firefighters merely save people from fires, while LWers buy multimillion dollar mansions to talk about saving quadrillions of hypothetical people from hypothetical basilisks!

[–] lagrangeinterpolator@awful.systems 10 points 6 days ago (1 children)

It's fine, spyware is only a risk when it's bad people's spyware. It's totally fine when it's Anthropic™-approved spyware!

As for Mythos catching things, maybe they should have run Mythos on their very own Claude Code, considering that it has hilariously obvious security exploits, such as this one, which inserts an arbitrary string into a shell command. Actually, never mind, I don't see anything wrong here; maybe we should burn another $20k in electricity running Mythos on it again to find out.

[–] lagrangeinterpolator@awful.systems 9 points 1 week ago* (last edited 1 week ago) (1 children)

In basically every case in history where people decided to kill a bad king, there was a period of chaos and violence that followed it. The killing of Charles I happened during the English Civil War, and the killing of Louis XVI happened during the French Revolution. This has happened many times in Chinese history, with the fall of an imperial dynasty leading to several decades of civil war (most recently in the early 1900s). But I guess if you have a big clever brain with big clever thoughts, you don't need to look at history.

If the only way to get rid of a bad king is to kill him, he will do anything he can to defend his power, including using as much violence as necessary. (People generally do not like being killed.) Even if you successfully get rid of him, good luck establishing a proper government afterwards with all the violence you've caused. And who knows if the new king is gonna be better or worse? A better system would instead have a mechanism that replaces officials on a regular basis, say every few years, and ensure that these replacements are peaceful. Oh wait, that's liberal democracy. If we do something boring like support democracy, how will people ever think of us as special, clever thinkers with bold, contrarian thoughts?

It’s still One Person. A mortal, fleshy person. Their defence is that they’re inoffensive, things are stable, nothing is directly their fault and people are bound by law and oath.

Bro, your system involves giving all the power to one person. You cannot then say they have no responsibility or that they're "inoffensive" when they abuse it.

I've seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well in one out of 50 attempts on an easy task, so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software became far more buggy and insecure.

Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I'm not using AI, and also saying that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering, and what makes this any different?

[–] lagrangeinterpolator@awful.systems 9 points 2 weeks ago (1 children)

I think the main difference here is that breaking RSA now just requires scaling up existing approaches, while breaking LWE or anything like that would need a major conceptual breakthrough. The former possibility is much more likely, and in any case, cryptographers are the most paranoid people on the planet for a reason.

Unfortunately, one can never be sure about much in cryptography until P vs NP is solved (and then some).

(Of course, just because some people say that scaling up is enough doesn't mean it's actually true. For breaking RSA, we at least have Shor's algorithm, while the only evidence AI bros have for superintelligence coming from scaling is "trust me bro".)

[–] lagrangeinterpolator@awful.systems 9 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

This is what happens when your worldview is based on anime.

(A lot of anime has heavy themes, but most people understand that it's not real life, just like all such art. Unlike Yud, most people's worldviews on coding and math are based on actual coding and math.)

We can see that one 9 of availability is 90% = 0.9, two 9s is 99% = 0.99, three 9s is 99.9% = 0.999, etc. In general, for positive integers n, n 9s of availability is 1 - (1/10)^n, and we can extrapolate that to non-integer values of n. The value γ needed for 87.5% availability is the solution to 1 - (1/10)^γ = 7/8, i.e. 10^γ = 8, so γ = log_10(8) ≈ 0.9031. γ is transcendental by Gelfond-Schneider (see this for a reference proof).
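For anyone who wants to check the arithmetic, the nines-to-availability formula and its inverse are one-liners (a throwaway sketch, not part of any real SLA tooling):

```python
import math

def availability(nines: float) -> float:
    """Availability as a fraction, given a (possibly non-integer) number of nines."""
    return 1 - 10 ** (-nines)

def nines(avail: float) -> float:
    """Inverse: number of nines corresponding to a given availability fraction."""
    return -math.log10(1 - avail)

# 87.5% availability corresponds to log10(8) nines
print(nines(7 / 8))  # ≈ 0.9030899869919435
```

Plugging in integer values recovers the familiar table: `availability(3)` gives 0.999, and so on.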

Right now, Sora is at zero 9s of availability.

[–] lagrangeinterpolator@awful.systems 9 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

By far the dumbest "feature" in the codebase is this thing called "Buddy" (described in a few places such as here). Honestly, I don't really know what it's for or what the point is.

BUDDY - A Tamagotchi Inside Your Terminal

I am not making this up.

Claude Code has a full Tamagotchi-style companion pet system called "Buddy." A deterministic gacha system with species rarity, shiny variants, procedurally generated stats, and a soul description written by Claude on first hatch like OpenClaw.

...

On top of that, there's a 1% shiny chance completely independent of rarity. So a Shiny Legendary Nebulynx has a 0.01% chance of being rolled. Dang.

Great, so they were planning on a gacha system where you can get an ASCII virtual pet that, uhh, occasionally makes comments? Truly a serious feature for a serious tool for the serious discipline of software engineering. Imagine if IntelliJ decided to pull this bullshit.
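For the record, the quoted 0.01% figure is just independent probabilities multiplied together. The 1% shiny rate is stated outright; the 1% legendary rate is my assumption, inferred from the combined figure:

```python
p_shiny = 0.01      # stated in the quote: 1% shiny, independent of rarity
p_legendary = 0.01  # assumed: implied by the quoted 0.01% combined chance

# independent events, so the joint probability is the product
p_shiny_legendary = p_shiny * p_legendary
print(f"{p_shiny_legendary:.2%}")  # 0.01%
```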

But also, Claude Code is leaning hard into gambling addiction — the “Hooked” model. You reward the user with an intermittent, variable reward. This keeps them coming back in the hope of the big win. And it turns them into gambling addicts.

The Onion could not have come up with a better way to illustrate this very point.

Good luck telling the promptfondlers that LLMs are only useful for entertainment and not for any useful work.

I'm sure these English instructions work because they feel like they work. Look, these LLMs feel really great for coding. If they don't work, that's because you didn't pay $200/month for the pro version and you didn't put enough boldface and all-caps words in the prompt. Also, I really feel like these homeopathic sugar pills cured my cold. I got better after I started taking them!

No joke, I watched a talk once where some people used an LLM to model how certain users would behave in their scenario given their socioeconomic backgrounds. But they had a slight problem, which was that LLMs are nondeterministic and would of course often give different answers when prompted twice. Their solution was to literally use an automated tool that would try a bunch of different prompts until they happened to get one that would give consistent answers (at least on their dataset). I would call this the xkcd green jelly bean effect, but I guess if you call it "finetuning" then suddenly it sounds very proper and serious. (The cherry on top was that they never actually evaluated the output of the LLM, e.g. by seeing how consistent it was with actual user responses. They just had an LLM generate fiction and called it a day.)

[–] lagrangeinterpolator@awful.systems 7 points 1 month ago (5 children)

AI seems good at purple prose and metaphors that don't exactly make sense. No, I do not give a fuck about the "triangle of calm" when it comes to, of all things, the narrator taking off her shoes. No, I am not interested in how long the narrator sets the timer on the microwave when she makes literally the blandest meal of all time.

Now I'm sure the techbros truly think this is good "literary" writing. After all, they only care that the writing sounds flowery, because they seem to be very good at missing the actual meaning of everything. I remember Saltman saying that the movie Oppenheimer needed to be more optimistic to inspire more kids to become physicists (while also saying that The Social Network did that for startup founders).
