YourNetworkIsHaunted

joined 2 years ago

I do wonder how much of the disconnect is in who gets considered part of the rich and powerful. Like, a lot of that 30% probably think specifically of liberal academics, celebrities, Democratic politicians, etc. and exclude or excuse people like Elon and Trump and whoever of his friends isn't currently the scapegoat for why he isn't ushering in the promised glorious reformation.

I mean, given how the current guy took a chainsaw to American soft power, industrial capacity, economic prospects, and so on, I guess our wildly overfunded military is probably the only comparative advantage we unambiguously hold onto.

It's also a trend that I don't see stopping without a major structural change. I don't think there's a point at which they're going to say "we've cut enough corners and are going to stop risking stability and service degradation." The principal structure driving the economy, especially in the tech sector, is organized around looking for new corners to cut and insulating the people who make those choices from accountability for their actual consequences.

[–] YourNetworkIsHaunted@awful.systems 8 points 2 days ago (1 children)

It feels almost like Anthropic is trying to make this a marketing opportunity by reaffirming their mostly-illusory ethical stances. That was their original pitch against OpenAI, and this puts them rather than Saltman back at the center of the AI hype news cycle.

[–] YourNetworkIsHaunted@awful.systems 6 points 2 days ago (1 children)

It's theoretically possible to keep them separate, but I would assume in this case that it's evidence that, regardless of intentions, CFAR and Lightcone are sufficiently closely linked to be basically the same organization. I mean, if there's not a separate legal entity then I would assume anything involving money is going to require the same person or persons to sign off on the transaction, regardless of what the board looks like.

[–] YourNetworkIsHaunted@awful.systems 6 points 2 days ago (1 children)

Somehow I had never found that Dragon Army retrospective before, and had the fascinating experience of wanting to explain to someone that "no, what you're describing is actually a cult. Like, you're describing being a cult leader." The cult leader is usually not the person to whom the cult dynamic needs to be identified and explained.

I mean, it's not too far off from the standard color revolution conspiracy theories, in which nefarious American intelligence agents and NGOs work towards regime change and civil strife across the world in order to advance their sinister ideology. But where the "classical" color revolution conspiracy serves to undermine the anticommunist movements in Eastern Europe around the fall of the Soviet Union by positioning them as patsies or victims of the CIA, this newer variant that Moldbug is working with tries to discredit American domestic anti-imperial/anticolonial/antifascist sentiments by positioning them as puppets of oppressive foreign regimes. Kind of an uno reverse card being played on the original story, but one that fits with how the American right conceptualizes itself and its domestic opposition.

[–] YourNetworkIsHaunted@awful.systems 13 points 2 days ago (5 children)

FT reports from Amazon insiders that they're investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.

FT also links to several previous stories they've reported on related issues, and I haven't had the time to breach the paywalls to read further, but the line that caught my eye was this one:

The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of “Sev2s” — incidents requiring a rapid response to avoid product outages — each day as a result of job cuts.

To be honest, this is why I'm skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyways. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it's only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what's happening is, of course, a different story.

[–] YourNetworkIsHaunted@awful.systems 13 points 1 week ago (3 children)

Thank you for providing some actual domain experience to ground my idle ramblings.

I wonder if part of the reason why so many high profile intellectuals in some of these fields are so prone to getting sniped by the confabulatron is an unwillingness to acknowledge (either publicly or in their own heart) that "random bullshit go" is actually a very useful strategy. It reminds me of the way that writers will talk about the value of just getting words on the page because it's easier to replace them with better words than to create perfection ex nihilo, or the rubber duck method of troubleshooting where just stepping through the problem out loud forces you to organize your thoughts in a way that can make the solution more readily apparent. It seems like at least some kinds of research are also this kind of process of analysis and iteration as much as if not more than raw creation and insight.

I have never met Donald Knuth, and don't mean to impugn his character here, even as I'm basically asking if he's too conceited to properly understand what an LLM is. But I think of how people talk about science and scientists and the way it gets romanticized (see also Iris Meredith's excellent piece on "warrior culture" in software development), and it just doesn't fit a field that can see meaningful progress from throwing shit at the wall to see what sticks. A lot of the discourse around art and artists is more willing to acknowledge this element of the creative process, and that might explain artists' greater ability and willingness to see the bullshit faucet for what it is. Maybe because science and engineering have stricter and more objective pass/fail criteria (you can argue about code quality just as much as the quality of a painting, but unlike a painting either the program runs or it doesn't; visual art doesn't generally have to worry about a BSOD), there isn't the same openness to acknowledging that the affirmative results you get from an LLM are still just random bullshit.

I can imagine the argument being: "The things we're doing are very prestigious and require great intelligence and other things that offer prestige and cultural capital. If 'random bullshit go' is often a key part of the process, then maybe it doesn't need as much intelligence and doesn't deserve as much prestige. Therefore, if this new tool can be at all useful in supplementing or replicating part of our process, it must be using intelligence and maybe it deserves some of the same prestige that we have."

He is altering the deal. Pray he does not alter it further. These are definitely the good guys, right?

[–] YourNetworkIsHaunted@awful.systems 11 points 1 week ago (7 children)

Even in Knuth's account it sounds like the LLM contribution was less in solving the problem and more in throwing out random BS that looked vaguely like different techniques were being applied until it spat out something that Knuth and his collaborator were able to recognize as a promising avenue for actual work.

His bud Filip Stappers rolled in to help solve an open digraph problem Knuth was working on. Stappers fed the decomposition problem to Claude Opus 4.6 cold. Claude ran 31 explorations over about an hour: brute force (too slow), serpentine patterns, fiber decompositions, simulated annealing. At exploration 25 it told itself “SA can find solutions but cannot give a general construction. Need pure math.” At exploration 30 it noticed a structural pattern in an earlier solution. Exploration 31 produced a working construction.

I am not a mathematician or computer scientist and so will not claim to know exactly what this is describing and how it compares to the normal process for investigating this kind of problem. However, the fact that it produced 4 approaches over 31 attempts seems more consistent with randomly throwing out something that looks like a solution rather than actually thinking through the process of each one. In a creative exploration like this where you expect most approaches to be dead ends rather than produce a working structure maybe the LLM is providing something valuable by generating vaguely work-shaped outputs that can inspire an actual mind to create the actual answer.

Filip had to restart the session after random errors, and had to keep reminding Claude to document its progress. The solution only covers one type of case; when Claude tried to continue another way, it "seemed to get stuck" and eventually couldn't run its own programs correctly.

The idea that it's ultimately spitting out random answer-shaped nonsense also follows from the amount of babysitting that was required from Filip to keep it actually producing anything useful. I don't doubt that it's more efficient than I would be at producing random sequences of work-shaped slop and redirecting or retrying in response to a new "please actually do this" prompt, but of the two of us only one is demonstrating actual intelligence and moving towards being able to work independently. Compared to an undergrad or myself I don't doubt that Claude has a faster iteration time for each of those attempts, but that's not even in the same zip code as actually thinking through the problem, and if anything it serves as a strong counterexample to the doomer critihype about the expanding capabilities of these systems.

This kind of high-level academic work may be a case where this kind of random slop is actually useful, but that's an incredibly niche area and does not do nearly as much as Knuth seems to think it does in terms of justifying the incredible cost of these systems. If anything, the narrative that "AI solved the problem" gives Anthropic credit for the work that Knuth and Stappers were putting into actually sifting through the stream of slop and identifying anything useful. Maybe babysitting the slop sluice is more satisfying or faster than going down every blind alley on your own, but you're still the one sitting in the river with a pan, and pretending the river is somehow pulling the gold out of itself is just damn foolish.

I mean, I can understand the argument that Anthropic at least maintained a fig leaf of ethics, but notably based on Saltman's statements OpenAI does still feel the obligation to maintain those optics, they're just not nearly as credible at doing so.


Apparently we get a shout-out? Sharing this brings me no joy, and I am sorry for inflicting it upon you.


I don't have much to add here, but I know when she started writing about the specifics of what Democrats are worried about being targeted for their "political views" my mind immediately jumped to members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.
