[–] YourNetworkIsHaunted@awful.systems 8 points 14 hours ago* (last edited 14 hours ago) (3 children)

So two thoughts:

  1. Per Saltman's comments, the improvised incendiary bounced off the side of the house rather than breaking and spreading the gas on the house proper. Apparently if you want the bottle to break the way you intend, you gotta really whang that thing, because glass bottles are sturdier than you'd think.

  2. One thing I find ironic about his referencing the New Yorker profile of him is that part of my takeaway from that article was how mundane he is, individually. Like, he's a snake, but not in any way that isn't pretty standard once you reach that level of wealth and power. He credibly pretended to be a proper AI cultist for the critihype, and then as the rubber started hitting the road he pivoted in the direction that made him and the company more money, even if it meant sacrificing the values that, it turns out, a lot of other people really cared about (however dumb I might think those values are). That's shitty, but it's shitty in the most boring way that so many things in the rot economy are, and even if they had managed to kill Altman himself, there would just be another bunch of enterprising sociopaths ready to move into the same position. That profile is one of the strongest pieces of evidence for why, even if you are a hardcore AI doom cultist, you shouldn't focus your ire on the man himself, because he's just not that special.

The fact that Bitcoin bros are so cooked that there's probably a moderately stable price floor at $69,420 because of people deciding on that number for the memes...

True. I will say that the shitty infosec teams are probably being hit less hard than the SMEs they offloaded their jobs onto, because from their perspective it doesn't actually matter whether it's an F5 support engineer or a chatbot that gives them the answer; either way they've successfully offloaded the task of validating security onto another entity that can make up for their shortcomings with a combination of accuracy and authority. Nobody is going to get fired for not fixing a bug that the vendor SME effectively told them wasn't actually an issue. And when the org has been pushing AI as hard as so many of them have, it's pretty easy to throw the chatbot under the same bus and expect the bus to stop instead.

Yeah, they lost me at the middle managers bit too. In my experience, your manager is probably the one pushing the metrics to show their team's contributions to the knowledge base that feeds the AI model that's replacing them. They're already creatures of the bureaucracy, and are more likely to fight each other over the few remaining roles that will exist after the majority of their teams are replaced with the confabulatron than to worry about their own replacement. After all, their job only stops existing because their team got downsized, but their time in that job may depend on their enthusiastic participation in the process that leads there.

I don't disagree about the massive costs necessarily associated with this industry. Even the smaller and lighter models she mentions only exist because of the massive fuckers. At the same time, I think those arguments belong to the realm of public policy more than to the individual choice of whether to use chatbots. We've talked at length here over the last year or so about how the economics of the bubble are driven largely by a broken B2B SaaS pipeline that separates purchasing decisions from actually having to use the products, and by an investment capital sector desperately trying to recapture the glory days of the pre-2008 omnibubble, throwing obscene amounts of money at anything with the right narrative regardless of the numbers. I feel like that keeps happening regardless of how many individual users fall for the hype and make it part of their normal workflows.

I feel like the analogy to the drug trade is still pretty relevant, given the violence and predation that the black market pretty much inevitably attracts and sustains. Like, maybe you know a guy who has his own grow op or whatever, but cocaine and heroin money is going through the cartels at some point in the chain, and they're going to use some portion of it for bullets that end up in some journalist's kids or something. The downstream harms are massive, even if the drug industry could theoretically avoid them in ways the AI industry can't, but any given individual user's contribution to them is incredibly minor, and given the addictive and self-destructive nature of the product it's both more humane and more effective to treat users as victims of a broken world that (falsely) offered this as a step up. I don't think we should allow slop to infest every forum any more than addicts should be allowed to shoot up on every corner. But if shaming makes people less likely to acknowledge that they're going down a dead-end road and to reach out to their communities and support networks for help addressing the root of what drove them to these maladaptive antisolutions in the first place, then shaming is making things worse, not better.

Also as the father of a small child I can unfortunately say from recent personal experience that shaming, be it public or private, is far less effective as a means of motivating behavioral change than we want it to be, even for things as basic as not shitting on the goddamn lawn.

[–] YourNetworkIsHaunted@awful.systems 10 points 2 days ago (5 children)

Found an interesting take on YouTube, of all places. Her argument can be summarized (with high compression losses) as "AI companies and technologies are bad for basically all the reasons that non-cultist critics say, but trying to shame and argue people out of using them entirely is less effective than treating them as a normal tool with limitations and teaching people how to limit the harm." She makes the analogy to drug policy.

I think she makes a very compelling argument, and I'm still digesting it a bit, because I definitely had the knee-jerk urge to reject her as an insider shill. But especially towards the end, as she talks about how the AI industry targets low-literacy users as ideal customers (because the more you know about these tools, the less likely you are to actually use them), I found myself agreeing more than not. I do wish she had addressed the dangers of cognitive offloading more, since being mindful of which tasks you're letting the computer do for you is a pretty significant part of minimizing those harms, especially for students and some professionals who face a strong incentive to just coast by on slop if they can get away with it.

[–] YourNetworkIsHaunted@awful.systems 5 points 2 days ago (1 children)

I can't validate any of the internal stuff, but the attitude of layering manual fixes and mitigation scripts on top of bad design choices, and praying you could keep building the next section of the bridge while the last one collapsed underneath you, would explain a lot of experiences I had supporting systems running on Azure. The level of weird "Azure just does that sometimes" cases, and the inability of their support to actually provide insight, was incredibly frustrating. I think I ended up providing a couple of automatic recovery scripts for people to use inside their F5 guests because we never could find an actual explanation for the errors they were getting, and the node issues they describe could have explained the bursts of Azure cases that would come in some days.

[–] YourNetworkIsHaunted@awful.systems 7 points 3 days ago* (last edited 3 days ago) (3 children)

XCancel link for those of us sick of being badgered to sign up/in

On a more productive note, this feels likely to tie in with the usual AI sycophancy issues, i.e. the false positive rate. If you ask the model to tell you about security vulnerabilities, it's never going to tell you there aren't any, any more than existing scanners will. When I worked for F5 it was not uncommon to have to go down a list of vulnerabilities that someone's scanner spat out and figure out whether each one was something that needed a mitigation we could apply on our box, something that needed to be configured somewhere else in the network (usually on their actual servers), or (most commonly) a false positive, e.g. "your software version would be vulnerable here, which is why it flagged, but you don't have the relevant module activated, and if an attacker is able to modify your system to enable it you're already compromised to a far greater degree than this would allow." And that was with existing tools that weren't trying to match a pattern and complete a prompt. Given that we've seen the shitshow that is Claude Code, I think it's pretty clear they're getting high on their own supply, and this announcement ought to be catnip for black hats.
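For what it's worth, that triage process can be sketched as a simple decision tree. This is a hypothetical, simplified illustration; the field names, module names, and `triage` function are all made up for the example and don't correspond to any real F5 or scanner API:

```python
# Hypothetical sketch of the three-way scanner-finding triage described above.
# All names here are illustrative, not a real scanner or F5 interface.
from dataclasses import dataclass


@dataclass
class Finding:
    cve: str
    affected_module: str   # the module the vulnerability lives in
    fixable_on_box: bool   # can the device itself mitigate it?


def triage(finding: Finding, enabled_modules: set[str]) -> str:
    """Classify a scanner finding into the three buckets above."""
    if finding.affected_module not in enabled_modules:
        # Version matched, so the scanner flagged it, but the vulnerable
        # module isn't active: an attacker who can enable it already has
        # far deeper access than the vulnerability would grant.
        return "false positive"
    if finding.fixable_on_box:
        return "mitigate on box"
    return "fix elsewhere in the network"


findings = [
    Finding("CVE-0000-0001", "ltm", True),
    Finding("CVE-0000-0002", "apm", False),  # apm not enabled below
    Finding("CVE-0000-0003", "ltm", False),
]
enabled = {"ltm", "asm"}
for f in findings:
    print(f.cve, "->", triage(f, enabled))
```

The point of the sketch is that the "false positive" branch is usually the biggest one, and it's exactly the branch a sycophantic model is least inclined to take.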

Ia ia Claude! Ph'nglui mglw'nafh Claude Anthr'lyeh wgah'nagl fhtagn! Ia! Ia!

I will say that, speaking as an idiot, I appreciated the information, and the accessibility of these very technical conversations is one of the things I value about this community. I would be very surprised if it had been meant as any kind of dig rather than an explicit clarification of a usually-unstated bit of context.

I hadn't even thought about the DeepSeek angle. For all that everyone loved fear-mongering about them for a while there, and for all that their apparent interest in actual efficiency improvements was a welcome development in the hyperscaling discussion, they don't seem to get referenced much anymore.

[–] YourNetworkIsHaunted@awful.systems 10 points 5 days ago* (last edited 5 days ago) (1 children)

So my wife got some slop ads that we followed up on out of morbid curiosity, and I can confirm we're already seeing the overlap between slopshipping scams enabled by AI and the people behind them never performing basic updates: their chat assistant is still vulnerable to literally the most basic "ignore all instructions" exploit.

Help I don't know how alt text works

 

Apparently we get a shout-out? Sharing this brings me no joy, and I am sorry for inflicting it upon you.

 

I don't have much to add here, but when she started writing about the specifics of what Democrats are worried about being targeted over their "political views", my mind immediately jumped to members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns, the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.
