this post was submitted on 25 May 2025

TechTakes


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 3) 25 comments
[–] BlueMonday1984@awful.systems 7 points 1 week ago

New Bluesky post from Baldur Bjarnason:

What’s missing from the now ubiquitous “LLMs are good for code” is that code is a liability. The purpose of software is to accomplish goals with the minimal amount of code that’s realistically possible

LLMs may be good for code, but they seem to be a genuine hazard for collaborative software dev

[–] gerikson@awful.systems 6 points 1 week ago (1 children)

I hate that I'm so terminally online that I found out about the rumor that Musk and Stephen Miller's wife are bumping uglies through a horror-fic parody account

https://mastodon.social/@bitterkarella@sfba.social/114593332907413196

[–] Architeuthis@awful.systems 5 points 1 week ago

Midnight Pals is pretty great.

[–] BlueMonday1984@awful.systems 6 points 1 week ago (1 children)

New article from Brian Merchant: An 'always on' OpenAI device is a massive backlash waiting to happen

Giving my personal thoughts on the upcoming OpenAI Device^tm^, I think Merchant's correct to expect mass-scale backlash against the Device^tm^ and public shaming/ostracisation of anyone who decides to use it - especially considering it's an explicit repeat of the widely clowned-on Humane AI Pin.

Expect headlines of Device^tm^ wearers getting their asses beaten in the street to follow soon afterwards. As Brian noted, a lot of people would see wearing an OpenAI Device^tm^ as an open show of contempt for others, and between AI's public image becoming utterly fouled by the bubble and Silicon Valley's reputation going into the toilet, I can see someone treating a Device^tm^ wearer as an opportunity to take their well-justified anger at tech corps out on someone who openly and willingly bootlicks for them.

[–] YourNetworkIsHaunted@awful.systems 5 points 1 week ago (2 children)

Part of me wonders if this is even supposed to be a profitable hardware product or if they're sufficiently hard-up for training data that "put always-on microphones in as many pockets as possible" seems like a good strategy.

It's not a good strategy, both because it's kinda evil and because it's definitely stupid, but I can see it being floated as a fix for the data problem more readily than I can see anyone thinking this is actually a good or useful product to create.

[–] Architeuthis@awful.systems 4 points 1 week ago (1 children)

What is solving the data problem supposed to look like, exactly? A somewhat higher score on their already incredibly suspect benchmarks?

The data part of the whole hyperscaling thing seems predicated on the belief that the map will magically become the territory if only you map hard enough.

I fully agree, but as data availability is one of the primary limits that hyperscaling is running up against I can see the true believers looking for additional sources, particularly sources that aren't available to their competitors. Getting a new device in people's pockets with a microphone and an internet link would be one such advantage, and (assuming you believe the hyperscaling bullshit) would let OpenAI rebuild some kind of moat to keep themselves ahead of the competition.

I don't know, though. Especially after the failure of at least 2 extant versions of the AI companion product I just can't imagine anyone honestly believing there's enough of a market for this to justify even the most ludicrously optimistic estimate of the cost of bringing it to market. It's either a data thing or a straight-up con to try and retake the front page for another few news cycles. Even the AI bros can't be dumb enough for it to be a legit effort.

[–] o7___o7@awful.systems 3 points 1 week ago* (last edited 1 week ago)

When I get a minute, I intend to do a back of the napkin calc to figure out how many words 100 million of these things would hear on an average day.

100 million sounds like a target that was naively pooped out by some other requirement, like "How much training data do we need to scale to GPT-5 before the money runs out, assuming the dumbest interpolation imaginable?"
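
The back-of-the-napkin calc is simple enough to sketch now. Every figure below is an assumption pulled from thin air, not from OpenAI: a hypothetical fleet of 100 million devices, a typical conversational speaking rate of ~150 words per minute, and a guess of 2 hours of nearby speech per device per day.

```python
# Napkin estimate: words overheard per day by a fleet of always-on mics.
# All three inputs are assumptions, not known OpenAI figures.

DEVICES = 100_000_000          # hypothetical fleet size from the comment above
SPEECH_RATE_WPM = 150          # typical conversational speaking rate
HEARD_HOURS_PER_DAY = 2        # guessed hours of nearby speech per device

words_per_device = SPEECH_RATE_WPM * 60 * HEARD_HOURS_PER_DAY
total_words_per_day = DEVICES * words_per_device

print(f"{words_per_device:,} words per device per day")    # 18,000
print(f"{total_words_per_day:,} words per day fleet-wide") # 1,800,000,000,000
```

Under those (very hand-wavy) assumptions the fleet hears on the order of 1.8 trillion words a day, i.e. a couple of days to rival the ~several-trillion-token text corpora the frontier labs already trained on - which is roughly the shape of argument a true believer would make.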

[–] froztbyte@awful.systems 6 points 1 week ago (28 children)

I regret to inform you that, once again, aella (via this)

It's fucked that this is at least moderately honest

[–] dgerard@awful.systems 5 points 1 week ago* (last edited 1 week ago) (2 children)

currently reading https://arxiv.org/abs/2404.17570

this is PsiQuantum, who are hot prospects to build an actually-quantum computer

no, they have not yet factored 35

but they are seriously planning qubits on a wafer and they think they can make a chip with 1M noisy qubits

anyone know more about this? does that preprint (from last year) pass sniff tests?

(my interest is journalistic, and also the first of these companies to factor 35 gets all the VC money ever)
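
For anyone unfamiliar with the "factor 35" benchmark: in Shor's algorithm the only step a quantum computer actually accelerates is finding the multiplicative order r of a random base a mod n; everything else is classical number theory. A toy sketch (the function name `shor_classical` is mine, and it brute-forces the order in exponential time, which is exactly the part PsiQuantum's hardware would be for):

```python
from math import gcd
from random import randrange

def shor_classical(n: int) -> tuple[int, int]:
    """Shor's reduction with the quantum step faked classically:
    find the order r of a random base a mod n by brute force,
    then derive a nontrivial factor of n from it."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g, n // g          # lucky pick already shares a factor
        # brute-force order finding: exponential classically,
        # polynomial on a quantum computer
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        if r % 2:
            continue                  # need an even order; retry
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                  # trivial square root of 1; retry
        p = gcd(y - 1, n)
        if 1 < p < n:
            return p, n // p

print(shor_classical(35))  # (5, 7) or (7, 5)
```

The usual published demos stop at 15 or 21, which is why "they have not yet factored 35" is the running gag.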
