Oh look at that, another report on the economics of ai datacenter buildouts https://publicenterprise.org/report/bubble-or-nothing/
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Lol check how bad openai is doing (also so much grok, oh god this will end up horrible) https://openrouter.ai/rankings. On public transport atm so don't have much time for a bigger look, but it'd be interesting if somebody put this next to valuations (also no deepseek, which nobody seems to talk about anymore; don't think the pivot to 'roleplay' will work for openai)
The site is clearly vibe coded as it runs like ass on my phone
What in the sweet fuck happened here, does this count as vandalism?
Fix0red
None of those are well-defined "problems". An entire applied research field is not a "problem" akin to other items on this list like P vs NP.
lemmy.ml by way of hexbear's technology comm: The Economist is pushing phrenology. Everything old is new again!

cross-posted from: https://lemmy.ml/post/38830374
[...]
EDIT: Apparently based off something published by fucking Yale:
Reminds me of the "tech-bro invents revolutionary new personal transport solution: a train!" meme, but with racism. I'll be over in the angry dome.
What does it say about a scientist that they see a wide world of mysteries to dive into, and the research topic they pick is "are we maybe missing out on a way we could justify discriminating against people for their innate characteristics?"
"For people without access to credit, that could be a blessing" fuck off no one is this naive.
Not a scream, just a nice thing people might enjoy. Somebody made a funny comic about what we're all thinking about
Random screenshot which I found particularly funny (his rant checks out):

Image description
Two people talking to each other: one a bald, heavily bespectacled man in the distance, and the other a well-dressed, skull-faced man with a big mustache. The conversation goes as follows:
"It could be the work of the French!"
"Or the Dutch"
"Could even be the British!"
"Filthy pseudo German apes, The Dutch!"
"The Russ..."
"Scum of the earth marsh dwelling Dutch"
Eurogamer has opinions about genai voices in games.
Arc Raiders is set in a world where humanity has been driven underground by a race of hostile robots. The contradiction here between Arc Raiders' themes and the manner of its creation is so glaring that it makes me want to scream. You made a game about the tragedy of humans being replaced by robots while replacing humans with robots, Embark!
I’m being shuffled sideways into a software architecture role at work, presumably because my whiteboard output is valued more than my code 😭 and I thought I’d try and find out what the rest of the world thought that meant.
Turns out there’s almost no way of telling anymore, because the internet is filled with genai listicles on random subjects, some of which even have the same goddamn title. Finding anything from the beforetimes basically involves searching reddit and hoping for the best.
Anyway, I eventually found some non-obviously-ai-generated work and books, and it turns out that even before llms flooded the zone with shit no-one knew what software architecture was, and the people who opined on it were basically in the business of creating bespoke hammers and declaring everything else to be the specific kind of nails that they were best at smashing.
Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.
A fact-generative AI, you say? https://aphyr.com/posts/398-the-future-of-fact-checking-is-lies-i-guess
Stupid chatbots marketed at gullible christians aren’t new,
The app Text With Jesus uses artificial intelligence and chatbots to offer spiritual guidance to users who are looking to connect with a higher power.
but this is certainly an unusual USP:
Premium users can also converse with Satan.
https://www.nbcphiladelphia.com/news/tech/religious-chatbot-apps/4302361/
(via parker molloy’s bluesky)
Just saw this post on DHH, it’s really good: https://okayfail.com/2025/in-praise-of-dhh.html
If you’re not careful, one day you will wake up and find that you’ve pushed away every person who ever disagreed with you.
David Hamburger-Helper
A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can't resist glazing him, even in the context of a blog post on not being too deferential:
Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI
Another lesswronger pushes back on that and is highly upvoted (even among the doomers that think Eliezer is a genius, most of them still think he screwed up in inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w
The OP gets mad because this is off topic from what they wanted to talk about (they still don't acknowledge the irony).
A few days later they write an entire post, ostensibly about communication norms, but actually aimed at slamming the person that went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse
And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo
No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least us sneerclubbers are direct and come out and say what we mean on the rare occasions we have beef among ourselves.)
Fresh from the presses: OpenAI loses song lyrics copyright case in German court
GEMA (Germany's weird authors' rights management organisation) is suing OpenAI over replication of song lyrics, among other stuff, seeking a license deal. The judge ruled that whatever the fuck OpenAI does behind the scenes is irrelevant: if it can replicate lyrics exactly, that's unlawful replication.
One of GEMA's lawyers expects the case to be groundbreaking in Europe, since the applicable rules are harmonized.
I cannot believe I am rooting for GEMA. What a weird world this has become.
AI researcher and known Epstein associate Joscha Bach comes up several times in the latest Epstein email dump. And it's, uh, not good. Greatest hits include: scientific racism, freestyling bigotry about the neoteny principle, and climate fascism with managed decline of "undesirable groups", juxtaposed immediately with opining about the emotional impact of 5 visits to Buchenwald. You know, just very cool stuff:
Also appearing is friend of the pod and OpenAI board member Larry Summers!
The emails have Summers reporting to Epstein about his attempts to date a Harvard economics student & to hit on her during a seminar she was giving.
https://bsky.app/profile/econmarshall.bsky.social/post/3m5p6dgmagb2a
To quote myself: Larry Summers was one of the few people I've ever met where a casual conversation made me want to take a shower immediately afterward. I crashed a Harvard social event when a friend was an undergrad there and I was a student at MIT, in order to get the free food, and he was there to do glad-handing in his role as university president. I had a sharp discomfort response at the lizard-brain level, a deep part of me going on the alert, signaling "this man is not to be trusted" in the way one might sense that there is rotten meat nearby.
I still say that the term "scientific racism" gives these fuckos too much credit. I've been saying "numberwang racism" instead.
Gerard and Torres get namedropped in the same breath as Ziz as people who have done damage to the rationalist movement from within
I doubt I'm the first one to think of this, but for some reason as I was drifting off to sleep last night, I was thinking about the horrible AI "pop" music that a lot of content farms use in their videos and my brain spat out the phrase Bubblegum Slop. Feel free to use it as you see fit (or don't, I ain't your dad).
oh no not another cult. The Spiralists????
it's funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the zizzians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizzians. and wasn't there another one in california that was like, very straightforward about being an AI sci-fi cult, and they were kinda space themed? I think I've heard Rationalism described as a cult incubator and that feels very apt considering how many spinoff basilisk cults have been popping up
some of their communities that somebody collated (I don't think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/
Via Reddit!SneerClub: "Investors’ ‘dumb transhumanist ideas’ setting back neurotech progress, say experts"
Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”
Fun article.
Altman, though quieter on the subject, has blogged about the impending “merge” between humans and machines – which he suggested would come either through genetic engineering or plugging “an electrode into the brain”.
Occasionally I feel that Altman may be plugged into something that's even dumber and more under the radar than vanilla rationalism.
These people aren't looking for scientists, they're looking for alchemists
Just so we all know, not liking AI slop code is xenophobic.
Definitely been seeing the pattern of “if you don’t like AI, you are being x-phobic”, where “x” is a marginalised group whose name the person is using as a cudgel. They probably never cared about this group before; what’s important to this person is that they glaze AI over any sort of principle or ethics. Usually it’s ableist, as is basically any form of marginalisation/discrimination.
E: read the link. Lmao that’s… not xenophobia. What a piece of shit
Third episode of Odium Symposium is out (that’s the podcast I cohost). We talk about Cato the Elder and his struggle against militant feminist action in the Roman Republic. You can listen to the episode at https://www.patreon.com/posts/crack-sucking-143019155 or through any of the sources on our website, www.odiumsymposium.com
Some ChatGPT user queries were leaked via the Google Search Console data of websites that ranked on the search result pages ChatGPT saw when searching: https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/
Or something like that. It's a little confusing.
I want to keep bots from scraping my content because I don't want to feed the slop machine.
You want to keep bots from scraping your content because you're afraid it's gonna learn how to take over the world.
We are not the same.
iPhone Pocket: Inspired by the concept of “a piece of cloth”
Not satire I superpromise
One thing I've heard repeated about OpenAI is that "the engineers don't even know how it works!" and I'm wondering what the rebuttal to that point is.
While it is possible to write near-incomprehensible code and build an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since every part of the whole runs on deterministic machines. And yet I've heard this repeated at least twice (once on the Panic World pod, once on QAA).
I would believe that it's possible to build a system so complex, and with so little documentation, that on its surface it is incomprehensible. But the context in which the claim is made is not one of technical incompetence; rather, the claim is often hung as bait to draw one towards thinking that maybe we could bootstrap consciousness.
It seems like magical thinking to me, and a way of saying one or both of "we didn't write shit down and therefore have no idea how the functionality works" and "we do not practically have a way to determine how a specific output was arrived at from any given prompt." The first might be, in part or on the whole, unlikely: the system has to be comprehensible enough that new features can get added, so the engineers must grok things at least that well. The second is a side effect of not being able to observe all the actual inputs at the time a prompt was made (e.g. training data, user context, and system context can all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke ad slop).
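To make that "implicit inputs" framing concrete, here's a toy sketch; every name and mechanism in it is invented for illustration and has nothing to do with anyone's actual stack:

```python
# Toy model of the point above: the output is not a function of the
# prompt alone. All names and logic here are made up for illustration.
import random

def generate(prompt, weights, system_prompt, history, seed):
    """Same prompt, different hidden inputs -> different output."""
    rng = random.Random(seed)                    # sampling randomness
    context = " ".join([system_prompt, *history, prompt])
    vocab = weights["vocab"]                     # stand-in for training data
    return " ".join(rng.choice(vocab) for _ in context.split())

weights = {"vocab": ["two", "seconds", "of", "coke", "ad", "slop"]}
print(generate("make an ad", weights, "be helpful", [], seed=1))
print(generate("make an ad", weights, "be helpful", ["hi there"], seed=1))
```

Same prompt, same seed, but the hidden arguments differ, so the outputs differ; anyone who only gets to see prompt and output has no hope of reconstructing the rest of the argument list.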
Anybody else have thoughts on countering the magic "the engineers don't know how it works!"?
well, I can't counter it because I don't think they do know how it works. the theory is shallow yet the outputs of, say, an LLM are of remarkably high quality in an area (language) that is impossibly baroque. the lack of theory and fundamental understanding presents a huge problem for them because it means "improvements" can only come about by throwing money and conventional engineering at their systems. this is what I've heard from people in the field for at least ten years.
to me that also means it isn't something that needs to be countered. it's something the context of which needs to be explained. it's bad for the ai industry that they don't know what they're doing
EDIT: also, when i say the outputs are of high quality, what i mean is that they produce coherent and correct prose. im not suggesting anything about the utility of the outputs
I mean if you ever toyed around with neural networks or similar ML models you know it's basically impossible to divine what the hell is going on inside by just looking at the weights, even if you try to plot them or visualise in other ways.
There's a whole branch of ML about explainable or white-box models because it turns out you need to put extra care and design the system around being explainable in the first place to be able to reason about its internals. There's no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.
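If you want to feel the difference for yourself, here's a five-minute toy in scikit-learn (obviously nothing to do with any production LLM stack): the same tiny classification task handled by a small neural network, whose fitted weights are an opaque pile of floats, and by a decision tree, which is white-box by construction:

```python
# Black-box vs white-box on the same task: the MLP's weights are
# unreadable numbers, the tree prints its actual decision rules.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000,
                    random_state=0).fit(X, y)
print(mlp.coefs_[0])      # a 4x8 matrix of floats; good luck "reading" it

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))  # human-readable if/else rules over the features
```

Now scale the first model up by ten or so orders of magnitude and you have the situation the LLM vendors are in.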
In other words, "engineers don't know how it works" can have two meanings: that they're hitting computers with wrenches hoping for the best, with no rhyme or reason; or that they don't have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it's not really possible to figure out what specific training data it comes from or how to stop the model from producing it on a fundamental level.

The former is demonstrably false, almost a strawman; I don't know who believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn't collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I'm aware, largely true, or at least I haven't seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem, it'd be a major achievement everyone would be talking about.