What in the sweet fuck happened here, does this count as vandalism?
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Fix0red
None of those are well-defined "problems". An entire applied research field is not a "problem" akin to the other items on this list like P vs NP.
On this topic: I've been seeing more 503s lately. Are the servers running into issues, or am I getting caught in anti-scraper crossfire?
nope, you’ve been getting caught in the fallout from us not having this yet. the scrapers have been so intense they’ve been crashing the instance repeatedly.
lemmy.ml by way of hexbear's technology comm: The Economist is pushing phrenology. Everything old is new again!

cross-posted from: https://lemmy.ml/post/38830374
[...]
EDIT: Apparently based off something published by fucking Yale:
Reminds me of the "tech-bro invents revolutionary new personal transport solution: a train!" meme, but with racism. I'll be over in the angry dome.
What does it say about a scientist that they see a wide world of mysteries to dive into, and the research topic they pick is "are we maybe missing out on a way we could justify discriminating against people for their innate characteristics?"
"For people without access to credit, that could be a blessing" fuck off no one is this naive.
Eurogamer has opinions about genai voices in games.
Arc Raiders is set in a world where humanity has been driven underground by a race of hostile robots. The contradiction here between Arc Raiders' themes and the manner of its creation is so glaring that it makes me want to scream. You made a game about the tragedy of humans being replaced by robots while replacing humans with robots, Embark!
Not a scream, just a nice thing people might enjoy: somebody made a funny comic about what we're all thinking about.
Random screenshot which I found particularly funny ("Zijn rant klopt" - his rant checks out):

Image description
Two people talking to each other, one a bald heavily bespectacled man in the distance, and the other a well dressed skullfaced man with a big mustache. Conversation goes as follows:
"It could be the work of the French!"
"Or the Dutch"
"Could even be the British!"
"Filthy pseudo German apes, The Dutch!"
"The Russ..."
"Scum of the earth marsh dwelling Dutch"
Just saw this post on DHH, it’s really good: https://okayfail.com/2025/in-praise-of-dhh.html
If you’re not careful, one day you will wake up and find that you’ve pushed away every person who ever disagreed with you.
David Hamburger-Helper
A fact-generative AI, you say? https://aphyr.com/posts/398-the-future-of-fact-checking-is-lies-i-guess
Fresh from the presses: OpenAI loses song lyrics copyright case in German court
GEMA (Germany's weird authors' rights management organisation) is suing OpenAI over replication of song lyrics, among other things, seeking a license deal. The judge ruled that whatever the fuck OpenAI does behind the scenes is irrelevant: if it can replicate lyrics exactly, that's unlawful replication.
One of GEMA's lawyers expects the case to be groundbreaking across Europe, since the applicable rules are harmonized.
I cannot believe I am rooting for GEMA. What a weird world this has become.
I doubt I'm the first one to think of this, but for some reason as I was drifting off to sleep last night, I was thinking about the horrible AI "pop" music that a lot of content farms use in their videos and my brain spat out the phrase Bubblegum Slop. Feel free to use it as you see fit (or don't, I ain't your dad).
Via Reddit!SneerClub: "Investors’ ‘dumb transhumanist ideas’ setting back neurotech progress, say experts"
Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”
Fun article.
Altman, though quieter on the subject, has blogged about the impending "merge" between humans and machines, which he suggested would happen either through genetic engineering or by plugging "an electrode into the brain".
Occasionally I feel that Altman may be plugged into something that's even dumber and more under the radar than vanilla rationalism.
These people aren't looking for scientists, they're looking for alchemists
oh no not another cult. The Spiralists????
it's funny to me in a really terrible way that I have never heard of these people before, ever, when I already know about the Zizians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizians. and wasn't there another one in california that was, like, very straightforward about being an AI sci-fi cult, and they were kinda space themed? I think I've heard Rationalism described as a cult incubator, and that feels very apt considering how many spinoff basilisk cults have been popping up
some of their communities that somebody collated (I don't think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/
Third episode of Odium Symposium is out (that’s the podcast I cohost). We talk about Cato the Elder and his struggle against militant feminist action in the Roman Republic. You can listen to the episode at https://www.patreon.com/posts/crack-sucking-143019155 or through any of the sources on our website, www.odiumsymposium.com
Just so we all know, not liking AI slop code is xenophobic.
Definitely been seeing the pattern of "if you don't like AI, you are being x-phobic", where "x" is a marginalised group whose name the person is using as a cudgel. They probably never cared about this group before; what's important to this person is that they glaze AI over any sort of principle or ethics. Usually it's ableist, as basically any form of marginalisation/discrimination tends to be.
E: read the link. Lmao that’s… not xenophobia. What a piece of shit
Some ChatGPT user queries were leaked via the Google Search analytics of websites that ranked on the search result pages ChatGPT saw when searching: https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/
Or something like that. It's a little confusing.
I want to keep bots from scraping my content because I don't want to feed the slop machine.
You want to keep bots from scraping your content because you're afraid it's gonna learn how to take over the world.
We are not the same.
(edit: advance warning that clicking these links might cause eyestrain and trigger rage)
so for a while now the sheer outrageous, ludicrous nonsense of trumpist-era USA politics has been making a bit of an impact on the local ZA racists (and, weirdly, not only the white nationalists but also the black nationalists: some of it has shone through in EFF and BFLF propaganda strains), and I knew that with the orange godawful-king's ascension to his hoped-for throne it was only a matter of time before shit here escalated
anyway, it's happened. the same organisation also put up some ads along the main highway ahead of the G20 summit
(upside: some of those have already been pulled down. downside: the org put up some more. don't know what's happened with the latest yet)
fuck these people
One thing I've heard repeated about OpenAI is that "the engineers don't even know how it works!" and I'm wondering what the rebuttal to that point is.
While it is possible to write near-incomprehensible code and build an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since every part of the whole runs on deterministic machines. And yet I've heard this repeated at least twice (once on the Panic World pod, once on QAA).
I would believe that it's possible to build a system so complex and so poorly documented that on its surface it is incomprehensible, but the context in which the claim is made is not that of technical incompetence; rather, the claim is often hung as bait to draw one towards thinking that maybe we could bootstrap consciousness.
It seems like magical thinking to me, and a way of saying one or both of "we didn't write shit down and therefore have no idea how the functionality works" and "we do not practically have a way to determine how a specific output was arrived at from any given prompt." The first is probably unlikely, in part or in whole, since the system has to be comprehensible enough that new features can be added, so the engineers must grok it at least that well. The second is a side effect of not being able to observe all the actual inputs at the time a prompt was made (e.g. training data, user context, and system context can all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke ad slop).
Anybody else have thoughts on countering the magic "the engineers don't know how it works!"?
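The implicit-inputs framing above can be sketched as a toy function (everything here is invented for illustration; a real model's hidden state is vastly larger, but the observability problem is the same shape):

```python
import hashlib

# A toy stand-in for "output = f(prompt, implicit inputs)": the result is a
# pure function of the visible prompt PLUS hidden inputs (weights distilled
# from training data, vendor system context, a sampling seed). It is fully
# deterministic, yet the output alone can't tell you which input mattered.
def toy_llm(prompt: str, weights: bytes, system_context: str, seed: int) -> str:
    blob = prompt.encode() + weights + system_context.encode() + str(seed).encode()
    return hashlib.sha256(blob).hexdigest()[:8]

# Same visible prompt, same "weights", different hidden system context:
a = toy_llm("write a coke ad", b"w1", "be cheerful", 0)
b = toy_llm("write a coke ad", b"w1", "be edgy", 0)
# a and b differ, and nothing in either output reveals why.
```

The point of the sketch: "deterministic" and "explainable" are different properties. An outside observer who only sees the prompt and the output can't attribute the behavior to any particular hidden input.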
well, I can't counter it because I don't think they do know how it works. the theory is shallow yet the outputs of, say, an LLM are of remarkably high quality in an area (language) that is impossibly baroque. the lack of theory and fundamental understanding presents a huge problem for them because it means "improvements" can only come about by throwing money and conventional engineering at their systems. this is what I've heard from people in the field for at least ten years.
to me that also means it isn't something that needs to be countered. it's something the context of which needs to be explained. it's bad for the ai industry that they don't know what they're doing
EDIT: also, when i say the outputs are of high quality, what i mean is that they produce coherent and correct prose. im not suggesting anything about the utility of the outputs
"That's a lovely kernel you have there how about if we improve it a bit with some AI."
SoftBank Cashes Out Nvidia Stock (Archive)
I'm too jaded to get my hopes up, but wouldn't it be so nice if this were the beginning of the end for the AI bubble?
Either they're right, and the bubble pops soon, or they're too early and it's another hilarious Softbank L
my read of the situation is that it's another phenomenal softbank L even if they timed this sale at the top of the nvidia valuation. if they thought the bubble was popping soon, they would try to get out of their openai deals, but they're doing the opposite. most immediately, they need money to dump 20B-ish into openai by the end of the year, triggered by that nonprofit transition, and it's money that they apparently don't have. that their stock dropped like 15% in a week probably didn't help either
This. Masayoshi Son is selling furniture to YOLO more money into OpenAI.
this has increased the comedy
if you think it's stupid, it's not as stupid as it sounds, it's worse. they also sold some t-mobile stock and took on debt backed by their ownership of arm. it's like they instinctively get rid of the pieces of the ai bubble that retain some value and hold on to the pieces that are black holes
iPhone Pocket: Inspired by the concept of “a piece of cloth”
Not satire I superpromise
ah, and they’ve got a community feedback forum post, where it isn’t going the way they might have expected: https://connect.mozilla.org/t5/discussions/building-ai-the-firefox-way-shaping-what-s-next-together/td-p/109922
new zitron: ed picks up a calculator and goes through docs from microsoft and some others, and concludes that openai has less revenue than previously thought (probably? neither ms nor openai would comment), spends more on inference than previously thought, and that openai revenue inferred from microsoft's share is consistently well under inference costs https://www.wheresyoured.at/oai_docs/
Before publishing, I discussed the data with a Financial Times reporter. Microsoft and OpenAI both declined to comment to the FT.
If you ever want to share something with me in confidence, my signal is ezitron.76, and I’d love to hear from you.
also on ft (alphaville) https://www.ft.com/content/fce77ba4-6231-4920-9e99-693a6c38e7d5
ed notes that there might be other revenue, but the docs only cover inference on azure; and then there are still training costs wherever those are filed, plus debts, commitments, salaries, marketing, and so on and so on
e: fast news day today eh?
Pavan Davuluri is apparently the “president of windows and devices” at Microsoft. I, for one, am glad that I moved to linux when windows 10 got the axe, before anything tried to agenticify my pc.
Also, when did “frontier” come to mean “first in line to drink whatever it is the cult leader is serving up”?
https://xcancel.com/pavandavuluri/status/1987942909635854336#m

alt text
Windows is evolving into an agentic OS, connecting devices, cloud, and AI to unlock intelligent productivity and secure work anywhere. Join us at #MSIgnite to see how frontier firms are transforming with Windows and what’s next for the platform. We can’t wait to show you!
"Agentic" is meant to seem sci-fi, but I can't help but think it's terminal business-speak. It's the clearest statement yet of the attempted redesign of the computer from a personal device to a distinct entity separate from oneself. One is no longer a user or administrator, one is instead passively waiting for "agents" to complete a task on one's behalf. This model is imposed from the top down, to be the strongest reinforcement yet of the all-important moat around the big vendors' cloud businesses. Once you're in deep with "agents," your workflows will probably be so hopelessly tangled, vendor-specific, and non-debuggable/non-reimplementable that migrating them to another vendor would be a nightmare task orders of magnitude beyond any database or CRM migration. If your workflows even get any work done anymore at all.
It's worth noting how much the whole "agentic" marketing scheme is the opposite of this reality, too. Because after all, the dream they're selling is being able to do the Star Trek thing and just tell your computer what to do in plain English. But if that were what these companies were actually doing, it would be very easy to migrate away if you wanted to, since you could just say "send me all our data in a format that $Competitor can easily onboard. I'm done with this shit" and then give the competitor's system the same plain English prompt. The reality is that they don't actually want to build the thing they're advertising, even if they could, because their whole business model is to make interacting with the computer as high-friction as possible so you'll pay them to do it for you.