Looks like he added a notice / disclaimer at the top last night? The talk page has some quality sneers - https://en.wikipedia.org/wiki/Wikipedia_talk:Wikipedia_Signpost/2026-01-15/Special_report
nfultz
Wikipedia at 25: A Wake-Up Call h/t metafilter
It's a good read overall; makes some good points about the Global South.
The hostility to AI tools within parts of our community is understandable. But it's also strategic malpractice. We've seen this movie before, with Wikipedia itself. Institutions that tried to ban or resist Wikipedia lost years they could have spent learning to work with it. By the time they adapted, the world had moved on.
AI isn't going away. The question isn't whether to engage. It's whether we'll shape how our content is used or be shaped by others' decisions.
Short of Wikipedia shipping its own chatbot that proactively pulls in edits and funnels traffic back, I think the ship has sailed. But it's not unique; the same thing is happening to basically everything with a CC license, including SO and FOSS writ large. Maybe the right thing to do is put new articles under AGPL or something, a new license that taints an entire LLM at train time.
EDIT
I mean, props for at least self-hosting in a home lab instead of inventing Gas Town. But all the annoying parts of software (i.e. DevOps, mobile development, etc.), that's all self-inflicted, and we could fix the foundations or build better ones instead of hoping an LLM can stack things on top of something inherently shaky.
I’ll be brutally honest about that question: I think that if “they might train on my code / build a derived version with an LLM” is enough to drive you away from open source, your open source values are distinct enough from mine that I’m not ready to invest significantly in keeping you. I’ll put that effort into welcoming the newcomers instead.
No he won't.
I’ve found myself affected by this for open source dependencies too. The other day I wanted to parse a cron expression in some Go code. Usually I’d go looking for an existing library for cron expression parsing—but this time I hardly thought about that for a second before prompting one (complete with extensive tests) into existence instead.
He /knows/ about pcre but would rather prompt instead. And I'm pretty sure this was already answered on Stack Overflow before 2014.
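For reference, the "existing library" route is about five lines of Go. A minimal sketch, assuming github.com/robfig/cron (one parser among several, not necessarily the one he'd have reached for):

    package main

    import (
        "fmt"
        "time"

        "github.com/robfig/cron/v3" // assumed dependency; other cron parsers exist
    )

    func main() {
        // Parse a standard 5-field cron expression.
        sched, err := cron.ParseStandard("*/15 9-17 * * MON-FRI")
        if err != nil {
            panic(err)
        }
        // Next scheduled time after now.
        fmt.Println(sched.Next(time.Now()))
    }

And the library's edge cases have already been hit by other people's production crontabs, which is kind of the whole point.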
That one was a deliberately provocative question, because for a new HTML5 parsing library that passes 9,200 tests you would need a very good reason to hire an expert team for two months (at a cost of hundreds of thousands of dollars) to write such a thing. And honestly, thanks to the existing conformance suites this kind of library is simple enough that you may find their results weren’t notably better than the one written by the coding agent.
He didn't write a new library from scratch; he ported one from Python. I could easily hire two undergrads to change some tabs to curlies, pay them in beer, and yes, I think it /would/ be better, because at least they would have learned something.
Roko's basi-list
From a new white paper, "Financing the AI boom: from cash flows to debt" (h/t The Syllabus' Hidden Gem of the Week):
The long-term viability of the AI investment surge depends on meeting the high expectations embedded in those investments, with a disconnect between debt pricing and equity valuations. Failure to meet expectations could result in sharp corrections in both equity and debt markets. As shown in Graph 3.C, the loan spreads charged on private credit loans to AI firms are close to those charged to non-AI firms. If loan spreads reflect the risk of the underlying investment, this pattern suggests that lenders judge AI-related loans to be as risky as the average loan to any private credit borrower. This stands in stark contrast to the high equity valuations of AI companies, which imply outsized future returns. This schism suggests that either lenders may be underestimating the risks of AI investments (just as their exposures are growing significantly) or equity markets may be overestimating the future cash flows AI could generate.
¿Por qué no los dos? But maybe the lenders are expecting a bailout... or are just gullible...
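For the spread-to-risk step they're leaning on, the back-of-envelope is the standard first-order credit-pricing identity (illustrative numbers mine, not from the paper):

    % the spread compensates expected loss
    % s: loan spread, PD: annual default probability, LGD: loss given default
    s \approx PD \times LGD
    % e.g. s = 300 bp, LGD = 40% => PD \approx 0.030 / 0.40 = 7.5% per year

So roughly equal spreads read as roughly equal expected losses, which is hard to square with equity prices treating the same borrowers as anything but average.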
That said, to put the macroeconomic consequences into perspective, the rise in AI-related investment is not particularly large by historical standards (Graph 4.A). For example, at around 1% of US GDP, it is similar in size to the US shale boom of the mid-2010s and half as large as the rise in IT investment during the dot-com boom of the 1990s. The commercial property and mining investment booms experienced in Japan and Australia during the 1980s and 2010s, respectively, were over five times as large relative to GDP.
Interesting point, if AI is basically a rounding error for GDP... But I also remember the layoffs in 2000-01 and 2014-15; they weren't evenly distributed, and a lot of people got left behind, even if it wasn't as bad as '08.
https://www.linkedin.com/posts/coquinn_generativeai-gartner-ibm-activity-7415515266849124352-W2n5
I’ve finally cracked how Gartner’s “Features” axis works.
It’s not latency.
It’s not context windows.
It’s definitely not “can this thing form a coherent thought.”
It’s Enterprise Friction™.
By that metric, Gartner has ranked IBM—a company whose flagship product is currently “billable hours in a trench coat”—ahead of Anthropic, the people who actually build the models IBM is desperately trying to resell with a logo swap.
Ranking IBM over Anthropic in 2025 is like ranking a library card catalog over Google Search because the library has better governance, stronger controls, and more shelves you can lock.
Anthropic is building the frontier.
IBM is building a PowerPoint about the frontier that requires a three-year commit, seven steering committees, and a ceremonial blood sacrifice to Red Hat.
Gartner analysts: blink twice if the blue suits are in the room with you.
nice find there:
A progressive campaign, "The Great Slate", was successful in raising funds for candidates in part by asking for contributions from tech workers in return for not posting similar quotes by Raymond. Matasano Security employee and Great Slate fundraiser Thomas Ptacek said, "I've been torturing Twitter with lurid Eric S. Raymond quotes for years. Every time I do, 20 people beg me to stop." It is estimated that, as of March 2018, over $30,000 has been raised in this way.[32]
Oh I saw that name before - https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-article-on-ai/
Since someone linked to Bergstrom above, I wanted to mention his Marschak Colloquium talk from last year - https://www.youtube.com/watch?v=nxn40xiK9g0 - basically the idea is that we are all "information foragers," but the "information environment" has shifted radically around us all in a really short amount of time. Under "information abundance" the right strategy is to visit many more different sites instead of just a few, if the model / analogy works for people about as well as it does for anteaters. If the vibes are off, move on to the next tab; it will broaden your worldview too.
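The foraging analogy has a classic formalization, the marginal value theorem: leave a patch once your marginal gain falls to the environment-wide average rate, which means cheap travel between patches (tabs) implies leaving each one sooner. A toy numeric sketch; the gain curve and parameters are mine, not Bergstrom's:

    package main

    import (
        "fmt"
        "math"
    )

    // Diminishing returns inside a patch: g(t) = G * (1 - exp(-r*t)).
    // A forager picks the residence time t that maximizes the long-run
    // rate g(t) / (t + travel), where travel is the cost of moving on.
    func bestResidence(G, r, travel float64) (tBest, rateBest float64) {
        for t := 0.01; t <= 60; t += 0.01 { // brute-force grid search
            rate := G * (1 - math.Exp(-r*t)) / (t + travel)
            if rate > rateBest {
                tBest, rateBest = t, rate
            }
        }
        return
    }

    func main() {
        // As travel cost falls (abundance), optimal time per patch shrinks.
        for _, travel := range []float64{10, 1, 0.1} {
            t, _ := bestResidence(1, 0.5, travel)
            fmt.Printf("travel=%5.1f  optimal time in patch=%5.2f\n", travel, t)
        }
    }

The point being: skimming more sources isn't laziness, it's the rate-optimal move once the next patch is one tab away.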
PSA - https://consumer.drop.privacy.ca.gov/ - CA residents can now request data deletion from many adtech data brokers.
This is a lot more common than you'd think; there are several posts about abuse like this over at the Academia Stack Exchange. If you think he used your writing, you could file a copyright claim, since you are the author, not him. Do not waste your time with HR or honor committees; they will not do anything for you. Their job is to cover the university's ass, not to help you. I honestly can't think of a case where going public led to anything more than a footnote on the person's Wikipedia page, although it might be good for warning the incoming cohort of students.
If you're really sure about finishing your PhD: it's probably pretty hard to transfer to a new school without LoRs, a strong publication record, or your own grant, but you might be able to switch departments if they're close enough, e.g. math <=> stats <=> CS. They might make you do comps / quals again, though. But there are pretty steep diminishing returns to years 4+ of a PhD, honestly, and I can assure you there are assholes everywhere. Deans will yell at you too, and I've heard of a couple of dept chairs who throw staplers. The tenure track does not incentivise not-being-an-asshole, at all; it is a rigidly hierarchical system with an accompanying worldview, at least in the R1s anyway.