Error rates that you simultaneously haven't defined and have declared too high to be usable.
These tools clearly work, much like a search engine clearly works. They have errors (try finding clean search results), but we use them anyway.
You could make the same argument about search. If you issued a query to Google and compared the machine-learning-generated results against a human who read the entire Internet specifically to answer your query, you would probably find that, in the end (after a few decades), the human's results were more responsive to your query, while the Google results start to turn into random nonsense by page 3 or 4.
By any measure the Google results are worse than what a human would choose. This is why you have to 'learn' to search and to phrase queries in a specific way; otherwise you get errors and bad results.
The problem with the accurate human results is that all of the people on the planet, working full-time 365 days a year, could not service a single minute's worth of the queries that Google's machine learning algorithms serve up 24/7.
Could you read 3 books and find the answer you want? Or craft some regular-expression search to find it? Sure, but you can't do it faster than it takes to run a RAG search and run inference over 10 million tokens' worth of text.
The whole point of search is that looking through every document every time you want to find something is a waste of effort; summarization lets you survey larger volumes of data more accurately and home in on what you're looking for. You never trust the output of the model, just as you don't cite Google's results page or Wikipedia: they exist to point you to information, not to provide it. A RAG system gives you citations for the retrieved data, so once the summary indicates it has found what you're looking for, you can read the source for yourself.
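To make that retrieve-then-verify loop concrete, here is a minimal sketch. The corpus, the word-overlap scoring, and the `summarize_with_llm` placeholder are all assumptions for illustration, not any particular vendor's implementation; a real system would use a vector index and an actual model call, but the shape is the same: retrieve, summarize with citations, then read the cited source yourself.

```python
# Minimal RAG-style sketch: retrieve candidate passages, summarize them,
# and return citations so a human can verify the claim against the source.
# The corpus, scoring, and summarizer below are illustrative placeholders.

from dataclasses import dataclass


@dataclass
class Passage:
    doc_id: str   # citation target (e.g. a URL or PMC ID)
    text: str


# Hypothetical corpus standing in for an indexed document store.
CORPUS = [
    Passage("doc:guidelines-2024", "Medication reconciliation reduces dosing errors."),
    Passage("doc:trial-summary", "The trial reported a 12% reduction in readmissions."),
    Passage("doc:unrelated", "The cafeteria menu changes every Tuesday."),
]


def retrieve(query: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Toy retrieval: rank passages by word overlap with the query.
    A real system would use embeddings and a vector index instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def summarize_with_llm(query: str, passages: list[Passage]) -> str:
    """Placeholder for the LLM call. Here we just stitch the passages together;
    the important part is that the output is never trusted on its own."""
    joined = " ".join(p.text for p in passages)
    return f"Draft answer to '{query}': {joined}"


def answer(query: str) -> dict:
    hits = retrieve(query, CORPUS)
    return {
        "summary": summarize_with_llm(query, hits),  # starting point, not the truth
        "citations": [p.doc_id for p in hits],       # what the human actually checks
    }


if __name__ == "__main__":
    result = answer("does reconciliation reduce medication errors")
    print(result["summary"])
    print("Verify against:", result["citations"])
```

The point of the sketch is the last line: the summary is only a pointer, and the citations are what you actually go read.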
Yes.
Here is a peer-reviewed article published in Nature Medicine: https://pmc.ncbi.nlm.nih.gov/articles/PMC11479659/
The relevant section from the abstract:
Another peer-reviewed article, published in npj Digital Medicine: https://www.nature.com/articles/s41746-025-01670-7
"Novel" is given as a human-scale unit of text because you may not know what 10 million tokens means in terms of actual length. I'm clearly not talking about fictional novels read for entertainment.
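For a rough sense of scale, here is a back-of-the-envelope conversion. Both ratios are commonly cited rules of thumb rather than measurements: roughly 1.3 tokens per English word for typical tokenizers, and roughly 90,000 words per novel.

```python
# Rough back-of-the-envelope conversion from tokens to "novels".
# The ratios below are rules of thumb, not exact figures:
# ~1.3 tokens per English word for typical BPE tokenizers,
# ~90,000 words for a typical novel.

TOKENS_PER_WORD = 1.3
WORDS_PER_NOVEL = 90_000


def tokens_to_novels(tokens: int) -> float:
    words = tokens / TOKENS_PER_WORD
    return words / WORDS_PER_NOVEL


if __name__ == "__main__":
    print(f"10 million tokens ~ {tokens_to_novels(10_000_000):.0f} novels of text")
    # prints roughly 85 novels under these assumptions
```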
https://lemmy.world/post/43275879/22220800
This is an example of a commercial tool that returns both citations produced without an LLM and an accurate summary of the article's contents as they relate to the question.