[–] FauxLiving@lemmy.world -5 points 1 day ago (2 children)

Yeah, if you already know where you're going, then sure, add it to Dashy or make a bookmark in your browser.

But if you're going to search for something anyway, then why would you use regular search and skim the tiny amount of random text that gets returned with Google's results? In the same amount of time, you could dump the entire contents of the pages into an LLM's context window and have it tailor the response to your question based on the text.

You still have to actually click on some links to get to the real information, but a summary generated from the contents of the results is more likely to be relevant than the snippet text shown on Google's results page. In both cases you still have a list of links, generated by a search engine and not AI, which are responsive to your query.

Where do we begin? It's a lot of words to say that GPT can summarise the text for you. Not only that, you're required to trust that summary, otherwise there's no point in using the AI at all.

Summary? That's the wrong word. A summary is a reasoned synopsis made with intent. AI just generates a whole new text using the original as a prompt. It's not a summary of anything in particular; it's a new document.

You can, instead, learn to search properly: use trusted sources, and run keyword searches per trusted source. Take note of the links and the site abstracts.

Check the authors of the articles you read and make sure they're real people.

Ethics in research can't be replaced by AI. Sooner or later you'll get there.

[–] jackr@lemmy.dbzer0.com 1 points 8 hours ago (1 children)

see, the problem is that I am not going to be reading that text, because I know it is unreliable and AI text makes my eyes glaze over, so I will be clicking on all those links anyway until I find something that is reliable. On a search engine I can just click through every link, or refine my search with something like site:reddit.com, site:wikipedia.org, filetype:pdf, or similar. With a chatbot, I need to write out the entire question, look at the four or so links it provided, and then reprompt it if it doesn't contain what I'm looking for. I also get a limited number of searches per day because I am not paying for a chatbot subscription. This is completely pointless to me.

[–] FauxLiving@lemmy.world 1 points 8 hours ago (1 children)

I'm not sure by what standard you're calling it unreliable.

You can see in the example I provided that it correctly answered the question and correctly cited where the answer came from, in the same amount of time it would take to type the query into Google.

Yes, LLMs by themselves can hallucinate, and they do so at a high enough rate that they're unreliable sources of information. That is 100% true. It will never be fixed, because LLMs are trained as autocomplete: to produce syntactically correct language. You should never depend on raw LLM-generated text from an empty context, like from a chatbot.

The study of this in academia (example: https://arxiv.org/html/2312.10997v5) has found that LLM hallucination rate can be dropped to almost nothing (less than a human) if the model is given text containing the information it is being asked about. So, if you paste a document into the chat and ask a question about that document, the hallucination rate drops significantly.

This finding led to a technique called Retrieval-Augmented Generation (RAG): you use some non-AI means of finding data, like a search engine, and then put the retrieved documents into the context window along with the question. That way you can build systems that use LLMs for the tasks they're accurate and fast at (like summarizing text that is already in the context window) and non-AI tools for the parts that require accuracy (like searching databases for facts and tracking citations). A rough sketch of the idea is below.
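Roughly, the pipeline looks something like this. It's only a minimal illustration of the idea; web_search and llm_complete are made-up placeholder names standing in for whatever search backend and LLM client you actually use, not anything from the linked paper.

```python
# Minimal RAG sketch: retrieve with a plain search engine, then let the LLM
# answer only from the retrieved text. Placeholder functions, not a real API.

def web_search(query: str, limit: int = 5) -> list[dict]:
    """Placeholder: return results as {'url': ..., 'text': ...} dicts."""
    raise NotImplementedError("plug in any non-AI search backend here")

def llm_complete(prompt: str) -> str:
    """Placeholder: send the prompt to whatever LLM you use, return its reply."""
    raise NotImplementedError("plug in an LLM client here")

def answer_with_sources(question: str) -> str:
    # 1. Retrieval: a search engine (not the LLM) picks the documents.
    results = web_search(question)

    # 2. Augmentation: the retrieved text goes into the context window,
    #    so the model summarizes supplied text instead of recalling facts.
    context = "\n\n".join(
        f"[{i + 1}] {r['url']}\n{r['text']}" for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only the numbered sources below, "
        "and cite the source numbers you used.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

    # 3. Generation: the LLM writes the answer; the links it cites still come
    #    from the search engine, not from the model.
    return llm_complete(prompt)
```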

You can see in the images I posted that it both answered the question and correctly cited the source, which was the entire point of contention.

[–] jackr@lemmy.dbzer0.com 1 points 3 hours ago

The study of this in academia

you are linking to an arxiv preprint. I do not know these researchers. there is nothing that indicates to me that this source is any more credible than a blog post.

has found that LLM hallucination rate can be dropped to almost nothing

where? It doesn't seem to be in this preprint, which is mostly a history of RAG and mentions hallucinations only as a problem affecting certain types of RAG more than other types. It makes some relative claims about accuracy that suggest including irrelevant data might make models more accurate. It doesn't mention anything about “hallucination rate being dropped to almost nothing”.

(less than a human)

you know what has a 0% hallucination rate about the contents of a text? the text

You can see in the images I posted that it both answered the question and correctly cited the source, which was the entire point of contention.

this is anecdotal evidence, and also not the only point of contention. Another point was, for example, that AI text is horrible to read. I don't think RAG (or any other tacked-on tool they've been trying for the past few years) fixes that.