this post was submitted on 18 Feb 2026
583 points (97.2% liked)

Fuck AI

[–] TheSeveralJourneysOfReemus@lemmy.world 0 points 16 hours ago (1 children)

Where do we begin? It's a lot of words to say that GPT can summarise the text for you. Not only that, you'd be required to trust that summary; otherwise there wouldn't be any AI use at all.

Summary? That's the wrong word. A summary is a reasoned synopsis made with intent. AI just generates a whole new text using the original as a prompt. It's not a summary of anything in particular; it's a new document.

You can, instead, learn to search properly: use trusted sources and run keyword searches per trusted source. Take note of the links and the site abstracts.

Check the authors of the articles you read, make sure that they're real people.

Ethics in research are not replaceable by AI. Sooner or later you'll get there.

[–] FauxLiving@lemmy.world 1 points 10 hours ago

You're arguing against the use of AI to do actual research. I agree with you that using AI to do research is wrong. I'm not sure where you got any other idea.

My entire point, the statement that I was responding to, was a claim that LLMs hallucinate sources. That's only true of naive uses of LLMs: if you just ask a model to recite a fact, it will hallucinate much of the time. This is why they are used in RAG systems, where the citations are tracked through regular software, because every AI researcher knows that LLMs hallucinate. That hasn't been new information for 5+ years now.

Systems that do RAG search summarizations, as in my example, both increase the accuracy of the response (by inserting the source documents into the context window) and avoid relying on LLMs to handle citations.
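To make the point concrete, here is a minimal sketch of that pattern. Everything in it is invented for illustration (the toy corpus, the keyword-overlap scoring, and the fact that no model is actually called); what it shows is that retrieval and citation tracking happen in ordinary code, so the citations can't be hallucinated even if the model's summary is imperfect.

```python
# Sketch of the RAG pattern: retrieval and citation tracking happen in
# regular software; an LLM would only summarise text that this code has
# already placed in its context window. Corpus and scoring are toys.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(
        corpus.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )[:k]

def answer(query, corpus):
    hits = retrieve(query, corpus)
    # The retrieved source documents go into the prompt (context window)...
    prompt = query + "\n\n" + "\n".join(text for _, text in hits)
    # ...but the citations are attached here, by code, not generated
    # by the model, so they cannot be made up.
    citations = [doc_id for doc_id, _ in hits]
    return prompt, citations

corpus = {
    "doc-a": "LLMs hallucinate facts when asked to recite from memory.",
    "doc-b": "Grounding a model on retrieved text improves accuracy.",
    "doc-c": "Unrelated note about gardening.",
}
prompt, cites = answer("why do LLMs hallucinate facts", corpus)
print(cites)  # → ['doc-a', 'doc-b']
```

A real system would use embedding search instead of keyword overlap and would pass `prompt` to a model, but the citation bookkeeping would look the same.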

It's one thing to hate the damage that billionaires are doing to the world in order to chase some pipedream about AI being the holy grail of technology. I'm with you there, fuck AI.

It's a whole other thing to pretend that machine learning is worthless or incapable of being a good tool in the right situations. You've been relying on machine learning tools for a long time. You say "learn to search properly", but the search results you receive are built on successors of the PageRank algorithm that made Google.

The only reason that AI is even on your radar (assuming you're not in academia) is that a bunch of rich assholes are exploiting people's amazement at this new technology, selling impossible dreams to cash in on others' ignorance. Those people are scammers with MBAs, but their scam doesn't change the usefulness of the underlying technology of Transformer neural networks or machine learning in general.

Fighting against 'AI' is pointless if your target is LLMs and not billionaires.