this post was submitted on 08 Jun 2025
787 points (95.8% liked)
Technology
you are viewing a single comment's thread
Did you even read this garbage? It’s just words strung together without any meaning. The things it’s claiming show a fundamental lack of understanding of what it is responding to.
This didn’t prove your point at all, quite the opposite. And it wasted everyone’s time in the process. Good job, this was worthless.
I did, and that was because it didn't have the previous context. But it did find the fallacies that were present. Logic is literally what a chat AI is doing. A human still needs to review the output, but it did what it was asked. I don't know AI programming well, but I can say that logic is algorithmic. An AI has no problem parsing an argument and finding the fallacies. It's a tool like any other.
That was a roundabout way of admitting you have no idea what logic is or how LLMs work. Logic works with propositions regardless of their literal meaning; LLMs operate with textual tokens irrespective of their formal logical relations. The chatbot doesn't actually do the logical operations behind the scenes; it only produces text output that looks like the operations were done (because it was trained on a lot of existing text that reflects logical operations in its content).
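To make the difference concrete, here's a deliberately tiny toy in Python (my own illustration, nothing like a production LLM, but the same flavor of objective: predict the next token from preceding tokens). It learns which token tends to follow which and will happily continue a "therefore ..." on statistical grounds alone, with no notion of whether the conclusion follows:

```python
import random
from collections import defaultdict

# Toy "language model": a table of which token follows which in the
# training text. It has no representation of propositions or inference;
# it only reproduces surface statistics.
corpus = (
    "all men are mortal . socrates is a man . "
    "therefore socrates is mortal . "
    "all birds can fly . a penguin is a bird . "
    "therefore a penguin can fly ."
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(token, n=6, seed=0):
    """Greedily sample a continuation, one token at a time."""
    rng = random.Random(seed)
    out = [token]
    for _ in range(n):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(continue_text("therefore"))
```

The output *looks* like a conclusion because the training text contained conclusions, not because anything was deduced; note the corpus itself contains an unsound "therefore a penguin can fly". Real LLMs are enormously more sophisticated, but the training objective is still next-token prediction over text.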
That's why I said I wasn't sure how AI works behind the scenes. But I do know that logic isn't difficult. Just so we don't fuck around with each other: I have a CS background. I'm only saying this because I think you may have one as well, and we can save some time.
It makes sense to me that logic is something AI can parse easily. Logic, in my mind, is very easy if it can tokenize some text. Wouldn't the difficulty be whether the AI has the right context?
Logic requires abstracting the argumentative form from the literal linguistic content and then generalising it, just like how math is done properly when you work with numbers rather than with sentences such as "two apples and three apples is five apples". Such abstraction allows far more powerful and widely applicable operations than dealing with individual linguistic expressions; if you've ever solved very complex truth trees, you'll know how they enable streamlined solutions that would be practically impossible if you had only the ordinary linguistic expression of the same problem. Logic doesn't operate with textual tokens but with logical propositions and operators. "Difficulty" is not a meaningful term here; a tool is either technically capable of doing something (more or less successfully) or it isn't.
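For contrast with the toy above, here's what actually *doing* the logic looks like: a minimal Python sketch (function and variable names are my own) that checks an argument form by brute-force truth-table enumeration, the exhaustive cousin of the truth trees mentioned above. It operates purely on propositions and operators, so it gives the same verdict no matter what P and Q stand for:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument form is valid iff no assignment of truth values
    makes every premise true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample row
    return True

# Modus ponens: P -> Q, P, therefore Q (valid).
modus_ponens = is_valid(
    premises=[lambda e: (not e["P"]) or e["Q"], lambda e: e["P"]],
    conclusion=lambda e: e["Q"],
    variables=["P", "Q"],
)

# Affirming the consequent: P -> Q, Q, therefore P (a fallacy).
affirming_consequent = is_valid(
    premises=[lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]],
    conclusion=lambda e: e["P"],
    variables=["P", "Q"],
)

print(modus_ponens)          # True
print(affirming_consequent)  # False
```

The point is that validity here is decided by exhaustively checking the form, not by pattern-matching on wording; a chatbot that merely generates fallacy-shaped text is doing nothing of this kind.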
That LLMs aren't capable of this sort of precision and abstraction is shown by the OP link, as well as by the simple fact that chatbots used to be extremely bad at math (which is now probably patched up by adding a proper math module rather than relying on the base LLM; my assumption, at least).
As for trying more examples of looking for logical fallacies, I tried three different types of text. Since you say context is important, it's best to take only the beginning of a text. One text I tried was the opening of the Wikipedia article on "history", which ChatGPT described like this: "The passage you've provided is an informative and largely neutral overview of the academic discipline of history. It doesn't make any strong arguments or persuasive claims, which are typically where logical fallacies appear." It then went on to nitpick some details "for the sake of thorough analysis", but basically had no real complaints. Then I tried the opening paragraph of Moby-Dick. That's a fictional text, so it would have been reasonable to decline to analyse its logical soundness, as GPT already did with the WP article, but it still tried to wring out some "criticism" that occasionally shows how it misunderstands the text (just as it misunderstood a part of my comment above). Finally, I asked it to find the fallacies in the first four paragraphs of Descartes' Meditations on First Philosophy, which resulted in a criticism based on less logically rigorous principles than the original text (accusing Descartes of the "slippery slope fallacy").
I'll post the full replies below.
This famous passage from the opening of Moby-Dick by Herman Melville is rich in literary style but not necessarily constructed as an argument, which makes identifying formal logical fallacies a nuanced task. That said, we can identify a few rhetorical or informal fallacies—not because the text is poorly reasoned, but because it makes some sweeping or questionable assumptions in a stylistically persuasive way. Here's a breakdown:
1. Hasty Generalization
2. False Cause (Post Hoc)
3. Appeal to Emotion
4. False Analogy
Summary:
Would you like me to rewrite the passage without the fallacies, or is this for literary analysis?
Thank you! This passage is again drawn from René Descartes' Meditations on First Philosophy, and it outlines his method of radical doubt as a means to establish certainty in knowledge. While the reasoning is carefully crafted and influential in the history of philosophy, it still contains a few logical fallacies or reasoning weaknesses that can be critically assessed.
🔍 Identified Logical Fallacies and Weak Reasoning:
1. Hasty Generalization
2. False Analogy
3. Slippery Slope (Metaphorical)
4. Begging the Question (Circular Reasoning)
5. Inconsistency (Recognized, Not Fully Resolved)
✅ Summary Table of Fallacies
Conclusion:
Descartes' passage is methodologically rich, but from a logic and reasoning perspective, it contains:
Would you like a version of this reasoning revised to be more logically sound, or would you like help analyzing the next part of the Meditations?