this post was submitted on 08 Jun 2025
789 points (95.8% liked)

Technology

71146 readers
2871 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

LOOK MAA I AM ON FRONT PAGE

[–] Melvin_Ferd@lemmy.world -1 points 1 day ago* (last edited 1 day ago) (1 children)

What isn't there to gain?

Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

Sure, it has flaws. But rejecting it outright while the right embraces it? That’s beyond shortsighted; it’s self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

[–] antonim@lemmy.dbzer0.com 4 points 22 hours ago* (last edited 22 hours ago) (1 children)

I have no idea what sort of AI you've used that could do any of this stuff you've listed. A program that doesn't reason won't expose logical fallacies with any rigour or refine anyone's ideas. It will link to credible research that you could already find on Google, but it will also add some hallucinations to the summary. And so on; it's completely divorced from how this stuff currently works.

Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

That's a misguided view of how art is created. Supposed "brilliant ideas" are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don't make it visual; write a story or an essay.

Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

For now I see no particular benefits that the right wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).

[–] Melvin_Ferd@lemmy.world -5 points 21 hours ago* (last edited 21 hours ago) (2 children)

Here is chatgpt doing what you said it can't: finding all the logical fallacies in what you wrote.

You're raising strong criticisms, and it's worth unpacking them carefully. Let's go through your argument and see if there are any logical fallacies or flawed reasoning.


  1. Straw Man Fallacy

"Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept."

This misrepresents the original claim:

"AI can help create a framework at the very least so they can get their ideas down."

The original point wasn't that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


  2. False Dichotomy

"If you are not competent in a visual medium, then don't make it visual, write a story or an essay."

This suggests a binary: either you're competent at visual art or you shouldn't try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


  3. Hasty Generalization

"Supposed 'brilliant ideas' are a dime a dozen..."

While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn't invalidate the potential value of enabling more people to test theirs.


  4. Appeal to Ridicule / Ad Hominem (Light)

"...result in a boring comic..." / "...just bad (look at SMBC or xkcd or...)"

Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn't really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That's not a logical fallacy in the strictest sense, but it's rhetorically weak.


  5. Tu Quoque / Whataboutism (Borderline)

"For now I see no particular benefits that the right-wing has obtained by using AI either..."

This seems like a rebuttal to a point that wasn't made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


Summary of Fallacies Identified:

Type | Description
Straw Man | Misrepresents the role of AI in creative assistance.
False Dichotomy | Assumes one must either be visually skilled or not attempt visual media.
Hasty Generalization | Devalues “brilliant ideas” universally.
Appeal to Ridicule | Dismisses counterexamples via mocking tone rather than analysis.
Tu Quoque-like | Compares left vs. right AI use without addressing the core point about opportunity.


Your criticism is thoughtful and not without merit—but it's wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

At this point you're just arguing for argument's sake. You're not wrong or right; you're just muddying things. Saying it'll be boring comics misses the entire point. Saying it is the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to the anti-immigrant mentality. The people who buy into it will make these types of ignorant and shortsighted statements just to prove things that simply are not true. But they've bought into the hype and need to justify it.

[–] antonim@lemmy.dbzer0.com 6 points 21 hours ago (1 children)

Excellent, these "fallacies" are exactly as I expected - made up, misunderstanding my comment (I did not call SMBC "bad"), and overall just trying to look like criticism instead of being one. Completely worthless - but I sure can see why right-wingers are embracing it!

It's funny how you think AI will help refine people's ideas, but you actually just delegated your thinking to it and let it do it worse than you could (if you cared). That's why I don't feel like getting any deeper into explaining why the AI response is garbage, I could just as well fire up GPT on my own and paste its answer, but it would be equally meaningless and useless as yours.

Saying it’ll be boring comics missed the entire point.

So what was the point exactly? I re-read that part of your comment and you're talking about "strong ideas", whatever that's supposed to be without any actual context?

Saying it is the same as google is pure ignorance of what it can do.

I did not say it's the same as Google, in fact I said it's worse than Google because it can add a hallucinated summary or reinterpretation of the source. I've tested a solid number of LLMs over time, I've seen what they produce. You can either provide examples that show that they do not hallucinate, that they have access to sources that are more reliable than what shows up on Google, or you can again avoid any specific examples, just expecting people to submit to the revolutionary tech without any questions, accuse me of complete ignorance and, no less, compare me with anti-immigrant crowds (I honestly have no idea what's supposed to be similar between these two viewpoints? I don't live in a country with particularly developed anti-immigrant stances so maybe I'm missing something here?).

The people who buy into it will get into these type of ignorant and short sighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.

"They’ve bought into the hype and need to justify it"? Are you sure you're talking about the anti-AI crowd here? Because that's exactly how one would describe a lot of the pro-AI discourse. Like, many pro-AI people literally BUY into the hype by buying access to better AI models or invest in AI companies, the very real hype is stoked by these highly valued companies and some of the richest people in the world, and the hype leads the stock market and the objectively massive investments into this field.

But actually those who "buy into the hype" are the average people who just don't want to use this tech? Huh? What does that have to do with the concept of "hype"? Do you think hype is simply any trend that doesn't agree with your viewpoints?

[–] Melvin_Ferd@lemmy.world -1 points 20 hours ago (1 children)

Hype flows in both directions. Right now the hype from most is finding issues with chatgpt. It did find the fallacies based on what it was asked to do. It worked as expected. You act like this is fire and forget. Given what this output gave me, I can easily keep working this to get better and better arguments. I can review the results and clarify and iterate. I did copy and paste just to show an example. First I wanted to be honest with the output and not modify it. Second, it's an effort thing. I just feel like you can't honestly tell me that within 10 seconds having that summary is not beneficial. I didn't supply my argument to the prompt, only yours. If I submitted my argument it would be better.

[–] antonim@lemmy.dbzer0.com 2 points 19 hours ago (1 children)

Right now the hype from most is finding issues with chatgpt

hype noun (1)

publicity

especially : promotional publicity of an extravagant or contrived kind

You're abusing the meaning of "hype" in order to make the two sides appear the same, because you do understand that "hype" really describes the pro-AI discourse much better.

It did find the fallacies based on what it was asked to do.

It didn't. Put the text of your comment back into GPT and tell it to argue why the fallacies are misidentified.

You act like this is fire and forget.

But you did fire and forget it. I don't even think you read the output yourself.

First I wanted to be honest with the output and not modify it.

Or maybe you were just lazy?

Personally I'm starting to find these copy-pasted AI responses to be insulting. It has the "let me Google that for you" sort of smugness around it. I can put in the text in ChatGPT myself and get the same shitty output, you know. If you can't be bothered to improve it, then there's absolutely no point in pasting it.

Given what this output gave me, I can easily keep working this to get better and better arguments.

That doesn't sound terribly efficient. Polishing a turd, as they say. These great successes of AI are never actually visible or demonstrated, they're always put off - the tech isn't quite there yet, but it's just around the corner, just you wait, just one more round of asking the AI to elaborate, just one more round of polishing the turd, just a bit more faith on the unbelievers' part...

I just feel like you can’t honestly tell me that within 10 seconds having that summary is not beneficial.

Oh sure I can tell you that, assuming that your argumentative goals are remotely honest and you're not just posting stupid AI-generated criticism to waste my time. You didn't even notice one banal way in which AI misinterpreted my comment (I didn't say SMBC is bad), and you'd probably just accept that misreading in your own supposed rewrite of the text. Misleading summaries that you have to spend additional time and effort double checking for these subtle or not so subtle failures are NOT beneficial.

[–] Melvin_Ferd@lemmy.world 0 points 17 hours ago

Ok, let's give it a test here. Let's start with understanding logic. Give me a paragraph and let's see if it can find any logical fallacies. You can provide the paragraph. The only constraint is that the context has to exist within the paragraph.

[–] die444die@lemmy.world 6 points 21 hours ago (1 children)

Did you even read this garbage? It’s just words strung together without any meaning. The things it’s claiming show a fundamental lack of understanding of what it is responding to.

This didn’t prove your point at all, quite the opposite. And it wasted everyone’s time in the process. Good job, this was worthless.

[–] Melvin_Ferd@lemmy.world -2 points 20 hours ago (1 children)

I did, and it was because it didn't have the previous context. But it did find the fallacies that were present. Logic is literally what a chat AI is doing. A human still needs to review the output, but it did what it was asked. I don't know AI programming well, but I can say that logic is algorithmic. An AI has no problem parsing an argument and finding the fallacies. It's a tool like any other.

[–] antonim@lemmy.dbzer0.com 4 points 19 hours ago (1 children)

That was a roundabout way of admitting you have no idea what logic is or how LLMs work. Logic works with propositions regardless of their literal meaning, LLMs operate with textual tokens irrespective of their formal logical relations. The chatbot doesn't actually do the logical operations behind the scenes, it only produces the text output that looks like the operations were done (because it was trained on a lot of existing text that reflects logical operations in its content).
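To illustrate the distinction being drawn here, a formal validity checker can be sketched in a few lines (a hypothetical Python illustration, not anything an LLM actually does internally): it evaluates an argument form over every truth assignment, so the verdict depends only on the form, never on what the propositions mean:

```python
from itertools import product

def valid(premises, conclusion, names):
    """An argument form is valid iff every truth assignment that
    makes all premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(names)):
        env = dict(zip(names, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# Modus ponens: P -> Q, P, therefore Q (valid whatever P and Q mean)
mp = valid(
    premises=[lambda e: (not e["P"]) or e["Q"], lambda e: e["P"]],
    conclusion=lambda e: e["Q"],
    names=["P", "Q"],
)

# Affirming the consequent: P -> Q, Q, therefore P (a fallacy)
ac = valid(
    premises=[lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]],
    conclusion=lambda e: e["P"],
    names=["P", "Q"],
)

print(mp, ac)  # True False
```

Swap in any content for P and Q (apples, ships, senses) and the verdicts are unchanged; a chatbot, by contrast, has no such assignment-checking step anywhere in its token pipeline.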

[–] Melvin_Ferd@lemmy.world 1 points 17 hours ago* (last edited 17 hours ago) (1 children)

This is why I said I wasn't sure how AI works behind the scenes. But I do know that logic isn't difficult. Just so we don't fuck around between us: I have a CS background. I'm only saying this because I think you may have one as well, and we can save some time.

It makes sense to me that logic is something AI can parse easily. Logic in my mind is very easy if it can tokenize some text. Wouldn't the difficulty be whether the AI has the right context?

[–] antonim@lemmy.dbzer0.com 0 points 5 hours ago* (last edited 5 hours ago) (1 children)

Logic requires abstracting the argumentative form from the literal linguistic content and then generalising it, just like how math is done properly when you work with numbers and not just with sentences such as "two apples and three apples is five apples" (such abstraction in practice allows far more powerful and widely applicable operations than dealing with individual linguistic expressions; if you've ever solved very complex truth trees you'll know how they allow streamlining and solutions that would be practically impossible if you had only the ordinary linguistic expression of the same problem). Logic doesn't operate with textual tokens but with logical propositions and operators. "Difficulty" is not a meaningful term here; a tool is either technically capable of doing something (more or less successfully) or it isn't.
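The apples example can be put in the abstracted form the comment describes; once the concrete unit is replaced by a variable, the identity holds for any content whatsoever:

\[
2a + 3a = (2 + 3)\,a = 5a \qquad \text{for any unit } a,
\]

which is exactly the move from "two apples and three apples is five apples" to an operation that applies equally to ships, senses, or anything else.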

That LLMs aren't capable of this sort of precision and abstraction is shown by the OP link as well as the simple fact that chatbots used to be extremely bad at math (which is now probably patched up by adding a proper math module, rather than relying on the base LLM - my assumption, at least).

As for trying more examples of looking for logical fallacies, I tried out three different types of text. Since you say context is important, it's best to take only the beginning of a text.

One text I tried is the opening of the Wikipedia article on "history", which ChatGPT described like this: "The passage you've provided is an informative and largely neutral overview of the academic discipline of history. It doesn't make any strong arguments or persuasive claims, which are typically where logical fallacies appear." It then went on to nitpick about some details "for the sake of thorough analysis", but basically had no real complaints.

Then I tried out the opening paragraph of Moby-Dick. That's a fictional text so it would be reasonable to reject analysing its logical solidity, as GPT already did with the WP article, but it still tried to wring out some "criticism" that occasionally shows how it misunderstands the text (just as it misunderstood a part of my comment above).

Finally, I asked it to find the fallacies in the first four paragraphs of Descartes' Meditations on First Philosophy, which resulted in a criticism that was based on less logically rigid principles than the original text (accusing Descartes of the "slippery slope fallacy").

I'll post the full replies below.

[–] antonim@lemmy.dbzer0.com 1 points 5 hours ago (1 children)

This famous passage from the opening of Moby-Dick by Herman Melville is rich in literary style but not necessarily constructed as an argument, which makes identifying formal logical fallacies a nuanced task. That said, we can identify a few rhetorical or informal fallacies—not because the text is poorly reasoned, but because it makes some sweeping or questionable assumptions in a stylistically persuasive way. Here's a breakdown:


1. Hasty Generalization

"Almost all men in their degree, some time or other, cherish very nearly the same feelings towards the ocean with me."

  • Fallacy: The narrator generalizes his deeply personal and specific emotional reaction to the sea as something nearly all men experience.
  • Why it’s fallacious: No evidence is presented to support this broad claim. It is based purely on the narrator’s subjective experience and intuition.

2. False Cause (Post Hoc)

"It is a way I have of driving off the spleen and regulating the circulation."

  • Fallacy: Implies a causal relationship between going to sea and improved physical/emotional health.
  • Why it’s fallacious: While sailing may coincidentally improve his mood or circulation, the text offers no medical or psychological basis to prove a direct causal link.

3. Appeal to Emotion

"...especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people’s hats off..."

  • Fallacy: Uses vivid emotional language to justify behavior or state of mind rather than logical reasoning.
  • Why it’s fallacious: It appeals to the reader’s empathy or amusement to legitimize the narrator’s melancholy and coping strategy, rather than offering a rational justification.

4. False Analogy

"With a philosophical flourish Cato throws himself upon his sword; I quietly take to the ship."

  • Fallacy: Compares going to sea with committing suicide as if they are equivalent responses to despair.
  • Why it’s fallacious: The analogy is emotionally compelling, but equating self-harm with a change in setting (boarding a ship) overlooks major differences in motive and consequence.

Summary:

Fallacy | Example from Text | Explanation
Hasty Generalization | “Almost all men... cherish very nearly the same feelings...” | Overgeneralizes personal feeling
False Cause | “...driving off the spleen and regulating the circulation.” | Implies unproven health benefits
Appeal to Emotion | “...prevent me from... knocking people’s hats off.” | Justifies behavior with emotion
False Analogy | “Cato throws himself... I quietly take to the ship.” | Equates suicide and sailing

Would you like me to rewrite the passage without the fallacies, or is this for literary analysis?

[–] antonim@lemmy.dbzer0.com 1 points 5 hours ago

Thank you! This passage is again drawn from René Descartes' Meditations on First Philosophy, and it outlines his method of radical doubt as a means to establish certainty in knowledge. While the reasoning is carefully crafted and influential in the history of philosophy, it still contains a few logical fallacies or reasoning weaknesses that can be critically assessed.


🔍 Identified Logical Fallacies and Weak Reasoning:


1. Hasty Generalization

"Whatever I have accepted until now as most true has come to me through my senses. But occasionally I have found that they have deceived me, and it is unwise to trust completely those who have deceived us even once."

  • Why it’s a fallacy: The fact that the senses occasionally deceive (e.g. in illusions, dreams, or misperceptions) does not justify rejecting them entirely.
  • Type: Hasty generalization – concluding that because something is sometimes unreliable, it must always be unreliable.

2. False Analogy

"...it is unwise to trust completely those who have deceived us even once."

  • Why it’s a fallacy: Comparing senses (which are not agents but biological faculties) to people who deceive is not a valid analogy. People deceive with intent; senses make errors unintentionally.
  • Type: False analogy – treating unlike things as if they were meaningfully similar.

3. Slippery Slope (Metaphorical)

"Once the foundations of a building have been undermined, the rest collapses of its own accord..."

  • Why it’s a fallacy: This implies that if some foundational beliefs are doubted, then all other beliefs must collapse, without demonstrating that all knowledge is entirely dependent on those foundations.
  • Type: Slippery slope – assuming that undermining one part inevitably leads to a total collapse.

4. Begging the Question (Circular Reasoning)

"My reason tells me... I should withhold assent from [beliefs] that are not completely certain and indubitable."

  • Why it’s questionable: Descartes is relying on reason to justify doubting everything else, including the faculties (like reason) he has not yet proven to be trustworthy. This risks begging the question, assuming the reliability of reason while attempting to prove everything else, including reason itself.
  • Type: Begging the question – assuming what you're trying to prove.

5. Inconsistency (Recognized, Not Fully Resolved)

"Yet although the senses sometimes deceive us... that doesn’t apply to my belief that I am here, sitting by the fire..."

  • Why it’s a fallacy: Here, Descartes briefly exempts ordinary sensory experiences (like feeling warmth, seeing paper) from radical doubt—despite having just declared that any doubt justifies withholding belief. He later corrects for this in the next meditation (introducing the dream argument), but within this passage, it’s a moment of inconsistency.
  • Type: Special pleading or internal inconsistency.

✅ Summary Table of Fallacies

Fallacy Type | Description
Hasty Generalization | Rejects all sensory data due to occasional errors.
False Analogy | Compares unreliable senses to untrustworthy people.
Slippery Slope | Claims that doubting foundations causes collapse of all beliefs.
Begging the Question | Uses reason to justify doubting everything, including reason itself.
Inconsistency / Special Pleading | Makes an exception for immediate sensory beliefs without justification.

Conclusion:

Descartes' passage is methodologically rich, but from a logic and reasoning perspective, it contains:

  • Several classic fallacies used to motivate extreme skepticism.
  • Analogical and metaphorical reasoning that may be rhetorically persuasive but logically flawed.
  • Self-referential assumptions that are philosophically problematic (and which he tries to resolve in later meditations).

Would you like a version of this reasoning revised to be more logically sound, or would you like help analyzing the next part of the Meditations?