this post was submitted on 09 Apr 2026
893 points (99.1% liked)

Science Memes

19845 readers
3600 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



founded 3 years ago
[–] BeMoreCareful@lemmy.world 13 points 7 hours ago (1 children)

Wait, so breaks containment means spreads misinformation? What timeline is this?

[–] FinalRemix@lemmy.world 1 points 9 minutes ago* (last edited 9 minutes ago)

It's a screenshot of a post on bsky. Don't read too much into the specifics of the language...

[–] Teppa@lemmy.world 21 points 8 hours ago

AIs don't know that birds aren't real, or that the pressure from being underwater for an extended period of time can sometimes cause fish to explode.

[–] squaresinger@lemmy.world 30 points 9 hours ago (1 children)
[–] HeyThisIsntTheYMCA@lemmy.world 6 points 6 hours ago

They do the same to protect doctors from malpractice lawsuits. There is a (laughably peer-reviewed) study that claims Tylenol and morphine are equally effective at pain management.

[–] Whats_your_reasoning@lemmy.world 14 points 10 hours ago (1 children)

“When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” says Omar.

Huh, now there’s something we have in common. Trying to make sense of something a doctor wrote makes me feel like I’m hallucinating, too. Is there a class in medical school on “Illegible Handwriting,” or is it just a coincidence?

In all seriousness though, I wish I could be surprised by AI failing at this. We have entered the Misinformation Age. There’s no closing Pandora’s Box, though this time I can’t find the “hope” that’s supposed to be in the bottom of it. Society would have to turn real skeptical real fast, but I’ve met enough people to know that such a transformation is going to take time - and by “time” I mean “decades or longer.” With AI already here, we’d have to wise up immediately… but I fear that humanity isn’t mature enough for that yet.

[–] Jako302@feddit.org 4 points 5 hours ago

We crossed the point where natural skepticism could've saved us months ago. Feedback loops of made-up sources were a problem way before AI was a thing, but now you can be five sources deep, reading through papers published by multiple different scientific journals or universities, and still won't have found the actual data all the papers depend on, because there wasn't any in the first place.

And once a single one of these papers gets published, there will be about one million SEO articles on shitty clickbait websites that, in this case, would try to sell you a home remedy for your supposed illness. So searching for any useful information is pretty much off the table.

[–] sunnytimes@lemmy.ca 6 points 9 hours ago

ask the ai about a blue waffle

[–] Itwasntme223@discuss.online 2 points 8 hours ago

Why am I not surprised? >.>

[–] RagingRobot@lemmy.world 36 points 16 hours ago (2 children)

I wonder: if we got a group together to go on Reddit and Stack Overflow, give really wrong programming answers, and vote them to the top, would Claude start sucking? They could always just revert to a previous model, and it would probably be too hard to get enough people and content to have an effect on such large training sets. Maybe if you use AI? Lol

[–] Napster153@lemmy.world 5 points 13 hours ago (2 children)

Didn't something similar happen to Grok, except it ended up generating a ton of CSAM material that circulated on Twitter?

[–] kadotux@sopuli.xyz 16 points 12 hours ago (3 children)

Sorry for being that guy today, but you can just say CSAM. It stands for "Child Sexual Abuse Material". smh my head :P

[–] portuga@lemmy.world 7 points 10 hours ago

Your last sentence saves you from being pedantic. Fun stuff, RIP in peace ✌️

[–] Uriel_Copy@lemmy.world 3 points 8 hours ago

Classic RAS syndrome! (Redundant Acronym Syndrome)

[–] Napster153@lemmy.world 3 points 11 hours ago (3 children)

Pardon, but what... I did say CSAM, may I ask what exactly you mean?

[–] dai@lemmy.world 13 points 11 hours ago (1 children)

Did you drop your ATM machine? 

[–] Dicska@lemmy.world 12 points 11 hours ago

Does it take small size compact CD discs?

[–] ITGuyLevi@programming.dev 11 points 11 hours ago

Some people, when they see an acronym, will replace it with the words it stands for in their head. A subset of that group of people get annoyed when the sentence gets all muddled up by repeated words; in this particular case, you said 'CSAM material', which their brain read as 'child sexual abuse material material'.

It isn't a big deal, but as one of those people, I totally get the urge to point it out (I've gotten pretty good at looking past it but it's still a bit of a compulsion).

They are referring to your use of "CSAM material" in your sentence.

[–] imjustmsk@lemmy.world 3 points 11 hours ago (1 children)

chain tea, coffee coffee,  cream cream. 

[–] Test_Tickles@lemmy.world 1 points 8 hours ago

Woo, woo, chugga, chugga, choo, choo

[–] Arghblarg@lemmy.ca 47 points 17 hours ago (1 children)

Good. This shows plainly how LLMs don't think, don't truly understand anything, and have no critical ability to do introspection or fact-checking. It seems the only way to teach the world of these things is to make it impossible to ignore via absurd demonstrations like this. If the "AI" well must be poisoned in order to wake people up, I'm all for it.

[–] Teppa@lemmy.world 4 points 8 hours ago* (last edited 8 hours ago)

Isn't 80% of its data from Reddit anyway? It seems quite poisoned already, given the number of confidently incorrect people.

With how Reddit is monetizing itself now, I'd assume Lemmy will actually become more widely used than Reddit, since it's totally free.

[–] DeathsEmbrace@lemmy.world 113 points 22 hours ago (1 children)

Before anyone shits on these scientists: the paper said over and over again that it was made up, and that, officially, the USS Enterprise labs were used to make this discovery.

[–] Kacarott@aussie.zone 7 points 13 hours ago

The Federation would never publish fake data, so it must be true!

[–] magnue@lemmy.world 27 points 18 hours ago (2 children)

Wouldn't humans do the same thing if someone literally writes lies on the internet?

[–] Kacarott@aussie.zone 29 points 16 hours ago* (last edited 16 hours ago) (4 children)

If it were convincing lies made to deceive, then sure. But in this case the papers were deliberately made to be immediately obviously fake, to anyone actually reading them.

So I guess the question would be "would humans do the same thing if someone literally writes obvious jokes on the internet?"

[–] HylicManoeuvre@mander.xyz 12 points 13 hours ago

More shockingly, three Indian researchers published a research paper that cited the preprint on the fake disease in Cureus, a peer-reviewed journal published by Springer. It was subsequently retracted.

lol

[–] Honytawk@discuss.tchncs.de 4 points 10 hours ago

Looks at Flat-Earthers

Yes they would

[–] squaresinger@lemmy.world 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

https://en.wikipedia.org/wiki/John_Bohannon#Intentionally_misleading_chocolate_study

Yes, people would do exactly the same, because nobody reads anything but the headline of a paper. Even journalists don't.

AI didn't invent the problem, but it put the problem on steroids.

[–] ExperiencedWinter@lemmy.world 2 points 9 hours ago* (last edited 9 hours ago) (2 children)

Even journalists don't

Not sure what point you're making here; I wouldn't expect most journalists to be great at reading the details of papers like this...

[–] Test_Tickles@lemmy.world 4 points 8 hours ago

Research and fact checking is what separates journalists from hacks.
"Journalist" implies factual information, not science fiction. If someone writes a "news" story about the magic land of Xanth because they can't tell the difference between a Piers Anthony novel and a scientific study it's not Piers Anthony's fault for being too "tricky".

[–] squaresinger@lemmy.world 2 points 7 hours ago

Vetting sources is the one thing we need journalists for. If they don't vet their sources, their work is without merit.

Reading at least the methodology section of a paper and googling whether the researchers and the institute exist is the bare minimum of what a decent journalist should do.

If they can't do that, then a journalist has no advantage over some random person posting on Facebook. Even YouTubers usually vet their sources better.

[–] Foofighter@discuss.tchncs.de 17 points 18 hours ago (2 children)

Absolutely! Once false information is out there, it can't be retracted even if the article itself is. "Bumblebees can't fly" and "vaccines cause autism" are good examples of that. The only difference I can imagine is that LLMs have a much larger reach and may spread shit faster.

[–] SaveTheTuaHawk@lemmy.ca 5 points 10 hours ago

But the Lancet did not retract the Wakefield paper for 12 years. The Lancet should have been shut down for that.

[–] squaresinger@lemmy.world 1 points 9 hours ago

This. Here's a comparable case where human journalists did exactly what LLMs are doing now: https://en.wikipedia.org/wiki/John_Bohannon#Intentionally_misleading_chocolate_study

The difference is the scale.

[–] partial_accumen@lemmy.world 129 points 1 day ago (3 children)

I give you... "The Grant Money Printing machine!"

Need a grant? Create a disease and submit a paper. Then write a grant asking for money to solve your invented disease.

[–] Jankatarch@lemmy.world 6 points 10 hours ago* (last edited 10 hours ago)

If you want research grants, there's already a glitch for that: just jam "AI" into your research and suddenly the government cares about progress.

[–] Blackout@fedia.io 58 points 22 hours ago (7 children)

Find a way to make AI hurt billionaires and I will support it.
