this post was submitted on 13 Mar 2025
1882 points (99.7% liked)

People Twitter

6764 readers
985 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
MODERATORS
 
top 50 comments
[–] spankmonkey@lemmy.world 164 points 4 weeks ago (2 children)

I love that this mirrors the experience of experts on social media like Reddit, which was used for training ChatGPT...

[–] skillissuer@discuss.tchncs.de 68 points 4 weeks ago (2 children)
[–] jjjalljs@ttrpg.network 14 points 4 weeks ago

I was going to post this, too.

The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

load more comments (1 replies)
[–] PM_Your_Nudes_Please@lemmy.world 44 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

Also common in news. There’s an old saying along the lines of “everyone trusts the news until they talk about your job.” Basically, the news is focused on getting info out quickly. Every station is rushing to be the first to break a story. So the people writing the teleprompter usually only have a few minutes (at best) to research anything before it goes live in front of the anchor. This means that you’re only ever going to get the most surface level info, even when the talking heads claim to be doing deep dives on a topic. It also means they’re going to be misleading or blatantly wrong a lot of the time, because they’re basically just parroting the top google result regardless of accuracy.

[–] ChickenLadyLovesLife@lemmy.world 11 points 4 weeks ago (5 children)

One of my academic areas of expertise way back in the day (late '80s and early '90s) was the so-called "Mitochondrial Eve" and "Out of Africa" hypotheses. The absolute mangling of this shit by journalists even at the time was migraine-inducing, and it's gotten much worse in the decades since then. It hasn't helped that subsequent generations of scholars have mangled the whole deal even worse. The only advice I can offer people is that if the article (scholarly or popular) contains the word "Neanderthal" anywhere, just toss it.

load more comments (1 replies)
[–] SirSamuel@lemmy.world 79 points 4 weeks ago (1 children)

First off, the beauty of these two posts being beside each other is palpable.

Second, as you can see in the picture, it's more like 60%.

[–] morrowind@lemmy.ml 25 points 4 weeks ago (1 children)

No, it's not. If you actually read the study, it's about AI search engines correctly finding and citing the source of a given quote, not general correctness, and not just the plain model.

[–] SirSamuel@lemmy.world 29 points 4 weeks ago

Read the study? Why would I do that when there's an infographic right there?

(thank you for the clarification, I actually appreciate it)

[–] DudeImMacGyver@kbin.earth 56 points 4 weeks ago (1 children)
[–] jsomae@lemmy.ml 34 points 4 weeks ago (8 children)

ChatGPT is a tool. Use it for tasks where the cost of verifying that the output is correct is less than the cost of doing the task by hand.

[–] qarbone@lemmy.world 18 points 4 weeks ago (1 children)

Honestly, I've found it best for quickly reformatting text and other content. It should live and die as a clerical tool.

load more comments (1 replies)
load more comments (7 replies)
[–] RabbitBBQ@lemmy.world 33 points 4 weeks ago (2 children)

If the standard is replicating human level intelligence and behavior, making up shit just to get you to go away about 40% of the time kind of checks out. In fact, I bet it hallucinates less and is wrong less often than most people you work with

[–] Devanismyname@lemmy.ca 10 points 4 weeks ago

And it just keeps improving over time. People shit all over AI to make themselves feel better because scary shit is happening.

load more comments (1 replies)
[–] foxlore@programming.dev 26 points 4 weeks ago (2 children)

Talking with an AI model is like talking with that one friend who is always high and thinks they know everything. But they have a wide enough range of interests that they can actually piece together an idea, most of the time wrong, about any subject.

[–] dagger_punch@lemmy.world 22 points 4 weeks ago

Isn't this called "the Joe Rogan experience"?

load more comments (1 replies)
[–] PartiallyApplied@lemmy.world 18 points 4 weeks ago* (last edited 4 weeks ago) (4 children)

I feel this hard with the New York Times.

99% of the time, I feel like it covers subjects adequately. It might be a bit further right than me, but for a general US source, I feel it’s rather representative.

Then they write a story about something happening to low-income people in the US, and it's just social and logical word salad. When they report, it looks as though they analyze data instead of talking to people. Statisticians will tell you, and this is subtle: conclusions drawn at one level of detail cannot be generalized to another level of detail (sketched below). Looking at data without talking with people is fallacious for social issues. The NYT needs to understand this, but in the meantime they are horrifically insensitive, bordering on destructive at times.

“The jackboot only jumps down on people standing up”

  • Hozier, “Jackboot Jump”

Then I read the next story and I take it as credible without much critical thought or evidence. Bias is strange.
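
A minimal sketch of that "levels of detail" point, with made-up numbers, just to illustrate the ecological fallacy: a relationship that holds for every individual can reverse when you only look at group averages.

```python
# Made-up (x, y) data: within each group, y rises with x for every individual.
group_a = [(1, 6), (2, 7), (3, 8)]   # y = x + 5
group_b = [(7, 1), (8, 2), (9, 3)]   # y = x - 6

def mean(vals):
    return sum(vals) / len(vals)

for name, group in (("A", group_a), ("B", group_b)):
    xs, ys = zip(*group)
    print(name, "mean x:", mean(xs), "mean y:", mean(ys))

# A: mean x = 2.0, mean y = 7.0
# B: mean x = 8.0, mean y = 2.0
# Comparing only the group averages, higher x looks associated with lower y,
# which is the opposite of what is true for every individual in the data.
```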

[–] CancerMancer@sh.itjust.works 10 points 4 weeks ago (1 children)
load more comments (1 replies)
load more comments (3 replies)
[–] RedSnt@feddit.dk 14 points 4 weeks ago* (last edited 4 weeks ago) (11 children)

I've been using o3-mini mostly for ffmpeg command lines. And a bit of sed. And it hasn't been terrible; it's a good way to learn stuff I can't decipher from the man pages. Not sure what else it's good for, tbh, but at least I can test and understand what it's doing before running the code.
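
For what it's worth, here's a sketch of the kind of task described above, driving ffmpeg from Python so every flag can be checked against the man page (or the model's explanation) before anything runs. The filenames and encoder settings are just illustrative assumptions, not anything a model actually produced.

```python
import subprocess
from pathlib import Path

# Batch-convert .mkv files to .mp4 with H.264 video and AAC audio.
for src in Path("videos").glob("*.mkv"):
    dst = src.with_suffix(".mp4")
    cmd = [
        "ffmpeg",
        "-i", str(src),      # input file
        "-c:v", "libx264",   # video codec: H.264
        "-crf", "23",        # quality target (lower = better quality, bigger file)
        "-c:a", "aac",       # audio codec: AAC
        str(dst),
    ]
    print(" ".join(cmd))     # inspect the command before trusting it
    subprocess.run(cmd, check=True)
```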

load more comments (11 replies)
[–] Hikermick@lemmy.world 14 points 4 weeks ago (4 children)

I did a Google search to find out how much I pay for water; the water department where I live bills by the MCF (1,000 cubic feet). The AI Overview told me an MCF was one million cubic feet. It's a unit of measurement. It's not subjective, not an opinion, and the AI still got it wrong.
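
For the record, the M in MCF is the Roman numeral thousand, not the SI mega-, so MCF is 1,000 cubic feet and MMCF is a million. A quick back-of-the-envelope in Python, with a made-up water rate, shows why the mix-up matters on a bill:

```python
# MCF = 1,000 cubic feet (M is the Roman numeral thousand, not SI mega-).
CUBIC_FEET_PER_MCF = 1_000
US_GALLONS_PER_CUBIC_FOOT = 7.48052

rate_dollars_per_mcf = 45.00  # made-up example rate, purely for illustration

gallons_per_mcf = CUBIC_FEET_PER_MCF * US_GALLONS_PER_CUBIC_FOOT
print(f"{gallons_per_mcf:,.0f} gallons per MCF")                    # ~7,481
print(f"${rate_dollars_per_mcf / gallons_per_mcf:.4f} per gallon")  # ~$0.0060

# If an MCF really were a million cubic feet, the per-gallon price
# would be off by a factor of 1,000.
```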

[–] TonyTonyChopper@mander.xyz 11 points 4 weeks ago (8 children)

Everywhere else in the world a big M means million.

load more comments (8 replies)
load more comments (3 replies)
[–] Zachariah@lemmy.world 11 points 4 weeks ago

This, but for tech bros.

[–] Kolanaki@pawb.social 10 points 4 weeks ago (1 children)

Most of my searches have to do with video games, and I have yet to see any of those AI generated answers be accurate. But I mean, when the source of the AI's info is coming from a Fandom wiki, it was already wading in shit before it ever generated a response.

load more comments (1 replies)
[–] Korhaka@sopuli.xyz 10 points 4 weeks ago (2 children)

I just use it to write emails, so I declare the facts to the LLM and tell it to write an email based on those and the context of the email. It works pretty well, but it doesn't really sound like something I wrote; it adds too much emotion.

[–] jjjalljs@ttrpg.network 8 points 4 weeks ago (1 children)

That sounds like more work than just writing the email to me

load more comments (1 replies)
load more comments (1 replies)
[–] balderdash9@lemmy.zip 9 points 4 weeks ago (1 children)

Deepseek is pretty good tbh. The answers sometimes leave out information in a way that is misleading, but targeted follow up questions can clarify.

[–] spankmonkey@lemmy.world 51 points 4 weeks ago* (last edited 4 weeks ago) (3 children)

Like leaving out what happened in Tiananmen Square in 1989?

[–] heavydust@sh.itjust.works 18 points 4 weeks ago (7 children)

You must be more respectful of all cultures and opinions.

[–] JusticeForPorygon@lemmy.blahaj.zone 18 points 4 weeks ago (3 children)

The number of people who don't realize this is satire reminds me of old Reddit.

[–] spankmonkey@lemmy.world 7 points 4 weeks ago* (last edited 4 weeks ago)

Is it though? I really can't tell.

Poe's law has been working overtime recently.

Edit: saw a comment further down that it's the default DeepSeek response for censored content, so yeah, a joke. People who don't have that context aren't going to get the joke.

load more comments (2 replies)
[–] Remember_the_tooth@lemmy.world 7 points 4 weeks ago (2 children)

Is this a reference I'm not getting? Otherwise, I feel like censoring a massacre is not morally acceptable regardless of culture. I'll leave this here so this doesn't get mistaken for nationalism:

https://en.m.wikipedia.org/wiki/List_of_massacres_in_the_United_States

It's by no means a comprehensive list, but more of a primer. We do not forget these kinds of things in the hope that we may prevent future occurrences.

[–] heavydust@sh.itjust.works 8 points 4 weeks ago (3 children)

It's a fucking joke FFS. It's the standard response from Deepseek.

[–] Remember_the_tooth@lemmy.world 9 points 4 weeks ago

Oh, gotcha. Yeah, I'm not on board with that. Thanks for clarifying. I thought you were being sincere for a moment. This is good satire. Carry on, please.

load more comments (2 replies)
load more comments (1 replies)
[–] Geometrinen_Gepardi@sopuli.xyz 7 points 4 weeks ago (5 children)

In my opinion it should have been the politburo that was pureed under tank tracks and hosed down into the sewers instead of those students.

load more comments (5 replies)
load more comments (4 replies)
[–] SkyeStarfall@lemmy.blahaj.zone 7 points 4 weeks ago

You can get an uncensored local version running if you have the hardware, at least.

load more comments (1 replies)
[–] aceshigh@lemmy.world 9 points 4 weeks ago (1 children)

I use ChatGPT for suggestions, like an aid to whatever it is that I'm doing. It either helps me or it doesn't, but I always have my critical thinking hat on.

load more comments (1 replies)
[–] lowside@lemmy.world 7 points 4 weeks ago (1 children)

One thing I have found it to be useful for is changing the tone of what I write.

I tend to write very clinically because my job involves a lot of that style of writing. I have started asking ChatGPT to rephrase what I write in a softer tone.

Not for everything, but for example when I'm texting my girlfriend who is feeling insecure. It has helped me a lot! I always read through it to make sure it did not change any of the meaning or add anything, but so far it has been pretty good at changing the tone.

I also use it to rephrase emails at work to make them sound more professional.

load more comments (1 replies)
load more comments