this post was submitted on 10 May 2025
224 points (98.7% liked)

Fuck AI


Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation. On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. "Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups.

all 15 comments
[–] jjjalljs@ttrpg.network 51 points 7 months ago (1 children)

I mean, if someone came to me and was like "Hey I wrote that paper for you. But I got my idiot little brother to do most of the work, and then I fixed it up," I would also look askance at them.

[–] pulsewidth@lemmy.world 7 points 7 months ago* (last edited 7 months ago)

Upvote as thanks for teaching me a new word

[–] Maeve@kbin.earth 38 points 7 months ago
[–] Bob_Robertson_IX@discuss.tchncs.de 20 points 7 months ago (1 children)

I have a coworker who didn't learn English until his mid-20s, and it was his 3rd language. He's very hard to understand and is functionally illiterate in English, which is unfortunate because most of our job is done through email or chat. Someone will send him an email or chat message with a request and he will respond with "Call you" and then immediately call them. They hate it because they have a hard time understanding him, and they never get anything in writing from him.

I suggested that he start using a company-provided LLM to take what he wants to write and have it rewrite it for him (or he can write it in one of the other languages he knows better and have it translated). He's started doing this and his performance at work has completely turned around. He is a shining example of how an LLM can be properly used.

Then there are the VPs in the company who send out emails that have obviously been written entirely by an LLM. And they brag about asking an LLM for ideas on how to handle certain situations, or the direction the department needs to head in. They have outsourced their brains and think it was a brilliant move. They are the ones who deserve scorn.

[–] Zagorath@aussie.zone 10 points 7 months ago (2 children)

How is that first example better than traditional Google Translate?

Because with Google Translate he would be sending privileged company information to Google. With our LLM it all stays in-house. And he typically writes his replies in his broken English and the LLM fixes them to make them more readable, which helps him improve his written English skills.

[–] takeda@lemm.ee 12 points 7 months ago

It definitely makes me look down on the coworkers who do use it.

They seem to stop thinking and just do what the AI recommends, and their code is a complicated mess.

[–] supersquirrel@sopuli.xyz 11 points 7 months ago* (last edited 7 months ago)

Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups.

Their findings, presented in a paper titled "Evidence of a social evaluation penalty for being an idiot" reveal a consistent pattern of bias against those who believe in dumb marketing hype sold by the rich to destroy the middle class, push the desperate faces of artists into the mud even more and use a world ending amount of energy to answer questions badly and manipulate public opinion to be stupider and more hateful. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups because unlike techbros and people working in marketing, normal people understand this is all mostly a bunch of bullshit and that inveitably if there are parts to it that aren't bullshit large US corporations sure as hell aren't going to be able to discern them from all the snakeoil salesman nonsense any better than their crazy uncle who believes the world is flat can tell what is real and what isn't.

As researchers ultimately funded by and wholly onboard with the framework of this kind of technology, we are concerned we will have no jobs in the future if people realize how toxic all of this is, so we bravely use the intellectual prestige and power we wield as Duke academics to demand that corporations and Silicon Valley get better at obscuring the harm and nonsense at the heart of AI, so we can continue to study it and make wishy-washy statements about AI while the status quo continues to enshittify.

[–] Catoblepas@lemmy.blahaj.zone 7 points 7 months ago (1 children)

Why would any scientist look down on anything as beautiful as this?

[–] jlow@discuss.tchncs.de 6 points 7 months ago

No! You don't say! Making a machine do something that kind of looks like what you do but is mostly garbage isn't professional?

[–] markovs_gun@lemmy.world 2 points 7 months ago

If I had an employee who just gave me AI slop, I would wonder what I am paying them for, since theoretically I could just cut out the middleman for like $40 a month for premium ChatGPT, especially if they weren't even trying to spice it up with their own work.

[–] Emptiness@lemmy.world 1 points 7 months ago (1 children)

"Here are the technical points I am going to implement in my IT service area and in the tactical order they are going to be done. Please dumb this down for me so that a management group can understand it and approve it."

I don't have time to "explain it like you're five", I have real work to do. Judge me all you like.

[–] Zacryon@feddit.org 2 points 7 months ago

Explaining what you're doing to others, even from other departments, might be part of your job and the "real work" you do.