Fuck AI

2256 readers
199 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI is, I'm not that frightened by it anymore. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting articles about AI hype, because they're quite funny, and it gives me a sense of ease knowing that, despite blatant lies being easy to tell, it's far harder to fake actual evidence.

I also want to make room for people who think there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I'll call Doomers (a term borrowed from an AIHWOS article), are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of Deep Learning. Maybe you'll even become a Mod!

Boosters, or people who heavily use AI and see it as a force for good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly cheer on artists losing their jobs. They go against the very purpose of this community. If I see a comment on here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract it, said commenter will be permanently banned. FA&FO.

Alright, I just want to clarify that I've never modded a Lemmy community before. I just live by the mantra "if nobody's doing the right thing, do it yourself". I was also motivated by u/spez's decision to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, save it for the people still "on the fence". Remember, we don't know that AI is unstoppable. It takes enormous amounts of energy and circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.

Source (Via Xcancel)

Source (Mastodon)

“Israel built an ‘AI factory’ for war. It unleashed it in Gaza,” laments the Washington Post. “Hospitals Are Reporting More Insurance Denials. Is AI Driving Them?,” reports Newsweek. “AI Raising the Rent? San Francisco Could Be the First City to Ban the Practice,” announces San Francisco’s KQED.

Within the last few years, and particularly the last few months, we’ve heard this refrain: AI is the reason for an abuse committed by a corporation, military, or other powerful entity. All of a sudden, the argument goes, the adoption of “faulty” or “overly simplified” AI caused a breakdown of normal operations: spikes in health insurance claims denials, the skyrocketing of consumer prices, the deaths of tens of thousands of civilians. If not for AI, it follows, these industries and militaries, in all likelihood, would implement fairer policies and better killing protocols.

We’ll admit: the narrative seems compelling at first glance. There are major dangers in incorporating AI into corporate and military procedures. But in these cases, the AI isn’t the culprit; the people making the decisions are. UnitedHealthcare would deny claims regardless of the tools at its disposal. Landlords would raise rents with or without automated software. The IDF would kill civilians no matter what technology was, or wasn’t, available to do so. So why do we keep hearing that AI is the problem? What’s the point of this frame, and why has it become such a common way to deflect responsibility?

On today’s episode, we’ll dissect the genre of “investigative” reporting on the dangers of AI, examining how it serves as a limited hangout, offering controlled criticism while ultimately shifting responsibility toward faceless technologies and away from powerful people.

Later on the show, we’ll be speaking with Steven Renderos, Executive Director of MediaJustice, a national racial justice organization that advances the media and technology rights of people of color. He is the creator and co-host, with the great Brandi Collins-Dexter, of Bring Receipts, a politics and pop culture podcast, and executive producer of Revolutionary Spirits, a four-part audio series on the life and martyrdom of Mexican revolutionary leader Francisco Madero.

Source (Bluesky)

The news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year.

The giant financial news site Bloomberg "has been experimenting with using AI to help produce its journalism," reports the New York Times. But "It hasn't always gone smoothly."

While Bloomberg announced on January 15 that it would add three AI-generated bullet points at the top of articles as a summary, "the news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year." (This Wednesday it published a "hallucinated" date for the start of U.S. auto tariffs, and earlier in March claimed President Trump had imposed tariffs on Canada in 2024; other errors have included incorrect figures and incorrect attribution.)

Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called "Ask the Post" that generates answers to questions from published Post articles. And problems have popped up elsewhere. Earlier this month, The Los Angeles Times removed its A.I. tool from an opinion article after the technology described the Ku Klux Klan as something other than a racist organization.

Bloomberg News said in a statement that it publishes thousands of articles each day, and "currently 99 percent of A.I. summaries meet our editorial standards...." The A.I. summaries are "meant to complement our journalism, not replace it," the statement added....

John Micklethwait, Bloomberg's editor in chief, laid out the thinking about the A.I. summaries in a January 10 essay, which was an excerpt from a lecture he had given at City St. George's, University of London. "Customers like it — they can quickly see what any story is about. Journalists are more suspicious," he wrote. "Reporters worry that people will just read the summary rather than their story." But, he acknowledged, "an A.I. summary is only as good as the story it is based on. And getting the stories is where the humans still matter."

A Bloomberg spokeswoman told the Times that the feedback they'd received to the summaries had generally been positive — "and we continue to refine the experience."

:( fuck it

While the title gave me a chuckle, it did have this to say...

The research, detailed in a paper titled “AI and the advent of the cyborg behavioral scientist,” tested how well popular generative AI models including OpenAI’s ChatGPT, Microsoft’s Copilot and Google’s Gemini could handle various stages of the research process.

[...]

What they discovered was a mixed bag of capabilities and limitations, presumably good news for research scientists wondering if AI will take their job.

Source (Mastodon)

It has sound.

cross-posted from: https://feddit.uk/post/26350554

The makers of comic book heroes from Dennis the Menace to Judge Dredd are banding together to take on their biggest enemy yet — AI copycats.

A newly formed trade association, Comic Book UK, will bring together companies such as DC Thomson, which publishes the Beano, and Rebellion Entertainment, which makes 2000AD.

Other members will include The Phoenix Comic, which has published the Bunny vs Monkey series, graphic novel company Avery Hill Publishing and Fable, a digital comics platform.

The group will lobby for government and investor recognition that UK comics are an important export industry that develops valuable intellectual property.

One of the most immediate issues will be securing the industry's future as the UK government considers proposals to relax copyright laws to allow AI models to be trained on copyrighted work.

...

Comic Book UK says the industry produces hundreds of thousands of pages of comic book content every year and has extensive archives of historic content.

British publishers are behind some of the most recognisable comic characters in titles “enjoyed by hundreds of thousands of readers every week and graphic novels read by millions more each year”, it says. These characters are often used in films, TV programmes and video games. Comic book content is particularly valuable for generative AI training because it is both highly visual and narrative driven, it argues.

The group warns that the exemption proposals are not feasible in practice and will fail to provide rights holders with appropriate control over, and means to seek remuneration for, the use of their content and IP in AI training.

This will inhibit the growth of the comics industry, it said.

Archive
