this post was submitted on 18 May 2025
184 points (93.8% liked)

Ask Lemmy


Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

top 50 comments
[–] mad_djinn@lemmy.world 2 points 32 minutes ago

force companies to pay for the data they scraped from copyrighted works. break up the largest tech conglomerates so they cannot leverage their monopolistic market positions to further their goals, which includes the investment in A.I. products.

ultimately, replace the free market (cringe) with a centralized computer system to manage resource needs of a socialist state

also force Elon Musk to receive a neuralink implant and force him to hallucinate the ghostly impressions of spongebob squarepants laughing for the rest of his life (in prison)

[–] Sunflier@lemmy.world 1 points 53 minutes ago* (last edited 20 minutes ago)

Disable all AI being on by default. Offer me a way to opt into having AI, but don't shove it down my throat by default. I don't want Google AI listening in on my calls without having the option to disable it. I am an attorney, and many of my calls are privileged. Having a third party listen in could cause that privilege to be lost.

I want AI that is stupid. I live in a capitalist plutocracy that is replacing workers with AI as fast and hard as possible without having UBI. I live in the United States, which doesn't even have universal health insurance. So, UBI is fucked. This sets up an environment where a lot of people will be unemployable through no fault of their own because of AI. Thus, without UBI, we're back to starvation and Hoovervilles. But, fuck us. They got theirs.

[–] jjjalljs@ttrpg.network 17 points 6 hours ago

Other people have some really good responses in here.

I'm going to echo that AI is highlighting the problems of capitalism. The ownership class wants to fire a bunch of people and replace them with AI, and keep all that profit for themselves. Not good.

Legislation

[–] noxypaws@pawb.social 13 points 6 hours ago

Admittedly very tough question. Here are some of the ideas I just came up with:

Make it easier to hold people or organizations liable for mistakes made because of haphazard reliance on LLMs.

Reparations for everyone ever sued for piracy, and completely do away with intellectual property protections for corporations, but independent artists get to keep them.

A public service announcement campaign aimed at making the general public less trustful of LLMs.

Strengthen consumer protection such that baseless claims of AI capabilities in advertising or product labeling are legally dangerous to make.

Fine companies for every verifiably inaccurate result given to a customer or end user by an LLM

[–] SoftestSapphic@lemmy.world 7 points 6 hours ago (1 children)

I want the companies that run LLMs to be forced to pay for the copyrighted training data they stole to train their auto complete bots.

I want us to keep chipping away at actually creating REAL ARTIFICIAL INTELLIGENCE that can reason, understand self, and function autonomously, like living things. Marketing teams are calling everything AI, but none of it is actually intelligent; it's just OK at sounding intelligent.

I want people to stop gaslighting themselves into thinking this autocomplete web-searching bot is comparable to a human in any way. The difference between ChatGPT and Google's search aggregation ML algorithm was the LLM on top that makes it sound like a person. But it only sounds like a person; it's nowhere close. Yet we have people falling in love with and worshipping chat bots like gods.

Also the insane energy consumption makes it totally unsustainable.

TL;DR - AI needs to be actually intelligent, not marketing teams gaslighting us. People need to be taught that these things are nowhere close to human and won't be for a very long time, despite parroting human speech. And they are rapidly destroying the planet.

[–] jjjalljs@ttrpg.network 3 points 6 hours ago

I really don't think creating for-real artificial intelligence is a good idea. I mean, that's peak "don't invent the Torment Nexus".

Are you going to give it equal rights? How is voting going to work when the AI can create an arbitrary number of itself and vote as a bloc?

Creating an intelligent being to be your slave is fucked up, too.

Just... We don't need that right now. We have other more pressing problems with fewer ethical land mines

[–] wetbeardhairs@lemmy.dbzer0.com 10 points 7 hours ago

I want the LLMs to be able to determine their source works during the query process and pay the source copyright owners some amount. That way if you generate a Miss Piggy image, it pays the Henson Workshop some fraction of a penny. Eventually it would add up.
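The idea above can be put into rough code. This is a minimal sketch, assuming a pipeline that could somehow attribute a generation to its training sources (an open research problem today); the pool size, names, and weights here are invented for illustration:

```python
from collections import defaultdict

# Assumed per-generation royalty pool; a real rate would be set by law or contract.
ROYALTY_POOL_PER_QUERY = 0.01  # dollars

def split_royalties(attributions):
    """attributions: rights holder -> influence weight, as reported by the
    (hypothetical) attribution step of the query pipeline."""
    total = sum(attributions.values())
    payouts = defaultdict(float)
    for holder, weight in attributions.items():
        # Each holder gets a share of the pool proportional to their influence.
        payouts[holder] += ROYALTY_POOL_PER_QUERY * weight / total
    return dict(payouts)

# e.g. a Miss Piggy image drawing mostly on Henson Workshop material
payouts = split_royalties({"Henson Workshop": 0.8, "other sources": 0.2})
print(payouts)
```

Fractions of a penny per query would, as the comment says, add up across billions of generations; the hard part is the attribution step, not the accounting.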

[–] Bwaz@lemmy.world 14 points 9 hours ago

I'd like there to be a web-wide expectation that any AI-generated text, comment, story, or image be clearly marked as AI. That people would feel incensed and angry when it wasn't labeled so, rather than wondering whether there was a person with a soul producing the content, or losing faith that real info could be found online.

[–] Soapbox1858@lemm.ee 4 points 8 hours ago

I think many comments have already nailed it.

I would add that while I hate the use of LLMs to completely generate artwork, I don't have a problem with AI-enhanced editing tools. For example, AI-powered noise reduction for high-ISO photography is very useful. It's not creating the content, just helping fix a problem. Same with AI-enhanced retouching, to an extent. If the tech can improve and simplify the process of removing an errant power line, dust speck, or pimple in a photograph, then it's great. These use cases help streamline otherwise tedious bullshit work that photographers usually don't want to do.

I also think it's great hearing about the tech improving scientific endeavors, helping to spot cancers, etc. As long as it is done ethically, these are great uses for it.

[–] Furbag@lemmy.world 28 points 13 hours ago (1 children)

Long, long before this AI craze began, I was warning people as a young 20-something political activist that we needed to push for Universal Basic Income, because the inevitable march of technology would mean that labor itself would become irrelevant in time, and that we needed to hash out a system to maintain the dignity of every person now rather than wait until the system is stressed beyond its ability to cope with massive layoffs and entire industries taken over by automation/AI. When the ability of the average person to sell their labor becomes fundamentally compromised, capitalism will collapse in on itself - I'm neither pro- nor anti-capitalist, but people have to acknowledge that nearly all of Western society is based on capitalism, and if capitalism collapses then society itself is in jeopardy.

I was called an alarmist; I was told that such a thing was a long way away and we didn't need "socialism" in this country, and that it was more important to maintain the senseless drudgery of the 40-hour work week for the sake of keeping people occupied with work, but not necessarily fulfilled, because the alternative would not make the line go up.

Now, over a decade later, generative AI has completely infiltrated almost all creative spaces, nobody except tech bros and C-suite executives is excited about that, and we still don't have a safety net in place.

Understand this - I do not hate the idea of AI. I was a huge advocate of AI, as a matter of fact. I was confident that the gradual progression and improvement of technology would be the catalyst that could free us from the shackles of the concept of a 9-to-5 career. When I was a teenager, there was this little program you could run on your computer called Folding At Home. It was basically a number-crunching engine that used your GPU to fold proteins, and the data was sent to researchers studying various diseases. It was a way for my online friends and me to flex how good our PC specs were with the number of folds we could complete in a given time frame, and we got to contribute to a good cause at the same time. These days, they use AI for that sort of thing, and that's fucking awesome. That's what I hope to see AI do more of - take the rote, laborious, time-consuming tasks that would take one or more human beings a lifetime to accomplish using conventional tools, and have the machine assist in compiling and sifting through the data to find all the most important aspects. I want to see more of that.

I think there's a meme floating around that really sums it up for me. Paraphrasing, but it goes: "I thought that AI would do the dishes and fold my laundry so I could have more time for art and writing, but instead AI is doing all my art and writing so I have time to fold clothes and wash dishes."

I think generative AI is both flawed and damaging, and it gives AI as a whole a bad reputation because generative AI is what the consumer gets to see, and not the AI that is being used as a tool to help people make their lives easier.

Speaking of that, I also take issue with the fact that we are more productive than ever before, and AI will only continue to improve that productivity margin, but workers and laborers across the country will never see a dime of compensation for it. People might be able to do the work of two or even three people with the help of AI assistants, but they certainly will never get the salary of three people, and it means that two out of those three people probably don't have a job anymore if demand doesn't increase proportionally.

I want to see regulations on AI. Will this slow down the development and advancement of AI? Almost certainly, but we've already seen the chaos that unfettered AI can cause to entire industries. It's a small price to pay to ask that AI companies prove that they are being ethical and that their work will not damage the livelihood of other people, or that their success will not be born off the backs of other creative endeavors.

[–] Witchfire@lemmy.world 9 points 9 hours ago* (last edited 9 hours ago) (1 children)

FWIW, I've been getting called an alarmist for talking about Trump's and the Republicans' fascist tendencies since at least 2016, if not earlier. I'm now comfortably living in another country.

My point being that people will call you an alarmist for suggesting anything that requires them to go out of their comfort zone. It doesn't necessarily mean you're wrong, it just shows how stupid people are.

[–] Ziro427@lemmy.world 1 points 1 hour ago (1 children)

Did you move overseas? And if you did, was it expensive to move your things?

[–] Witchfire@lemmy.world 1 points 1 hour ago

It wasn't overseas but moving my stuff was expensive, yes. Even with my company paying a portion of it. It's just me and my partner in a 2br apartment so it's honestly not a ton of stuff either.

[–] kandoh@reddthat.com 11 points 14 hours ago

My issue is that the C-levels and executives see it as a way of eliminating one of their biggest costs - labour.

They want their educated labour reduced by three quarters. They want me doing the jobs of 4 people with the help of AI, and they want to pay me less than they already are.

What I would like is a universal basic income paid for by taxing the shit out of the rich.

[–] Saleh@feddit.org 21 points 16 hours ago* (last edited 16 hours ago) (5 children)

First of all, stop calling it AI. It is just large language models for the most part.

Second: an immediate carbon tax on the energy consumption of datacenters, in line with current damage expectations for emissions. That would be around $400/tCO2, IIRC.

Third: Make it obligatory by law to provide disclaimers about what it is actually doing. So if someone asks "is my partner cheating on me". The first message should be "this tool does not understand what is real and what is false. It has no actual knowledge of anything, in particular not of your personal situation. This tool just puts words together that seem more likely to belong together. It cannot give any personal advice and cannot be used for any knowledge gain. This tool is solely to be used for entertainment purposes. If you use the answers of this tool in any dangerous way, such as for designing machinery, operating machinery, financial decisions or similar you are liable for it yourself."
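For a sense of scale, the proposed tax can be put into rough numbers. This is a back-of-envelope sketch: the $400/tCO2 rate comes from the comment above, while the grid carbon intensity and the datacenter's annual consumption are assumed values chosen purely for illustration:

```python
# Back-of-envelope estimate of the proposed carbon tax on datacenter energy.
CARBON_TAX_PER_TONNE = 400.0     # $/tCO2, the rate proposed in the comment
GRID_INTENSITY_KG_PER_KWH = 0.4  # kg CO2 per kWh (assumed grid mix)

def annual_carbon_tax(datacenter_gwh_per_year):
    """Tax bill in dollars for a datacenter's yearly electricity use."""
    kwh = datacenter_gwh_per_year * 1_000_000        # GWh -> kWh
    tonnes_co2 = kwh * GRID_INTENSITY_KG_PER_KWH / 1000  # kg -> tonnes
    return tonnes_co2 * CARBON_TAX_PER_TONNE

# A hypothetical 1 TWh/year AI datacenter campus:
print(f"${annual_carbon_tax(1000):,.0f} per year")
```

Under these assumptions a 1 TWh/year campus would owe on the order of $160 million a year, which is the point of the proposal: making the externality show up on the balance sheet.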

[–] themaninblack@lemmy.world 8 points 16 hours ago* (last edited 16 hours ago)

Agreed, LLMs for mass consumption should come with some disclaimer.

[–] daniskarma@lemmy.dbzer0.com 19 points 16 hours ago

I'm not against it as a technology. I use it for my personal use, as a toy, to have some fun or to whatever.

But what I despise is the forced introduction of it into everything: AI-written articles and forced AI assistants in many unrelated apps. That's what I want to disappear - how they force it into so many places.

[–] boaratio@lemmy.world 17 points 16 hours ago* (last edited 16 hours ago)

For it to go away just like Web 3.0 and NFTs did. Stop cramming it up our asses in every website and application. Make it opt in instead of maybe if you're lucky, opt out. And also, stop burning down the planet with data center power and water usage. That's all.

Edit: Oh yeah, and get sued into oblivion for stealing every copyrighted work known to man. That too.

Edit 2: And the tech press should be ashamed of how much they've been fawning over these slop generators. They gladly parrot press releases, claim it's the next big thing, and generally just suckle at the teat of AI companies.

[–] Taleya@aussie.zone 24 points 17 hours ago* (last edited 17 hours ago) (1 children)

What do I really want?

Stop fucking jamming it up the arse of everything imaginable. If you asked for a genie wish: make it illegal to be anything but opt-in.

[–] kittenzrulz123@lemmy.blahaj.zone 20 points 17 hours ago

I do not need AI and I do not want AI. I want to see it regulated to the point that it becomes severely unprofitable. The world is burning and we are heading face-first towards a climate catastrophe (if we're not already there). We DON'T need machines to mass-produce slop.

[–] mesamunefire@piefed.social 4 points 12 hours ago

I think it's important to figure out what you mean by AI.

I'm thinking a majority of people here are talking about LLMs, BUT there are other AIs that have been quietly worked on that are finally making huge strides.

AI that can produce songs (Suno) and replicate voices. AI that can reproduce a face from one picture (there's a couple of GitHub repos out there). When it comes to the above, we are dealing with copyright-infringement AI, specifically designed and trained on other people's work. If we really do have laws coming into place that will deregulate AI, then I say we go all in: open source everything (or as much as possible), make it so it's trained on all company-specific info, and let anyone run it. I have a feeling we can't put the genie back in the bottle.

If we have pie-in-the-sky solutions, I would like a new iteration of the web. One that specifically makes it difficult or outright impossible to pull into AI. Something like onion routing, where it only accepts real nodes/people when ingesting the data.

[–] RandomVideos@programming.dev 4 points 12 hours ago* (last edited 12 hours ago)

It would be amazing if chat and text generation suddenly disappeared, but that's not going to happen.

It would be cool to make it illegal not to mark AI-generated images or text, and to stop having them forced on viewers.

[–] OTINOKTYAH@feddit.org 8 points 15 hours ago

Not destroying but being real about it.

It's flawed as hell and feels like a hype to save big tech companies, while the end user gets a shitty product. Yet companies keep shoving it into apps and everything, even if it degrades the user experience (like Duolingo).

Also, yes, there need to be laws for that. I mean, if I download something illegally, I will be put behind bars and can kiss my life goodbye. But if a megacorp does it to train their LLM, "it's for the greater good". That's bullshit.

[–] LoveSausage@discuss.tchncs.de 16 points 18 hours ago (1 children)

Destroy capitalism. That's the issue here. All AI fears stem from that.

[–] 4am@lemm.ee 10 points 17 hours ago (3 children)
  • Trained on stolen ideas: ✅
  • replacing humans who have little to no safety net while enriching an owner class: ✅
  • disregard for resource allocation, use, and pollution in the pursuit of profit: ✅
  • being forced into everything as to become unavoidable and foster dependence: ✅

Hey wow look at that, capitalism is the fucking problem again!

God we are such pathetic gamblemonkeys, we cannot get it together.

[–] detun3d@lemm.ee 8 points 17 hours ago

Gen AI should be an optional tool to help us improve our work and life, not an unavoidable subscription service that makes it all worse and makes us dumber in the process.

[–] umbraroze@slrpnk.net 30 points 22 hours ago (1 children)

The technology side of generative AI is fine. It's interesting and promising technology.

The business side sucks, and the AI companies are just the latest continuation of the tech grift: trying to squeeze as much money as possible from the latest hyped tech, laws or social or environmental impact be damned.

We need legislation to catch up. We also need society to be able to catch up. We can't let the AI bros continue to foist more "helpful tools" on us, grab the money, and then just watch as it turns out to be damaging in unpredictable ways.

[–] DeathsEmbrace@lemm.ee 10 points 18 hours ago (1 children)

Ruin the marketing. I want them to stop using the key term AI and use the appropriate terminology: narrow AI. It needs input, so let's stop making up fantasies about AI - it's bullshit, in truth.

[–] Opinionhaver@feddit.uk 1 points 6 minutes ago

The term artificial intelligence is broader than many people realize. It doesn't refer to a single technology or a specific capability, but rather to a category of systems designed to perform tasks that would normally require human intelligence. That includes everything from pattern recognition, language understanding, and problem-solving to more specific applications like recommendation engines or image generation.

When people say something "isn't real AI," they’re often working from a very narrow or futuristic definition - usually something like human-level general intelligence or conscious reasoning. But that's not how the term has been used in computer science or industry. A chess-playing algorithm, a spam filter, and a large language model can all fall under the AI umbrella. The boundaries of AI shift over time: what once seemed like cutting-edge intelligence often becomes mundane as we get used to it.

So rather than being a misleading or purely marketing term, AI is just a broad label we’ve used for decades to describe machines that do things we associate with intelligent behavior. The key is to be specific about which kind of AI we’re talking about - like "machine learning," "neural networks," or "generative models" - rather than assuming there's one single thing that AI is or isn't.
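To illustrate how broad the classical definition is: even a trivial keyword-weight spam filter has historically counted as AI. A toy sketch, with word weights invented purely for illustration:

```python
# A minimal "AI" by the broad, classical definition: a rule-based spam scorer.
# Real spam filters learn these weights from data; these are made up.
SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "prize": 2.0, "meeting": -1.0}

def spam_score(message):
    """Sum the weights of known spammy (or legitimate) words in the message."""
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in message.lower().split())

def is_spam(message, threshold=2.0):
    return spam_score(message) >= threshold

print(is_spam("You are a winner claim your free prize"))
print(is_spam("Agenda for the team meeting tomorrow"))
```

Nobody would call this intelligent today, which is the comment's point: what counts as "AI" shifts as each technique becomes mundane.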
