this post was submitted on 18 May 2025
177 points (93.6% liked)

Ask Lemmy


A Fediverse community for open-ended, thought provoking questions



Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 2) 50 comments
[–] CCAirWater@lemm.ee 1 points 10 hours ago* (last edited 10 hours ago)

Our current 'AI' is not AI. It is not.

It is a corporate entity to shirk labor costs and lie to the public.

It is an algorithm designed to lie and the shills who made it are soulless liars, too.

It only exists for corporations and people to cut corners and think they did it right because of the lies.

And again, it is NOT artificial intelligence by the standard I hold to myself.

And it pisses me off to no fucking end.

I personally would love an AI personal assistant that wasn't tied to a corporation listening to every fkin thing I say or do. I would absolutely love it.

I'm a huge Sci-Fi fan, so sure I fear it to a degree. But, if I'm being honest, AI would be amazing if it could analyze how I learned math wrong as a kid and provide ways to fix it. It would be amazing if it could help me routinely create schedules for exercise and food and grocery lists with steps to cook and how all of those combine to affect my body. It would be fantastic if it could point me to novels and have a critical debate about the inner workings, with a setting for being a contrarian or not, so I can seek to deeply understand the novels.

That sounds like what our current state of AI offers, right? No. The current state is a lying machine. It cannot have critical thought. Sure, it can give me a schedule of food/exercise, but it might tell me I need to lift 400lbs and eat a thousand turkeys to meet a goal of being 0.02 grams heavy. It might tell me 5+7 equals 547,032.

It doesn't know what the fuck it's talking about!

Like, ultimately, I want a machine friend who pushes me to better myself and helps me understand my own shortcomings.

I don't want a lying brick bullshit machine that gives me all the answers but gets them all wrong, because it's just a guesswork framework full of "what's the next best word?"

Edit: and don't even get me fucking started on the shady practices of stealing art. Those bastards trained it on people's hard work and are selling it as their own. And it can't even do it right, yet people are still buying it and using it at every turn. I don't want to see another shitty doodle with 8 fingers and overly contrasted bullshit in an ad or in a video game. I don't want to ever hear that fucking computer voice on YouTube again. I stopped using shortform videos because of how fucking annoying that voice is. It's low effort nonsense and infuriates the hell out of me.

[–] Goldholz@lemmy.blahaj.zone 11 points 17 hours ago

Shutting these "AI"s down. The ones out for the public don't help anyone. They do more damage than they are worth.

[–] Fleur_@aussie.zone 22 points 21 hours ago (1 children)

Idrc about ai or whatever you want to call it. Make it all open source. Make everything an ai produces public domain. Instantly kill every billionaire who's said the phrase "ai" and redistribute their wealth.

[–] danciestlobster@lemm.ee 1 points 12 hours ago

Ya know what? Forget the ai criteria let's just have this for all billionaires

[–] traches@sh.itjust.works 19 points 21 hours ago (2 children)

I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.

[–] Kissaki@feddit.org 5 points 18 hours ago

Have you tried filtering, translating, or summarizing your inbox through AI? /s

[–] Paradachshund@lemmy.today 130 points 1 day ago (11 children)

If we're going pie in the sky I would want to see any models built on work they didn't obtain permission for to be shut down.

Failing that, any models built on stolen work should be released to the public for free.

[–] pelespirit@sh.itjust.works 49 points 1 day ago

This is the best solution. Also, any use of AI should have to be stated and watermarked. If they used someone's art, that artist has to be listed as a contributor and you have to get permission. Just like they do for every film, they have to give credit. This includes music, voice and visual art. I don't care if they learned it from 10,000 people, list them.

[–] naught101@lemmy.world 31 points 22 hours ago

TBH, it's mostly the corporate control and misinformation/hype that's the problem. And the fact that they can require substantial energy use and are used for such trivial shit. And that that use is actively degrading people's capacity for critical thinking.

ML in general can be super useful, and is an excellent tool for complex data analysis that can lead to really useful insights.

So yeah, uh... Eat the rich? And the marketing departments. And incorporate emissions into pricing, or regulate them to the point where AI only remains viable for non-trivial use cases.

[–] Witchfire@lemmy.world 13 points 19 hours ago* (last edited 19 hours ago) (1 children)

I'm perfectly ok with AI, I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!

It needs extensive regulation, but doing so requires tech literate politicians who actually care about their constituents. I'd say that'll happen when pigs fly, but police choppers exist so idk

[–] Vanilla_PuddinFudge@infosec.pub 1 points 11 hours ago

I'm beyond the idea that there could or would be any worldwide movement against AI, or much of anything else, judging by healthcare, welfare, and education reform. People are tuned out and numb.

[–] Glitch@lemmy.dbzer0.com 11 points 20 hours ago

I don't dislike AI, I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate it.

[–] Jeffool@lemmy.world 36 points 1 day ago

Like a lot of others, my biggest gripe is the accepted copyright violation for the wealthy. They should have to license data (text, images, video, and audio) for their models, or use material in the public domain. With that in mind, in return I'd love to see pushes to drastically reduce the duration of copyright. My goal is less about destroying generative AI, as annoying as it is, and more about leveraging the money behind it to change copyright law.

I don't love the environmental effects but I think the carbon output of OpenAI is probably less than TikTok, and no one cares about that because they enjoy TikTok more. The energy issue is honestly a bigger problem than AI. And while I understand and appreciate people worried about throwing more weight on the scales, I'm not sure it's enough to really matter. I think we need bigger "what if" scenarios to handle that.

[–] november@lemmy.vg 70 points 1 day ago (15 children)

I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That's what I want.
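For what it's worth, the "glorified Markov chain" jab has a literal reading: a word-level Markov chain does nothing but ask "what word came next in the training text?" and sample from that. A minimal sketch of the idea (the corpus and function names here are illustrative, not drawn from any real model):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word-tuple of length `order` to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain: repeatedly pick a word that followed the current state."""
    rng = random.Random(seed)
    out = list(start)
    state = tuple(start)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            break  # dead end: this state never had a successor in training
        out.append(rng.choice(followers))
        state = tuple(out[-len(start):])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus, order=1)
print(generate(chain, ("the",), length=8))
```

An LLM's next-token predictor is vastly more sophisticated, of course, but the generation loop has the same shape: condition on recent context, sample the next piece, repeat.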

[–] helloworld55@lemm.ee 0 points 10 hours ago

I agree with this sentiment, but I don't see it actually convincing anyone of the dangers of AI. It reminds me a lot of how teachers said that calculators won't always be available and we need to learn how to do mental math. That didn't convince anyone then, either.

[–] anomnom@sh.itjust.works 3 points 18 hours ago

Maybe if the actual costs—especially including environmental costs from its energy use—were included in each query, we’d start thinking for ourselves again. It’s not worth it for most things it’s used for at the moment.

[–] givesomefucks@lemmy.world 20 points 1 day ago (1 children)

AI people always want to ignore the environmental damage as well...

Like all that electricity and water are just super abundant things humans have plenty of.

Every time some idiot asks AI instead of googling it themselves, the planet gets a little more fucked.

[–] rockerface@lemm.ee 19 points 23 hours ago

Rename it to LLMs, because that's what it is. When the hype label is gone, it won't get shoved in everywhere for shits and giggles, and it can be used for the stuff it's actually useful for.

[–] BertramDitore@lemm.ee 55 points 1 day ago (12 children)

I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Every step of any deductive process needs to be citable and traceable.

[–] Maeve@kbin.earth 17 points 1 day ago (1 children)

Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

Their creators can't even keep them from deliberately lying.

[–] justOnePersistentKbinPlease@fedia.io 36 points 1 day ago (14 children)

They should have to pay for every piece of copyrighted material used in the model whenever the AI is queried.

They are only allowed to use data that people opt into providing.

[–] a_wild_mimic_appears@lemmy.dbzer0.com 6 points 18 hours ago* (last edited 18 hours ago)

I would make a case for the creation of datasets by an international institution like UNESCO. The data used would be representative of world culture, and creation of the datasets would have to be sponsored by whoever wants to build models from them, so that licensing fees can be paid to creators. If you wanted to make your mark on global culture, you would have an incentive to offer training data to UNESCO.

I know, that would be idealistic and fair to everyone. No way this would fly in our age.

[–] banshee@lemmy.world 12 points 22 hours ago

I am largely concerned that the development and evolution of generative AI is driven by hype/consumer interests instead of academia. Companies will prioritize opportunities to profit from consumers enjoying the novelty and use the tech to increase vendor lock-in.

I would much rather see the field advanced by scientific and academic interests. Let's focus on solving problems that help everyone instead of temporarily boosting profit margins.

I believe this is similar to how CPU R&D changed course dramatically in the 90s due to the sudden popularity of PCs. We could have enjoyed 64-bit processors and SMT a decade earlier.

[–] endeavor@sopuli.xyz 8 points 21 hours ago* (last edited 21 hours ago)

More regulation, supervised development, and laws requiring training data to be consensually provided.

[–] Sunsofold@lemmings.world 22 points 1 day ago (2 children)

Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.
