Ask Lemmy

A Fediverse community for open-ended, thought-provoking questions.

This post was submitted on 18 May 2025 · 185 points (93.8% liked)

Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 3) 50 comments
[–] Gradually_Adjusting@lemmy.world 18 points 1 day ago (1 children)

Part of what makes me so annoyed is that there's no realistic scenario I can think of that would feel like a good outcome.

Emphasis on realistic, before anyone describes some insane turn of events.

[–] venusaur@lemmy.world 3 points 21 hours ago (1 children)

Some jobs are automated and prices go down. That’s realistic enough. To be fair, there’s likely both good and bad in that scenario. So tack on some level of UBI. Still realistic? That’d be pretty good.

[–] Gradually_Adjusting@lemmy.world 1 points 21 hours ago

I'm afraid I can only give partial credit, my grading rubric required a mention of "purchasing power".

[–] subignition@fedia.io 16 points 1 day ago

Training data needs to be 100% traceable and licensed appropriately.

Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).

Any model whose training includes data in the public domain should itself become public domain.

And while we're at it we should look into deliberately taking more time at lower clock speeds to try to reduce or eliminate the water usage gone to cooling these facilities.
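To make that traceability ask concrete, here's a minimal sketch of the per-item provenance record it would imply. The format, field names, and helper function are hypothetical illustrations, not any existing standard:

```python
# Hypothetical provenance record: every training item carries its
# license and a fingerprint tying the record to the exact bytes used.
from dataclasses import dataclass
from hashlib import sha256
from pathlib import Path

@dataclass(frozen=True)
class TrainingRecord:
    source_url: str      # where the item was obtained
    license_id: str      # e.g. "CC-BY-4.0" or a negotiated license reference
    content_sha256: str  # hash of the exact bytes that went into training
    public_domain: bool  # if True, the "model becomes public domain" rule would trigger

def record_for(path: Path, source_url: str, license_id: str,
               public_domain: bool = False) -> TrainingRecord:
    """Fingerprint a local training file and attach its provenance."""
    digest = sha256(path.read_bytes()).hexdigest()
    return TrainingRecord(source_url, license_id, digest, public_domain)
```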

[–] JTskulk@lemmy.world 5 points 1 day ago

2 chicks at the same time.

[–] HakFoo@lemmy.sdf.org 12 points 1 day ago

Stop selling it at a loss.

When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody's going to want it.

[–] njm1314@lemmy.world 5 points 1 day ago

Just mass public hangings of tech bros.

[–] Chadus_Maximus@lemm.ee 2 points 20 hours ago

Most importantly, I wish countries would start giving a damn about the extreme power consumption caused by AI and regulate the hell out of it. Why do we need to lower our monitors' refresh rates while a ton of energy is being used by useless AI agents that we should be getting rid of instead?

[–] Zwuzelmaus@feddit.org 5 points 1 day ago

I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI makers and users, and to require that all of an AI's actions can be analyzed in court cases on this question.

This would hopefully lead to the development of better AIs that are more transparent, and that are able to adhere to laws at all, because the current ones lack this ability.

[–] Muaddib@sopuli.xyz 2 points 20 hours ago

Ban it until the hard problem of consciousness is solved.

[–] chonkyninja@lemmy.world 8 points 1 day ago (3 children)

I’d like for it to be forgotten, because it’s not AI.

[–] naught101@lemmy.world 7 points 1 day ago

It's AI insofar as any ML is AI.

[–] SerotoninSwells@lemmy.world 4 points 1 day ago

Thank you.

It has to come from the C suite to be "AI". Otherwise it's just sparkling ML.

[–] Ludrol@szmer.info 1 points 18 hours ago

A breakthrough in AI alignment research.

[–] awesomesauce309@midwest.social 10 points 1 day ago (2 children)

I’m not anti AI, but I wish the people who are would describe what they are upset about a bit more eloquently and decipherably. The environmental impact I completely agree with. Making every Google search run a half-cooked beta LLM isn’t the best use of the world’s resources. But every time someone gets on their soapbox in the comments, it’s like they don’t even know the first thing about the math behind it. Like, just figure out what you’re mad about before you start an argument. It comes across as childish to me.

[–] HakFoo@lemmy.sdf.org 7 points 1 day ago (2 children)

It feels like we're being delivered the sort of stuff we'd consider flim-flam if a human did it, but we're lapping it up because the machine did it.

"Sure, boss, let me write this code (wrong) or outline this article (in a way that loses key meaning)!" If you hired a human who acted like that, we'd have them on an improvement plan in days and sacked in weeks.

[–] MoogleMaestro@lemmy.zip 7 points 1 day ago* (last edited 1 day ago)

But every time someone gets on their soapbox in the comments it’s like they don’t even know the first thing about the math behind it. Like just figure out what you’re mad about before you start an argument.

The math around it is unimportant, frankly. The issue with AI isn't about GANNs alone; it's about the licensing of the materials used to train a GANN, and whether or not the companies that used those materials had proper ownership rights. Again, like the post I made, there's an easy argument that OpenAI and others never licensed the material they used to train the AI, making the whole model poisoned by copyright theft.

There are plenty of uses of GANNs that are not problematic: bespoke solutions for predicting the outcomes of certain equations, or data science uses that involve rough predictions on publicly sourced (or privately owned) statistics. The problem is that these are not the uses we call "AI" today -- we're actually sleeping on much better uses of neural networks by focusing on the pie-in-the-sky AGI nonsense being pushed by companies that are simply pushing highly malicious, copyright-infringing products to make a quick buck on the stock market.

[–] spankmonkey@lemmy.world 10 points 1 day ago (1 children)

I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.

Then I want all AI that is offered commercially or in commercial products to be required to verify their training data and be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts.

AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.

[–] some_guy@lemmy.sdf.org 9 points 1 day ago (1 children)

I want OpenAI to collapse.

[–] audaxdreik@pawb.social 9 points 1 day ago (1 children)

If we're talking realm of pure fantasy: destroy it.

I want you to understand this is not AI sentiment as a whole, I understand why the idea is appealing, how it could be useful, and in some ways may seem inevitable.

But a lot of sci-fi doesn't really address the run-up to AI; in fact, a lot of it just kind of assumes there'll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how corporations, not academia, are pushing the development.

Put it out of its misery.

[–] MudMan@fedia.io 8 points 1 day ago (1 children)

How do you "destroy it"? I mean, you can download an open source model to your computer right now in like five minutes. It's not Skynet, you can't just physically blow it up.

[–] Jeffool@lemmy.world 8 points 1 day ago

OP asked what people wanted to happen, and even offered "destroy gen AI" as an option. I get that it's not realistically feasible, but it's certainly within the realm of options provided for the discussion. No need to police their pie-in-the-sky dream. I'm sure they realize it's not realistic.

[–] TimLovesTech@badatbeing.social 8 points 1 day ago (1 children)

I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to all in the medical field (open to everyone would, I feel, be easily abused by scammers and cause a lot of unnecessary harm; essentially, if you can't validate what it finds, you shouldn't be using it).

I'm not a fan of these next-gen IRC chat bots that have companies hammering sites all over the web to siphon up data they shouldn't be allowed to, and then pushing these bots into EVERYTHING! And like I saw a few mention, if their bots have been trained on unauthorized data sets, they should be forced to open source their models for the good of the people (since that is the BS reason OpenAI has been bending and breaking the rules).

[–] DomeGuy@lemmy.world 7 points 1 day ago

Honestly, at this point I'd settle for just "AI cannot be bundled with anything else."

Neither my cell phone nor TV nor thermostat should ever have a built-in LLM "feature" that sends data to an unknown black box on somebody else's server.

(I'm all down for killing with fire and debt any model built on stolen inputs, too. OpenAI should be put in a hole so deep that they're neighbors with Napster.)

[–] deadbeef@lemmy.nz 7 points 1 day ago (3 children)

AI models produced from copyrighted training data should need a license from the copyright holder to train using their data. This means most of the wild west land grab that is going on will not be legal. In general I'm not a huge fan of the current state of copyright at all, but that would put it on an even business footing with everything else.

I've got no idea how to fix the screeds of slop that are polluting search of all kinds now. These sorts of problems (along the lines of email spam) seem to be absurdly hard to fix outside of walled gardens.

[–] MudMan@fedia.io 10 points 1 day ago (3 children)

See, I'm troubled by that one because it sounds good on paper, but in practice that means that Google and Meta, who can certainly build licenses into their EULAs trivially, would become the only government-sanctioned entities who can train AI. Established corpos were actively lobbying for similar measures early on.

And of course good luck getting China to give a crap, which in that scenario would be a better outcome, maybe.

Like you, I think copyright is broken past all functionality at this point. I would very much welcome an entire reconceptualization of it to support not just specific AI regulation but regulation of big data, fair use and user generated content. We need a completely different framework at this point.

[–] Levitator2478@lemmy.ca 6 points 1 day ago* (last edited 1 day ago) (1 children)

My biggest issue with AI is that I think it's going to allow a massive wealth transfer from laborers to capital owners.

I think AI will allow many jobs to become easier and more productive, and even eliminate some jobs. I don't think this is a bad thing - that's what technology is. It should be a good thing, in fact, because it will increase the overall productivity of society. The problem is that, generally, when new technology increases worker productivity, most of the benefits go to capital owners rather than the workers, even when their work contributed to the technological improvements either directly or indirectly.

What's worse, in the case of AI specifically, its functionality relies on being trained on enormous amounts of content that was not produced by the owners of the AI. AI companies are, in a sense, harvesting society's collective knowledge for free to sell it back to us.

IMO AI development should continue, but be owned collectively and developed in a way that genuinely benefits society. Not sure exactly what that would look like. Maybe a sort of light universal basic income where all citizens own stock in publicly run companies that provide AI and receive dividends. Or profits are used for social services. Or maybe it provides AI services for free but is publicly run and fulfills prosocial goals. But I definitely don't think it's something that should be primarily driven by private, for-profit companies.

[–] DonPiano@feddit.org 6 points 1 day ago

Firings and jail time.

In lieu of that, high fines and firings.

[–] Rhaedas@fedia.io 5 points 1 day ago

I think Meta and others went open with their models as firewall protection against legal action due to their blatant stealing of people's work to train with. If the models had stayed commercial and controlled within the company, they could be (probably still wouldn't be, but could be) forced to shut down or start over properly. But it's far too late now, since it's everywhere there is a GPU running, even if models don't progress past the current state.

That being said, not much is getting done about the safety factors. Yes, they are only LLMs and not AGI, but there's commonality in regard to not being sure what's going on inside the box and whether it's really doing what it's told to do. Now is the time boundaries should be set and research done, because once something happens (LLM or AGI), it's too late. So what do I want to see happen? Heavy regulation and transparency on the leading edge of development. And stop the madness of more compute being the only solution, with its environmental effects. It might be the only solution, but companies are going that way because it's the easiest way to throw money at a problem and reap profits, which is all they care about.

[–] MoogleMaestro@lemmy.zip 5 points 1 day ago* (last edited 1 day ago)

What I want from AI companies is really simple.

We have a thing called intellectual property in the United States of America. If I decided to make a Jellyfin instance that I charged access to, containing material I didn't own, somehow advertising this service on the stock market as a publicly traded company, you would bet your ass that I'd have a one-way ticket to a defense seat in court.

AI companies, meanwhile, operate entirely on data they don't own and don't pay licensing for ANY of the materials that are used to train their neural networks. In their eyes, any image, video (TV show/movie), or book that happens to be posted on the Internet is fair game. This isn't how intellectual property works for individuals, so why exactly would a publicly traded company get an exception to this rule?

I work a lot in the world of FOSS and have a firm understanding that just because code is there doesn't make it yours. This is why we have the GPL for licensing. In fact, I'll take it a step further and say that the entirety of AI is one giant licensing nightmare, especially coding AI that isn't actually attributing license details with the code they're sampling from. (Sampling code being notably different than, say, learning from. Learning implies self-agency, and not corporate ownership.)

It feels to me that the AI bubble has largely been about pushing AI so hard and fast that people were investing in something with a dubious legal state in the US. Nobody stopped to ask whether or not the data that Facebook had on their website (for example, they aren't alone in this) was actually theirs to own, and what the repercussions for these types of decisions are.

You'll also note that Tech and Social Media companies are quick to take ownership of data when it benefits them (artists works, intellectual property that isn't theirs, random user posts about topics) and quick to deny ownership when it becomes legally burdensome (CSAM, illicit drug deals, etc.) to a degree that no individual would be granted. Hell, I'm not even sure a "small" tech startup would be granted this level of double-speak and hypocrisy.

With this in mind, I am simply asking that AI companies pay for the data that they're using to train AI. Additionally, laws must be in place that allow for the auditing of all materials used to train an AI, with the legal intent of verifying that all parties are paid accordingly. This is how every other business works. If this were somehow granted an exception, wouldn't it be braindead easy to run every "service" through an AI layer in order to bypass any and all copyright laws?
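As a rough sketch of what such an audit could look like in practice, assuming a hypothetical registry of content hashes for licensed works (nothing any company publishes today): hash every item in the training corpus and flag anything without a license entry.

```python
# Hypothetical audit: compare the training corpus against a registry
# of hashes for works the company has actually licensed.
from hashlib import sha256
from pathlib import Path

def audit_corpus(corpus_dir: Path, licensed_hashes: set[str]) -> list[Path]:
    """Return every training file with no matching license record."""
    unlicensed = []
    for item in sorted(corpus_dir.rglob("*")):
        if item.is_file():
            digest = sha256(item.read_bytes()).hexdigest()
            if digest not in licensed_hashes:
                unlicensed.append(item)
    return unlicensed
```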

Otherwise, if Facebook and others want to claim that data hosted on their websites is theirs to own and train off of -- well, great, but there should be no exceptions to this, and they should not be allowed to host materials they have no ownership over. So pictures of IP they don't own, or materials they want to claim they have no ownership over, must be removed from the platform. I would much prefer the first of these two options, however.

edit: I should note that AI for educational purposes could be granted an exception to this under fair use (for universities), but it would still be required to cite all sources used to produce the works in question (which is normal in academia in the first place) and would also come with some strict stipulations on using this AI as a "product" (it would basically be moot, much like some research papers). This is basically the furthest I'm willing to go for these companies.

[–] GregorGizeh@lemmy.zip 5 points 1 day ago

Wishful thinking? Models trained on illegal data get confiscated, the companies dissolved, and the CEOs and board members made liable for the damages.

Then a reframing of these BS devices from "AI" to what they actually do: brew up statistical probability amalgamations of their training data, and then use them accordingly. They aren't worthless or useless; they are just being shoved into functions they cannot perform in the name of cost cutting.
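A toy illustration of that "statistical probability amalgamation" point, using a word-level bigram model. Real LLMs are vastly larger and operate on learned token representations, but the core move, sampling from probabilities derived from the training data, is the same:

```python
# Toy bigram "language model": it can only ever re-emit words in the
# proportions they followed each other in its training text.
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict[str, list[str]]:
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)  # repeated entries encode the probabilities
    return follows

def generate(follows: dict[str, list[str]], start: str, n: int = 10) -> str:
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample in proportion to training counts
    return " ".join(out)

# Example: generate(train_bigrams("the cat sat on the mat"), "the")
```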

[–] Tahl_eN@lemmy.world 4 points 1 day ago (1 children)

I'm not super bothered by the copyright issue - the copyright system is barely serving people these days anyway. Blow it up.

I'm deeply troubled by the obscene power use. It might be worth it if it was a good tool. But it's not.

I haven't gone out of my way to use AI anything, but it's been stuffed into everything. And it's truly bad at its job. AI is like a precocious 8-year-old, butting into every conversation. And it gives the right answer at about the rate an 8-year-old does. When I do a web search, I then need to do another one to check the AI's answer, or scroll down a page to get past the AI answers to real sources. When someone uses it to summarize a meeting, I then need to read through that summary to make sure the notes are accurate. And it doesn't know to ask when it doesn't understand something, like a proper secretary would. When I go looking for reference images, I have to check to make sure they're real and not hallucinations.

It gets in my way and slows me down. It needed at least another decade of development before being deployed at all, never mind at the scale it has, and it needs to be opt-in, not crammed into everything. And until it can be relied on, it shouldn't be allowed to suck down as much electricity as it does.
