Average user here thinks AI is synonymous with LLMs and that it's not only not intelligent but also bad for the environment, immoral to use because it's trained on copyrighted content, a total job-killer that's going to leave everyone unemployed, soulless slop that can't create real art or writing, and basically just a lazy cheat for people who lack actual talent or skills.
And they're right about all of that except the AI equals LLMs thing, but that's forgivable because the LLM hustlers have managed to make the terms synonymous in most people's minds through a massive marketing effort.
I would say they are right in that what companies are currently selling as AI is mostly just LLMs or machine learning. We don't have true intelligence. The real distinction is between what AI meant in the past and the snake oil the hype train is trying to sell now.
And that's a good thing.
It's not just that that's what the average person thinks; it's that LLMs are really the only kind of AI they're likely to come into direct contact with, or the kind being applied to systems that are directly undermining their lives.
ML has been used for over a decade now in things like cyber security for behavioral analysis and EDR (Endpoint Detection and Response) systems. I've helped a friend use SLEAP, which analyzes specially formatted videos of animals to catalog interactions over dozens of hours of footage instead of needing to manually scrub through it. In these contexts, the serious scientist/engineer does not care what the average person thinks of AI; it has no bearing on the functioning of these systems or the work they perform. The only people who care about the sentiment of the average person are the people who need to keep the hype train going for their product valuations, to which I have nothing to say but a full-throated "Fuck 'em."
Don’t forget the psychosis and other mental impact.
He would be true AI. I would shower him with love.
Just because some cock sucking finance bros call an LLM an AI doesn't make it an AI.
You're not saying "cock sucking" pejoratively, are you?
I think the people who do that to me are pretty cool tbh
I don't use the term cocksucker myself, but I think the fact that it's a vulgarity already gives it a negative connotation. Like, I didn't pat my wife on the head last night and call her my cute little cocksucker. I can imagine that could be someone else's pillow talk, but it would leave me touch-starved for a while.
I don't THINK calling a gay man a pussyfucker would have the same weight, but I don't have deep enough conversations with gay men to really know. I have heard that some men pride themselves on never having been with a woman, so maybe it would still hurt.
On the flip side, just calling someone a fucker can be enough to start a fight.
I'm not going to pretend the poster meant to use the word like asshole, because cocksucker definitely hits different to male pride. I don't think I would use the word to hurt someone I was angry with, but who knows what might come out when emotions are high. I don't plan on using the word for fighting, but an insult could be enough to provoke a reckless attack. If you don't practice what you say, you might just blurt out something you'll regret.
To summarize, I hope the poster isn't a bigot, but when given the chance they appear to have doubled down. Guess you got your answer.
Atari chess opponent is AI
Age of Empires 2 AI is a cheatin' bitch.
If every cheating bitch is an AI, then so is the Dancer of the Boreal Valley.
The problem isn't that "everything is AI" - it's that people think AI means way more than it actually does.
That superintelligent sci-fi assistant you're picturing? That's called Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI). Both are subcategories of AI, but they're worlds apart from Large Language Models (LLMs). LLMs are intelligent in a narrow sense: they're good at one thing - churning out natural-sounding language - but they're not generally intelligent.
Every AGI is AI, but not every AI is AGI.
idk who that character is, but i don't like AI, it is polluting the environment and polluting the internet, all while disrespecting the work of artists (visual artists, musicians, voice actors, writers, photographers, etc)
opt-out is not consent
"He's one of the good ones"
AI as a concept is great. It should 100% be used for scientific and medical research.
But modern AI is a tool of fascists that is destroying our environment and causing more harm than good to our society. Anyone who uses it unironically should be ashamed of themselves. It is absolutely killing people's ability to think.
--
For those confused by the pic, it's the Iron Giant. Fantastic movie from the 90s, and incredibly sad and nostalgia inducing. Definitely worth a watch.
But yes that's a clanker
One word sums up all that is wrong right now and it is greed.
AI has become synonymous with the worst of human nature hence it has become a loaded term.
Another way to look at this is that it is not AI that is the problem; we are, or more specifically, the people who will use AI to control us.
This technology is like the atomic bomb. We are fast racing towards a future where a few people will be able to dictate what everyone else can do. The person who controls AI and the computing power associated with it will control the world. This is intoxicating, and it has drawn out the worst human beings, who want to misuse this technology.
And it has already happened to some degree. Massive data centers, surveillance technology, and AI are being used to profile people and target them for death. In the future, AI teachers will become the dominant form of teaching. AI will make our decisions, and we will be subject to a system without recourse or redress.
Soon we will have a generation of people who only know what AI has told them. This is the kind of scenario we have been warned against, and the reason that those who dislike propaganda and misinformation are so upset with where things are heading.
I'm a fan of the technology, I've been using it for various projects and I see a lot of potential. But there's widespread anti-AI sentiment on the Fediverse. I notice you're getting a lot of downvotes for merely asking about it.
Yeah same, at work and in my personal projects it's been a real Cambrian explosion. It's funny cause I don't vibe with AI maximalists but the people I do vibe with all hate this thing without really knowing it.
I strongly dislike how LLMs are inserted everywhere.
But LLMs are just one kind of AI and I'm not going to stop using the kinds that actually are useful and appropriate.
Only talentless losers make A.I. "art."
If it worked the way that it does in sci-fi I'd have no problem with it. If it could give us cures for cancer and reactionless drives everyone would be happy.
But it doesn't work like that, and if they keep going down the road of Large Language Models it never will. AI as it is right now is a barely functional toy that is being misused by individuals and major businesses alike.
I am perfectly happy for AI research to continue but they need to be realistic about its capabilities and be honest about their valuations of companies. AI research should still be at the level of "in the lab", it is definitely not a product that should be commercially available yet.
wait, that caption on the iron giant is unironically really funny.
LLMs are fundamentally incapable of caring about what they produce and therefore incapable of making anything interesting. In the early days of LLMs' mainstream use, that issue was somewhat compensated for by randomness and jank, but subsequent advancements in the technology have mainly made their outputs as generic as possible. None of this has to do with the Iron Giant, as he is a fictional character.
My opinion of AI/LLMs aside, I think that even the joking use of a made-up slur against non-humans still legitimizes the general use of slurs (and many who use real slurs believe their targets are subhuman).
He's one of the good ones!!!
Wait...
I am not inherently against "AI". I am against LLMs because they are both an ecological disaster and a social disaster.
Naw, he a homie. He a real clanka.
If you examine closely, you'll see there is no AI, but Vin Diesel reading a script (written by humans).
At the moment, AI (at least the LLM kind) is just a glorified autocomplete, and I think it does more harm than good. Is it a useful tool? Definitely. Should it replace jobs? Hell no. Is it being used as an excuse for the current recession and layoffs caused by offshoring? Hell yes. Is it killing the internet and propagating fake news? Definitely.
If we're talking about other applications (computer vision, image processing, etc.), then yes. I think surveillance states (face verification) and the Ukraine-Russia war make heavy use of these applications.
Wow, you don't know who that is but you'd ask if we'd call him a clanker?
Wasn't he an alien?
A fascinating tool with a lot of potential, but sadly is being used right now to feed a greed machine.
AI is riding the surface of a monster bubble, and anyone gleefully waiting for the pop has no idea what that's going to do to the US economy, and then to everyone else's.
All but 1% of US economic growth last year was AI development and speculation. Combine that with the US passing, for the first time, 200%+ on the Buffett Index and we are screwed.
For reference, the Buffett Index is total stock market valuation vs. GDP. There are more than twice as many dollars in the stock market as we produce in a year. The index was around 130% in 1929 and 2008.
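To make the arithmetic concrete, here's a tiny sketch of how the index is computed. The dollar figures are purely illustrative, not actual market data:

```python
def buffett_index(total_market_cap: float, gdp: float) -> float:
    """Buffett Indicator: total stock market valuation as a percent of annual GDP."""
    return total_market_cap / gdp * 100

# Illustrative numbers only: a hypothetical $60T market cap against $29T GDP
print(round(buffett_index(60e12, 29e12)))  # → 207, i.e. past the 200% mark
```

A reading over 200% just means the market is valued at more than twice a year's economic output, which is the comparison the comment above is making against 1929 and 2008.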
I'm biased because of the work my kid does in the field. It's paying his mortgage so... 😉
PERSONALLY, not a fan; I think it's a dangerous abdication of personal responsibility... BUT...
I do think I found a legitimate creative use for it.
There's an AI powered app for a specific brand of guitar amplifier. If you want your guitar to sound like a particular artist or a particular song, you tell it via a natural language input and it does all the adjustments for you.
You STILL have to have the personal talent to, you know, PLAY the guitar, but it saves you hours of fiddling with dials and figuring out what effects and pedals to apply to get the sound you're looking for.
Video, same player, same guitar, same amp, multiple sounds:
this is cool tbh. i think thats what ai should be used for, not some ai slop
That's what I thought, it allows creatives to be creative.
Kind of like if you had an art program you could ask for "Give me a paint palette with the colors from Starry Night."
You still have to have the artistic talent to make use of them, it's not going to help you there, but it saves you hours of research and mixing.
Depends on the use case.
- AI that parses manuals, documentation and dumbs it down for me? Yes please.
- AI that generates images? Eh, kinda undecided. I have seen really impressive AI videos, which I couldn't tell from real videography.
- AI as a personal assistant? No, too much energy wasted on minor issues that could have been solved more efficiently. E.g: I have X at home, what could I make for dinner?
- AI (not LLMs) in medical/scientific fields. Very intriguing. Yes. Good shit.
- AI in children's toys. Eww. Burn it! Fecken burn it!!!
I think it can be a great tool, but it is overvalued atm, and there are AI images everywhere, which is really frustrating. When I go to Pinterest, I want to see human input. When I check my LinkedIn, everyone and their cat is using AI graphics. I get it, it's quick and easy, but such a waste of energy.
If we want to keep using AI, we need to reduce the quantity in which we use it and its resource consumption.
From a practical standpoint, it's useful for laying the groundwork on projects, but it falls down on further progress. I've used it in coding and 3D motion, and in both cases my experience was like that.
From an environmental point, it's an overengineered mess. Local models are satisfactory for most use cases, and we don't really need huge computing clusters dedicated for AI.
Not to his face.
I support running AI locally, since it respects privacy and doesn't damage the environment: local AI draws its energy from the device it's running on. But in terms of other AI, nope.
Very bullish long term. I think I can say with pretty good confidence that it's possible to achieve human-level AI, and that doing so would be quite valuable. This will very likely be transformational, on the order of the economic and social change that occurred when society's main activity moved from the primary sector of the economy to the secondary, or from the secondary to the tertiary. Each of those shifts changed the fundamental "limiting factor" on production and produced great change in human society.
Hard to estimate which companies or efforts might do well, and the near term is a lot less certain.
In the past, we've had useful and successful technologies that we now use every day that we developed using machine learning. Think of optical character recognition (OCR) or the speech recognition that powers computer phone systems. But they've often taken some time to polish (some here may remember "egg freckles").
There are some companies promising the stars on time and with their particular product, but that's true of every technology.
I don't think that we're going to directly get an advanced AI by scaling up or tweaking LLMs, though maybe such a thing could internally make use of LLMs. The thing that made neural net stuff take off in the past few years and suddenly have a lot of interesting applications wasn't really fundamental research breakthroughs on the software side. It was scaling up on hardware what we'd done in the past.
I think that generative AI can produce things of real value now, and people will, no doubt, continue R&D on ways to do interesting things with it. I think that the real impact here is not so much technically interesting as it is economic. We got a lot of applications in a short period of time and we are putting the infrastructure in place now to use more-advanced systems in place of them.
I generally think that the output of pure LLMs or diffusion models is more interesting for human-consumed output like images. We are tolerant of a lot of errors there; our brains just need to be cued with approximately the right thing. I'm more skeptical about using LLMs to author computer software. I think the real problems there will need AGI, and a deeper understanding of the world and the thinking process, to automate reasonably. I understand why people want to automate it now (software that can write better software might be a powerful positive feedback loop), but I'm dubious that it's going to be a massive win without more R&D producing more sophisticated forms of AI.
On "limited AI", I'm interested to see what will happen with models that can translate to and work with 3D models of the world rather than 2D. I think that that might open a lot of doors, and I don't think that the technical hump to getting there is likely all that large.
I think that generative AI speech synth is really neat; the quality relative to the level of effort to do a voice is already quite good. One thing we're going to need is some kind of annotated markup that includes things like emotional inflection, accent, etc., but we don't have a massive existing training corpus of that the way we do plain text.
Some of the big questions I have on generative AI:
- Will we be able to do sparser, MoE-oriented models whose experts have few interconnections among themselves? If so, that might radically change what hardware is required: instead of needing highly-specialized AI-oriented hardware from Nvidia, maybe a set of smaller GPUs would work.
- Can we radically improve training time? Right now, the models people use are trained by running a lot of compute-expensive backpropagation, and what we get is a "snapshot" that doesn't really change afterwards. The human brain is in part a neural net, but it is much better at learning new things at low computational cost. Can we radically improve here? My guess is yes.
- Can we radically improve inference efficiency? My guess is yes; we probably make very, very inefficient use of computational capacity today relative to a human. Nvidia hardware runs at a gigahertz clock; the human brain at about 90 Hz.
- Can we radically improve inference efficiency by using functions in the neural net other than a sum-of-products, which I believe is what current hardware computes? CPU-based neural nets used to use a sigmoid activation function. I don't know whether the GPU-based ones of today do, since I haven't read up on the details, but if not, I assume they will. The point is that introducing it was a win for efficiency: having access to that function reduces how many neurons are required to reasonably model a lot of things we'd like to do, like approximating a Boolean function. Maybe we can use a number of different functions, tied to particular neurons in the net, rather than having to approximate all of them via the same function. For example, a computer already has silicon to do integer arithmetic efficiently. Can we give the net direct access to that hardware and, using general techniques, train it to incorporate that hardware where doing so is efficient? Learn to use the arithmetic unit to, say, solve arithmetic problems like "What is 1+1?", or, more interestingly, do so for all other problems that make use of arithmetic?
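To make the MoE question above concrete, here's a toy sketch of top-1 routing, in pure Python with made-up random weights rather than any real MoE implementation. The key property is that the gate selects one expert and only that expert's matmul runs, so per-input compute stays roughly flat as you add experts, and experts with no cross-connections could in principle live on separate, smaller devices:

```python
import math
import random

random.seed(0)

DIM, N_EXPERTS = 4, 8

# Gate: one score row per expert. Experts: independent linear maps with no
# interconnections among themselves (hypothetical toy weights, untrained).
gate_w = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(N_EXPERTS)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def softmax(xs):
    mx = max(xs)
    exps = [math.exp(x - mx) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x):
    scores = softmax(matvec(gate_w, x))               # how relevant is each expert?
    best = max(range(N_EXPERTS), key=lambda i: scores[i])
    y = matvec(experts[best], x)                      # run ONLY the selected expert
    return [scores[best] * v for v in y], best        # scale output by gate weight

out, chosen = moe_forward([1.0, -0.5, 0.25, 2.0])
print(f"routed to expert {chosen}; 1 of {N_EXPERTS} expert matmuls executed")
```

Real systems route to the top-k experts per token and add load-balancing losses, but the sparsity argument is the same: compute scales with k, not with the total expert count.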
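On the activation-function question above, the classic illustration of why a nonlinearity like sigmoid helps with Boolean functions is XOR, which no single sum-of-products unit with a threshold can represent, but a tiny sigmoid net handles cleanly. The weights below are hand-picked for illustration, not trained:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def xor_net(x, y):
    # Two sigmoid hidden units: a soft OR and a soft AND, combined into XOR.
    h_or  = sigmoid(20 * x + 20 * y - 10)   # ~1 when x OR y
    h_and = sigmoid(20 * x + 20 * y - 30)   # ~1 when x AND y
    return sigmoid(20 * h_or - 20 * h_and - 10)

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, round(xor_net(x, y)))  # → 0, 1, 1, 0
```

That's the efficiency point in miniature: the right per-neuron function lets a handful of units model something that would otherwise need many more, which is also why giving a net direct access to, say, an integer arithmetic unit could be a win.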
He is what he chooses to be.