this post was submitted on 10 Apr 2025
220 points (85.7% liked)
memes
you are viewing a single comment's thread
AI is fantastic.
It's what it's being used for that fucking sucks.
I love reading about shit like "LLMs help scientists develop new pattern-seeking behaviors and theories" and shit. Fucking hate when I see AI art or places trying to 'streamline' their processes with half-assed AI assistance.
This. I was a PhD-seeking cybersecurity researcher leaning heavily into AI up until last year, and it bothered me to no end that some of the most promising technology I have ever seen was being used primarily to enhance the police state or increase BP profits by a few %. AI is literally a step towards a utopian post-scarcity future, but instead of being used that way it was immediately weaponized against the working class for the benefit of the parasite class.
The mad "gold rush" mentality towards AGI is nerve-racking. I'm reminded of Protogen's attitude towards the Protomolecule in The Expanse.
I figured we still had 5-ish years to figure it out, but the rapid progress against HLE (Humanity's Last Exam) makes me nervous.
But sure, let's just rush headlong towards the precipice, how hard can alignment be really? My anxiety about the future and the importance of getting this right are not eased by people scoffing because "just count the fingers!" When the field is changing so fast, looking at what was going on a few years ago isn't helpful.
Well, past my pay grade.
I don't understand what this has to do with a PhD in cyber security but I do agree with what you said though
It gives people context for what kind of ai math I'm familiar with/formed my opinions about ai on (ie, generally lightweight transformer models rather than LLMs), as well as a small logos appeal of "hey I spent years of my life researching that shit, I at least kinda know what I'm talking about"
AI as a concept is amazing, and some applications it’s being used for are equally amazing. It’s the mainstream AI drivel that I fucking hate.
But there’s also these things that put me off:
The idea of being able to run smaller models locally is amazing and everyone should play around with them. I find it to be fun for toy apps and experimenting, but I’ve yet to see a single good use case from the multitude of companies using it (with the exception of cases like you mentioned).
I've had good experiences with Perplexity as an AI tool that hallucinates a lot less. It's basically a search engine that then feeds the results into a model for interpretation. Works pretty well!
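That search-then-interpret pattern is easy to sketch. This is a toy illustration, not Perplexity's actual pipeline; `search_web` and `llm_complete` are hypothetical stand-ins for whatever search API and LLM client you plug in:

```python
def answer_with_sources(question, search_web, llm_complete, top_k=3):
    """Retrieve documents first, then ask the model to answer from them.

    search_web(question) -> list of (title, snippet) pairs
    llm_complete(prompt) -> model's text response
    """
    results = search_web(question)[:top_k]
    context = "\n\n".join(f"{title}:\n{snippet}" for title, snippet in results)
    prompt = (
        "Answer the question using ONLY the sources below, "
        "and cite the source titles you used.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```

Grounding the model in retrieved text is what cuts down on hallucination: the model is summarizing sources instead of answering from memory.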
I should check this out. I think it’s great as a companion tool to coding, but Copilot has been so hit or miss for me.
In my experience, copilot is one of the weakest tools right now. I want to like it, as I have a license at my job, but it's really hit or miss. Perplexity hasn't steered me wrong yet!
More accurate meme:
AI is amazing
AI generated images and text are not
I literally use LLMs every day at work to help me code, and yes they are great, even for senior engineers who know what they're doing, it's like using intellisense or something like resharper on steroids.
Copilot Web, which just combines Bing's substandard search engine with LLMs, is genuinely more useful and accurate than Google for me.
Capitalism, wildly uneven distribution of societal resources, and exploitation all suck, but what LLMs can do on a technical level is pretty wild and would be universally praised if it weren't for the job loss implications.
AI text and image generation can do awesome things. It doesn't matter, because they can also do far worse things. Spam is one example.
The same argument can be made about computers or the internet or government or schools or speeches or...
If it does more harm than good, why support it?
The argument you presented in your last comment wasn't 'whether it does more harm than good', but 'whether it can do more harm than good'.
If you want to talk about whether LLMs actually do more harm than good in the present world, then I would challenge you to name an ill effect that's the result of LLMs and not the result of capitalism.
Technology, be it physical, or computer based, has been automating people out of jobs literally since jump. You can either vainly fight technical progress or you can fight for a system that shares the rewards from that progress.
I wasn't clear enough in my first comment.
It would be hard to name a bad thing that can't be linked to capitalism. People using AI to claim they did something impressive when they didn't could be an example.
Yes, so then maybe the problem is with capitalism, not with new technology.
This is a real "everywhere I poke hurts" ... "Yeah, cause your finger is broken", situation.
It's not either. Those things companies are trying to sell you out there and CEOs are ordering you to use unambiguously suck.
Yep. Automation, machine learning etc should be used to get rid of bullshit jobs so that people have more time to invest in art and stuff. Instead, "AI" is used to get rid of artists so more people work bullshit jobs.
But at the same time, there are some great fucking uses for machine learning. For example, my father's an anesthesia nurse, and he told me the doctors at his hospital use it to analyze imaging results like MRI and CT. A technician checks the results, but a trained tech needs much less time for that than for analyzing the images themselves, AND the machine analysis actually misses fewer details and is more precise than humans.
But "AI art" is still cancer.
LLMentalist strikes again
Same with crypto. Both have potential, but are being misused instead.
What do you use it for?
I personally use it to make art for my FOSS game I make as a hobby.
Cool. Do you have a link?
Ofc! https://dbzer0.itch.io/hypnagonia
Looks like the video on the homepage is broken? But looks awesome!
It's some itch nonsense with certain browsers. Try another browser, but the video isn't that important.
Me, personally? Nothing. Prefer to go without AI.
Not the OP you asked, but I've used AI before to examine netflow data at the head of a medium-sized network and identify malicious traffic via netflow anomalies, rather than the signature-based methods used by current network intrusion detection systems. Its effectiveness is contingent on having good data with labeled malicious packets to train on, but it was pretty dope in lab conditions to watch a graduate ethical hacking class try to compromise my testbed network while my best-performing AI-powered intrusion detection algorithms accurately flagged something like 90% of the malicious traffic.
If we had an organization dedicated to creating like a modern version of the NSL-KDD dataset every 6 months or so I think this type of network intrusion detection system would be extremely effective.
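To give a flavor of the supervised approach described above (not the actual research code): train a classifier on labeled flow features, then flag new flows. The features and data here are synthetic stand-ins for something like the NSL-KDD columns (duration, bytes sent, bytes received):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic labeled flows: [duration_s, src_bytes, dst_bytes].
# Benign traffic is short with modest byte counts; the "attack" class
# mimics exfiltration-like flows (long duration, huge uploads).
benign = rng.normal([1.0, 500.0, 2000.0], [0.5, 100.0, 400.0], size=(500, 3))
attack = rng.normal([30.0, 50000.0, 100.0], [5.0, 5000.0, 50.0], size=(500, 3))

X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new flow that resembles the attack cluster gets flagged.
new_flow = np.array([[25.0, 48000.0, 120.0]])
pred = clf.predict(new_flow)
```

Real netflow features are messier and attack classes overlap with benign traffic far more than this toy split, which is exactly why a regularly refreshed labeled dataset matters so much.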