[–] [email protected] 86 points 2 days ago (7 children)

AI is fantastic.

It's what it's being used for that fucking sucks.

I love reading about shit like "LLMs help scientists develop new pattern-seeking behaviors and theories" and shit. Fucking hate when I see AI art or places trying to 'streamline' their processes with half-assed AI assistance.

[–] [email protected] 22 points 2 days ago (2 children)

This. I was a PhD-seeking cybersecurity researcher leaning heavily into AI up until last year, and it bothered me to no end that some of the most promising technology I have ever seen was being used primarily to enhance the police state or increase BP's profits by a few percent. AI is literally a step towards a utopian post-scarcity future, but instead of being used that way it was immediately weaponized against the working class for the benefit of the parasite class.

[–] [email protected] 1 points 1 day ago (1 children)

I don't understand what this has to do with a PhD in cybersecurity, but I do agree with what you said.

[–] [email protected] 2 points 1 day ago

It gives people context for what kind of AI math I'm familiar with and formed my opinions about AI on (i.e., generally lightweight transformer models rather than LLMs), as well as a small logos appeal of "hey, I spent years of my life researching that shit, I at least kinda know what I'm talking about."

[–] [email protected] 3 points 1 day ago

The mad "gold rush" mentality towards AGI is nerve-racking. I'm reminded of Protogen's attitude towards the Protomolecule in The Expanse.

I figured we still had 5-ish years to figure it out, but the rapid progress against HLE (Humanity's Last Exam) makes me nervous.

But sure, let's just rush headlong towards the precipice; how hard can alignment be, really? My anxiety about the future, and about the importance of getting this right, is not eased by people scoffing "just count the fingers!" When the field is changing this fast, looking at what was going on a few years ago isn't helpful.

Well, past my pay grade.

[–] [email protected] 24 points 2 days ago (1 children)

AI as a concept is amazing, and some applications it’s being used for are equally amazing. It’s the mainstream AI drivel that I fucking hate.

But there are also these things that put me off:

  • the glaring lack of ethics behind the companies pushing it (Meta and OpenAI, for example), which is more a capitalism problem than anything
  • the fact that it was built on plagiarism without consulting artists and authors (who probably would’ve been open to the idea if it had been presented on a level playing field)
  • the fact that it always hallucinates (I can’t get it to stop making up arbitrary bullshit no matter what I do)
  • the resources required and the stress it places on power grids and the environment (and running it locally is out of reach for most end users, since it requires a killer rig)
  • the massive shortage of GPUs and the huge price hike (which we can also thank crypto for)

The idea of being able to run smaller models locally is amazing and everyone should play around with them. I find it to be fun for toy apps and experimenting, but I’ve yet to see a single good use case from the multitude of companies using it (with the exception of cases like you mentioned).
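
If anyone wants a starting point, here's a minimal sketch of running a tiny model locally with the Hugging Face transformers library; the model name is just an example stand-in, so swap in whatever small model your hardware can handle:

```python
# Minimal sketch: run a small open-weights model locally with Hugging Face transformers.
# Assumes `pip install transformers torch`; "distilgpt2" is only an example stand-in,
# you'd normally pick a small instruct-tuned model that fits your hardware.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="distilgpt2",  # tiny example model, runs fine on CPU
)

prompt = "Local language models are fun because"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```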

[–] [email protected] 1 points 2 days ago (1 children)

I've had good experiences with Perplexity as an AI tool that hallucinates a lot less. It's basically a search engine that then feeds the information into a model for interpretation; works pretty well!
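
That search-then-summarize pattern (retrieval-augmented generation) is easy to sketch yourself. This is just a rough outline, not Perplexity's actual pipeline; web_search() and ask_llm() are hypothetical placeholders for whatever search API and model you'd plug in:

```python
# Rough sketch of the "search engine feeds an LLM" pattern (retrieval-augmented generation).
# Not Perplexity's actual implementation; web_search() and ask_llm() are hypothetical
# placeholders for whatever search API and model you have access to.

def web_search(query: str, k: int = 5) -> list[dict]:
    """Hypothetical: return the top-k results as {'url': ..., 'snippet': ...} dicts."""
    raise NotImplementedError("plug in a real search API here")

def ask_llm(prompt: str) -> str:
    """Hypothetical: send the prompt to whatever model you run."""
    raise NotImplementedError("plug in a real model here")

def answer_with_sources(question: str) -> str:
    results = web_search(question)
    sources = "\n\n".join(f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results))
    prompt = (
        "Answer the question using ONLY the sources below, citing them as [n]. "
        "If the sources don't contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```

Grounding the model in retrieved sources is the main reason this setup hallucinates less than asking a bare model from memory.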

[–] [email protected] 1 points 2 days ago (1 children)

I should check this out. I think it’s great as a companion tool to coding, but Copilot has been so hit or miss for me.

[–] [email protected] 2 points 2 days ago

In my experience, Copilot is one of the weakest tools right now. I want to like it, as I have a license at my job, but it's really hit or miss. Perplexity hasn't steered me wrong yet!

[–] [email protected] 13 points 2 days ago (2 children)
[–] [email protected] 3 points 1 day ago (1 children)

AI is amazing

AI-generated images and text are not

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

I literally use LLMs every day at work to help me code, and yes, they are great, even for senior engineers who know what they're doing. It's like using IntelliSense, or something like ReSharper on steroids.

Copilot Web, which is just Bing's substandard search engine combined with LLMs, is genuinely more useful and accurate than Google.

Capitalism, wildly uneven distribution of societal resources, and exploitation all suck, but what LLMs can do on a technical level is pretty wild and would be universally praised if it weren't for the job loss implications.

[–] [email protected] 1 points 1 day ago (1 children)

AI text and image generation can do awesome things. It doesn't matter, because they can also do way worse things; spam is one example.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

The same argument can be made about computers or the internet or government or schools or speeches or...

[–] [email protected] 1 points 1 day ago (1 children)

If it does more harm than good, why support it?

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

The argument you presented in your last comment wasn't "whether it does more harm than good", but "whether it can do more harm than good".

If you want to talk about whether LLMs actually do more harm than good in the present world, then I would challenge you to name an ill effect that's the result of LLMs and not the result of capitalism.

Technology, be it physical or computer-based, has been automating people out of jobs literally since jump. You can either vainly fight technological progress or fight for a system that shares the rewards of that progress.

[–] [email protected] 1 points 1 day ago (1 children)

I wasn't clear enough in the first comment.

It would be hard to name a bad thing that can't be linked to capitalism. People using AI to claim they did something impressive when they didn't could be an example.

[–] [email protected] 1 points 1 day ago

It would be hard to name a bad thing that can't be linked to capitalism.

Yes, so then maybe the problem is with capitalism, not with new technology.

This is a real "everywhere I poke, it hurts" ... "yeah, because your finger is broken" situation.

[–] [email protected] -3 points 2 days ago

It's not either. The things companies are trying to sell you out there, and that CEOs are ordering you to use, unambiguously suck.

[–] [email protected] 8 points 2 days ago

Yep. Automation, machine learning, etc. should be used to get rid of bullshit jobs so that people have more time to invest in art and stuff. Instead, "AI" is used to get rid of artists so more people work bullshit jobs.

But at the same time, there are some great fucking uses for machine learning. For example, my father's an anesthesia nurse, and he told me that at his hospital, doctors use it to analyze imaging results like MRI and CT scans. A technician checks the results, but a trained tech needs far less time for that than for analyzing the images themselves, AND the machine analysis actually misses fewer details and is more precise than humans.

But "AI art" is still cancer.

[–] [email protected] 2 points 2 days ago

Same with crypto. Both have potential, but are being misused instead.

[–] [email protected] 2 points 2 days ago (3 children)
[–] [email protected] 3 points 1 day ago (1 children)

I personally use it to make art for my FOSS game I make as a hobby.

[–] [email protected] 1 points 1 day ago (1 children)
[–] [email protected] 2 points 1 day ago (1 children)
[–] [email protected] 1 points 1 day ago (1 children)

Looks like the video on the homepage is broken? But looks awesome!

[–] [email protected] 3 points 1 day ago

It's some itch nonsense with different browsers. Try another browser, but the video isn't that important.

[–] [email protected] 6 points 2 days ago

Me, personally? Nothing. Prefer to go without AI.

[–] [email protected] 2 points 2 days ago

Not the OP you asked, but I've used AI before to examine netflow data at the head of a medium-sized network and identify malicious traffic via netflow anomalies, rather than the signature-based methods used by current network intrusion detection systems. Its effectiveness is contingent on having good data that contains labeled malicious packets to train on, but it was pretty dope in lab conditions to watch a graduate ethical hacking class try to compromise my testbed network while my best-performing AI-powered intrusion detection algorithms accurately flagged something like 90% of the malicious traffic.

If we had an organization dedicated to creating like a modern version of the NSL-KDD dataset every 6 months or so I think this type of network intrusion detection system would be extremely effective.
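
For anyone curious what that looks like in practice, here's a toy sketch of the supervised flavor of that approach using scikit-learn. The flow features and the "flows.csv" file are made-up stand-ins for real labeled netflow / NSL-KDD-style data, and a real pipeline needs far more feature engineering:

```python
# Toy sketch of supervised flow-based intrusion detection, in the spirit of the approach
# described above. Assumes `pip install scikit-learn pandas`; the feature names and
# "flows.csv" are made-up stand-ins for real labeled netflow / NSL-KDD-style data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical labeled flow records: one row per flow, 'label' is 1 for malicious.
df = pd.read_csv("flows.csv")
features = ["duration", "bytes_in", "bytes_out", "packets", "dst_port"]  # made-up columns
X, y = df[features], df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Recall on the malicious class is the "how much of the bad traffic did we flag" number.
print(classification_report(y_test, clf.predict(X_test), target_names=["benign", "malicious"]))
```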

[–] [email protected] 7 points 1 day ago

I like the steak sauce