this post was submitted on 07 Jul 2025
812 points (98.3% liked)

Technology

[–] TheGrandNagus@lemmy.world 119 points 1 day ago* (last edited 10 hours ago) (31 children)

LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn't be used for most things that are not serious either.

It's a shame that the same "AI" label gets applied to a whole host of different technologies, because LLMs, limited in usability yet hyped to the moon, end up dragging down other, more impressive advancements.

For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.

Being able to recognise speech in loud environments, or remove background noise from recordings, is improving loads too.

My friend is involved in making a mod for Fallout 4, and there was a call for people to record voice lines - she says some recordings of dubious quality, which would've been unusable before, can now be used without issue thanks to AI denoising algorithms. That is genuinely useful!
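(For anyone curious what that kind of cleanup looks like in code, here's a minimal sketch using the open-source noisereduce package. It's classic spectral gating rather than whatever neural denoiser the modders actually used, and the filenames are made up, but it gives the flavour.)

```python
# Minimal sketch: clean up a noisy voice recording with spectral gating.
# Assumes the open-source noisereduce package; filenames are hypothetical.
from scipy.io import wavfile
import noisereduce as nr

# Load a mono voice line (sample rate + raw samples).
rate, data = wavfile.read("voice_line_raw.wav")

# Estimate the noise profile from the clip itself and suppress it.
cleaned = nr.reduce_noise(y=data, sr=rate)

# Write the denoised take back out for use in the mod.
wavfile.write("voice_line_clean.wav", rate, cleaned)
```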

The same goes for things like pattern/image analysis, which looks very promising in medical diagnostics.

All of these get branded as "AI". A layperson might not realise that they are completely different branches of technology, and therefore reject useful applications of "AI" tech, because they've learned not to trust anything branded as AI after being let down by LLMs.

[–] punkwalrus@lemmy.world 0 points 1 day ago (3 children)

I'd compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.

[–] zbyte64@awful.systems 2 points 8 hours ago* (last edited 8 hours ago) (1 children)

A junior developer actually learns from doing the job; an LLM only "learns" when its training corpus is updated and a new model is trained.

[–] jumping_redditor@sh.itjust.works -1 points 4 hours ago (1 children)

An LLM costs less, and won't complain when yelled at.

[–] zbyte64@awful.systems 1 points 3 hours ago

Why would you ever yell at an employee unless you're bad at managing people? And you think you can manage an LLM better because it doesn't complain when you're obviously wrong?
