One of the inventors of Siri, the original AI agent, wants you to "handle with care" when it comes to artificial intelligence. But are we becoming too cautious around AI in Europe and risking our future?

[–] Catoblepas@piefed.blahaj.zone 19 points 3 days ago (1 children)

Hat on top of a hat technology. The underlying problems with LLMs remain unchanged, and “agentic AI” is basically a marketing term to make people think those problems are solved. I realize you probably know this, I’m just kvetching.

[–] Auth@lemmy.world 0 points 3 days ago (3 children)

Not really. By breaking the problem down, you can adjust the models to the task. There's a lot of work going into this stuff, and there are ways to turn down the randomness to get more consistent outputs for simple tasks.
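For illustration, a minimal sketch of what "turning down the randomness" can look like against an OpenAI-style chat API; the model name, prompt, and seed below are placeholder choices, not anything from this thread:

```python
# Minimal sketch: lowering sampling randomness for a simple, repeatable task.
# Assumes the official openai Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Extract the invoice total as a bare number."},
        {"role": "user", "content": "Invoice #1234 ... Total due: $418.27"},
    ],
    temperature=0,  # near-greedy decoding: far less run-to-run variation
    top_p=1,
    seed=1234,      # some providers also accept a seed for extra reproducibility
)
print(response.choices[0].message.content)
```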

[–] pinball_wizard@lemmy.zip 5 points 1 day ago* (last edited 1 day ago)

there are ways to turn down the randomness to get more consistent outputs for simple tasks.

Yes: shell scripting, which we have had for half a century.

But the term "Agentic AI" sells better than "shell scripting with extra steps and shittier results."

[–] MangoCats@feddit.it 8 points 3 days ago (2 children)

turn down the randomness to get more consistent outputs for simple tasks.

This is a tricky one... if you can define good success/failure criteria, then randomness coupled with an accurate measure of success is how "AI" like AlphaGo learns to win games really, really well.

When using AI to build computer programs and systems, if you have good tests for what "success" looks like, you actually want a fair amount of randomness in the algorithms trying to make things work; without it, when they fail, they just end up stuck, out of ideas.
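As a toy sketch of that generate-and-test loop (both helpers below are hypothetical stand-ins, not a real model call or test framework):

```python
def generate_candidate(task: str, temperature: float) -> str:
    """Hypothetical stand-in for an LLM call that proposes a solution."""
    ...

def run_tests(candidate: str) -> bool:
    """Hypothetical stand-in for the success measure, e.g. a unit test suite."""
    ...

def solve(task: str, attempts: int = 10) -> str | None:
    # Randomness in generation means a failed attempt explores a new idea
    # instead of repeating the same dead end; the tests decide what counts as success.
    for _ in range(attempts):
        candidate = generate_candidate(task, temperature=0.8)
        if run_tests(candidate):
            return candidate
    return None  # out of budget, not out of ideas
```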

[–] KairuByte@lemmy.dbzer0.com 3 points 1 day ago (2 children)

To play devil's advocate, agentic things wouldn’t necessarily include software development. “Hey Siri, create me an e-commerce site” isn’t likely to happen for a long while, because like you said it’s a complex thing that doesn’t have clear success measures. But “Hey Siri, get me a restaurant reservation at place, hire a taxi for me to get there, and let Brad know the details” can be broken down into a number of different “simple” things that have simple-to-define measures of success. Did a reservation get booked? Did we tell Brad the details? Etc.
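A rough sketch of that decomposition in code; every helper below is a hypothetical placeholder rather than a real Siri, booking, or messaging API:

```python
# Each step pairs one "simple" action with an easy-to-check measure of success.
# The three helpers are placeholders that just return True so the orchestration runs.

def book_table(restaurant: str, time: str) -> bool:
    return True  # placeholder: real code would call a reservations API

def book_taxi(destination: str, arrive_by: str) -> bool:
    return True  # placeholder: real code would call a ride-hailing API

def message_contact(name: str, text: str) -> bool:
    return True  # placeholder: real code would send a message

def dinner_plan(restaurant: str, time: str) -> bool:
    steps = [
        ("reservation booked", lambda: book_table(restaurant, time)),
        ("taxi arranged",      lambda: book_taxi(restaurant, arrive_by=time)),
        ("Brad told",          lambda: message_contact("Brad", f"{restaurant} at {time}")),
    ]
    for label, run in steps:
        if not run():  # each sub-task has its own yes/no check
            print(f"failed: {label}")
            return False
    return True

print(dinner_plan("Place", "19:30"))
```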

[–] MangoCats@feddit.it 1 points 1 day ago

“Hey Siri, create me an e-commerce site”

You should try it. If your e-commerce site is simple, with a lot of similar examples out in the wild to point at, I believe the latest agents actually can do such a thing. You'll just have to give them access to your financial account details so the site can process payments to you, you understand? That's a joke, but it's also true: you need to be able to check what the AI has done to be sure it's doing what you want.

[–] pinball_wizard@lemmy.zip 1 points 1 day ago (1 children)

“Hey Siri, create me an e-commerce site” isn’t likely to happen for a long while, because like you said it’s a complex thing that doesn’t have clear success measures.

One would hope so, but of course Someone is trying it, and it has gone as poorly as you might imagine.

[–] KairuByte@lemmy.dbzer0.com 1 points 1 day ago

Yes, but my point is that it’s a completely separate problem. Think of agentic tasks like PowerShell cmdlets: they generally only do one thing, but you can chain them together to achieve a larger goal.

You’re complaining about a single cmdlet, or a specific type of cmdlet, while the topic is cmdlets in general.

[–] pinball_wizard@lemmy.zip 1 points 1 day ago (1 children)

Yes. You've shared the use case where Agentic AI makes sense.

Basically, if I need more randomness than a shell script can supply, it makes sense to mix a learning model in.

The use case where I think we'll continue to see significant adoption is (low-quality) advertising in contexts where only the product matters (not the brand). The cost of failure is lower, and the reward for creativity is higher.

Even in that nearly ideal use case, many companies leveraging it are going to discover that their brand image can't afford to be associated with sociopathic AI slop. So I think even that trend is about to peak and decline.

[–] MangoCats@feddit.it 1 points 1 day ago

I started working with AI in earnest a few weeks ago, and I find myself constantly drawing the distinction between "deterministic" processes and AI-driven things. What I'm mostly focused on is using AI to develop reliable deterministic processes (shell scripts, and more complex things), because while it's really super cool that I can ask an AI agent to "do a thing" and it just does what I want without being told all the details, it's really super un-cool that the tenth time I ask it to do a very similar, even identical, thing it gets it wrong, sometimes horribly wrong: archive these files... oops, I accidentally irretrievably deleted them.
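A small sketch of that split, where the model is asked once to draft the script and every later run executes the reviewed, deterministic artifact; the ask_model helper and the script name are assumptions for illustration:

```python
import os
import subprocess

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; a real version would return the model's script text."""
    return "#!/bin/bash\n# placeholder draft - a real model response would go here\n"

SCRIPT = "archive_logs.sh"  # placeholder filename

if not os.path.exists(SCRIPT):
    # Non-deterministic step, done once: have the model draft the script,
    # then a human reviews it before it is ever run.
    draft = ask_model("Write a bash script that archives ./logs into dated tarballs "
                      "without ever deleting the originals.")
    with open(SCRIPT, "w") as f:
        f.write(draft)
    print(f"Review {SCRIPT} before first use.")
else:
    # Deterministic step, every time after: run the fixed, reviewed script.
    subprocess.run(["bash", SCRIPT], check=True)
```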

[–] floquant@lemmy.dbzer0.com 6 points 3 days ago (1 children)

You're both right, imo. LLMs and every subsequent improvement are fundamentally ruined by marketing heads, like oh so many things in the history of computing, so even if agentic AI is actually an improvement, it doesn't matter, because everyone is using it to do stupid fucking things.

[–] Auth@lemmy.world 2 points 3 days ago

Yeah, like stringing five ChatGPTs together saying "you are a scientist, you are a product lead engineer," etc. is dumb, but chaining ChatGPT into a coded tool, into a vision model, into a specific small-time LLM is an interesting new way to build workflows for complex and dynamic tasks.
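A loose sketch of what that second kind of chain looks like; every function and component name below is hypothetical, just marking where each piece would sit:

```python
# Hypothetical pipeline: a general LLM plans, plain coded tooling fetches data,
# a vision model reads the images, and a small task-specific LLM writes the result.

def plan_with_general_llm(request: str) -> dict: ...
def fetch_product_pages(plan: dict) -> list[bytes]: ...        # ordinary code, no AI
def describe_images_with_vision_model(pages: list[bytes]) -> list[str]: ...
def summarize_with_small_llm(descriptions: list[str]) -> str: ...

def workflow(request: str) -> str:
    plan = plan_with_general_llm(request)
    pages = fetch_product_pages(plan)
    descriptions = describe_images_with_vision_model(pages)
    return summarize_with_small_llm(descriptions)
```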