this post was submitted on 17 Apr 2026

Explain Like I'm Five

We all know there's a lot of hype and skepticism around AI, and over the last year or so I've been hearing a lot about "Agentic" AI. I've struggled to get a real grasp on what that means without working examples; however, I've begun to see hints of something: videos mocking coders who scroll their phones while waiting for the AI to complete a task, peers claiming Claude but not GPT can do complex reasoning and planning. Not much, but enough for me to stop dismissing the term as pure buzzword.

Agentic AI is defined as "autonomous systems that act independently to achieve complex, multi-step goals without continuous human oversight." This sounds fanciful, but my basic understanding is that these agentic systems do the large-scale reasoning and then use other apps to achieve smaller sub-goals. Essentially, these systems let you set up pipelines as verbal lists of tasks, and they then work their way through those tasks with some (perhaps limited) problem solving. A crucial aspect seems to be that the more tools you give the bot, the more it can do and the more failures it can handle. Sometimes "more tools" means a textbook or a document about your work to help it reason and plan. Sometimes it means writing a script for it to use in future analyses.
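To make that concrete, here's a minimal sketch of what such an agent loop looks like in Python. Everything here is hypothetical: the `chat` function and its message format stand in for whatever model API you're using; the point is just the shape of the loop (model plans, tools execute, results go back into context):

```python
# Minimal sketch of an "agentic" loop. The model does the planning;
# the tools (scripts, searches, etc.) do the actual work. The chat()
# callable and its reply format are made up for illustration.

def run_agent(goal, tools, chat, max_steps=10):
    """Work toward a goal by letting the model pick tools until it's done."""
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = chat(history)  # the LLM decides the next step
        if reply["type"] == "final":
            return reply["content"]  # the model says it's finished
        # otherwise the model asked to use a tool
        name, args = reply["tool"], reply["args"]
        result = tools[name](**args)  # run the tool on the model's behalf
        history.append({"role": "tool", "content": str(result)})
    return None  # gave up after max_steps
```

Real frameworks add retries, validation, and structured tool schemas on top, but this loop is more or less the whole trick: the "autonomy" is just the model re-reading its own history and choosing the next tool call.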

Now, while these sound mildly interesting, they're essentially useless if they're locked behind a paywall. I'm not paying some company to think poorly for me. Someone else's tools are not an extension of my skills or personal power, since I'd be neither able nor willing to build on them. However, the notion of Local Agentic AI changes this. If it's on my computer, even if I don't fully understand what it's doing, I can build on it. I can control it and treat it as an extension of myself -- as humans do with all tools.

I'm a modest coder, and even basic AI has expanded my abilities there, just by helping me find algorithms I wouldn't have known how to look for before. I have run local LLMs, but I've not tried these agentic LLMs. I worry I was unimpressed too quickly and gave up on a potentially useful tool. If I can tell the local agent to make a rough version of a function that does XXXX, I can get more done. If I can tell it to write a simple script for a table I'd normally build by hand, check the script myself, then link that script to a command for a task I wouldn't normally trust the AI with, then the AI can do a larger chunk of my work. The more scripts I make, the more the AI can do. The more scripts I download from open-source communities, the more the AI can do. I don't have to trust the AI if all it's doing is moving information around and triggering scripts; I just have to check the scripts. If we start adding in robotics... yeah, I can see the hype.
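The "check the scripts, not the model" idea above can be sketched as a tiny dispatcher: the agent is only allowed to trigger scripts I've vetted by name, and everything else is refused. The registry contents here are placeholders ("say_hello" just runs `echo` so the sketch stays self-contained):

```python
# Sketch: the model may only pick a name from a vetted registry.
# The trust lives in the scripts, not in the model.

import subprocess

VETTED = {
    # stands in for a real script I wrote and checked by hand
    "say_hello": ["echo", "hello"],
}

def dispatch(request: str) -> str:
    """Run a vetted script by name; refuse anything not in the registry."""
    if request not in VETTED:
        return f"refused: {request!r} is not a vetted script"
    proc = subprocess.run(VETTED[request], capture_output=True, text=True)
    return proc.stdout
```

With something like this in the middle, the model's job shrinks to moving information around and choosing which checked script to run, which is exactly the part I'm willing to delegate.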

Of course, the counter-argument is that we've had IFTTT triggers and pipelines for decades. So maybe this isn't fundamentally new, but is it still an impetus to download more tools and build more pipelines? Will I fall behind if I don't figure out how to use this efficiently and effectively (FOMO)? Does anyone here have experience with agentic LLMs (especially local)? Also, what's the best Lemmy community for learning more about this sort of thing, and maybe also for hooking it up to basic robots?

[–] Lojcs@piefed.social 5 points 6 days ago

To me, agentic AI seems to be a futile attempt at making LLMs useful in a work context. The idea of virtual workers who will accomplish tasks and lift their own weight seems appealing until you realize that even hiring actual human workers doesn't increase throughput until they get their bearings. Tools that consistently and accurately do repetitive things are more valuable to an individual than an open-ended tool with the potential to solve it all in one go, imo.

I find it hard to believe that LLMs trying to cover for their weaknesses with increasingly token-intensive methods like thinking or planning will stay economically viable after the "capture the market" phase of the AI industry. It is remarkable that such methods work at all. I can't imagine there's nearly enough training data about non-final work, thought processes, or the planning that went into producing something, not to mention that people might not accurately describe how they reached their solution even if they try to. And even if they manage to print those thoughts into their context, LLMs don't produce words through a thought process, so it's dubious how much benefit they can ultimately obtain.

I think once the ai craze is over people might make tools that use machine learning to automate tasks but I don't think the repackaged chatbots are it.

I tried agentic coding with a bunch of LLMs from ollama a couple of weeks ago. Most couldn't manage to consistently find the correct file; glm4.7 got pretty far but lost context and produced some irrelevant code.