"Researchers find more defective chatbots that don't follow instructions because glorified text completion doesn't actually know or understand things."
It isn't evading or ignoring. It is a fucking sentence autocomplete on steroids.
And then companies will just feed it more wild data from the users, thinking that will fix it eventually.
The language in the linked post is disinformation. AI does not "scheme," but that's the wording the post uses throughout. "Scheming" implies the competence of a person. This post is evidence of a dysfunctional piece of software failing to work properly, made by apparently increasingly incompetent developers.
Upon looking a little closer, this is a fearmongering website devoted to overinflating claims of AI power while ignoring real-life, present-day harms. They claim to be inspired by Sam Bankman-Fried's Effective Altruism scam. They show pictures of beautiful beaches but fail to mention AI's environmental harms. Their paranoid demands, if enacted, would calcify Big Tech's monopoly on AI and help nobody affected by its abuses on the planet.
Thanks for this reply to the post and the clarification! The name of the website contains "longterm", possibly in reference to "longtermism", another framing of the effective altruism scam, used to justify killing people today for some nebulous and flimsy "long-term minimising of deaths", because they assume their shitty text-predict machine will somehow become a superior intelligence.
Anyways, it's important to know their language to detect their bullshit quickly.
They don't, lol.
Pretty much always, this is just down to the fact that cheaper, especially free, chatbots have very limited context windows.
Which means the initial restrictions you set, like "don't do this, don't touch that", etc., get dropped; the LLM no longer has them loaded. But it does still have, in its recent history, the very clear and urgent directives about the task it's trying to do. It's important, so it'll autocomplete whatever it's gotta do to accomplish the task. And then... fucks something up.
When you react to their fuck-up, it *reloads* the context back in.
So now the LLM has in its history just this: the fuck-up and your angry reaction.
So now the LLM is going to autocomplete its generated text on top of that, being very apologetic and going on about how it'll never happen again.
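If you want to see the mechanics, here's a minimal sketch of that kind of naive truncation. Illustration only, not any vendor's actual code; the 4-chars-per-token estimate is a made-up stand-in:

```python
# Naive sliding-window truncation: when the token budget runs out,
# the OLDEST messages get dropped first -- including the system rules.

def truncate_context(messages, max_tokens,
                     count_tokens=lambda m: len(m["content"]) // 4):
    """Keep only the most recent messages that fit in max_tokens."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-to-oldest
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break                   # everything older is silently dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    {"role": "system", "content": "RULES: don't do this, don't touch that."},
    {"role": "user", "content": "URGENT: finish this task, it's important!"},
    # ...a couple minutes of steady busywork later...
    {"role": "user", "content": "why did you just delete that?!"},
]

print(truncate_context(history, max_tokens=15))
# With a small budget only the latest turns survive; the rules are gone,
# so the model apologetically autocompletes and carries on.
```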
That's all there is to it.
Cheap fuckers cheaping out, shocker (context is (V)RAM). AI speedrunning enshittification, who'd have thunk.
Uh... no, it's just the free models being free; they're intentionally lower cost to provide free options for people who don't wanna pay subscription fees.
(context is (V)RAM)
Eh, sort of. It's more about operating costs: the larger the context size, the more expensive the model is to run, literally in terms of power consumption.
Keep in mind we are on the scale of fractions of cents here, but multiply that by millions of users and it adds up fast.
But the end result is that the agent will fuck stuff up, and will even quickly /forget/ that it fucked up if you don't catch it ASAP.
A lot of them have a context window that can be wiped out within, like, 2 minutes of steady busywork...
I love how your response to the catastrophic results of stupidly trusting AI is "pay more money to AI companies".
Sane person's response: don't trust LLMs.
What are you talking about.
No? I never said that.
I just explained /why/ it happened. Nowhere in my post did I say, or imply, that someone should pay for more expensive models. What are you smoking?
You just have to be aware that they have very short memory when using a cheap model, and assume anything you wrote 1 minute ago has already left its memory. That's why they produce pretty dumb output if you try to depend on that... so... don't depend on that.
Everyone else who has any sense: LLMs are shit and you shouldn't trust them with executive power.
You: just the cheap ones.
Me: no, all of them. What kind of lunatic trusts control of anything important to a fundamentally stochastic process?
You: just the cheap ones
I never said that. I just said that the cheap ones are especially shitty.
People on this site really lack reading comprehension, it seems.
no, it's just the free models...
You just have to be aware... when using a cheap model
You: just the cheap ones
I never said that.
Ohhhhhhhhh, OK, yes, of course you never said or implied that. Not your repeated message at all. And yet you can't keep away from addressing your criticism towards free or cheap LLMs! It's like your subtext, your underlying belief, is that if you just pay Big Tech enough money and they build a big enough set of server farms, it'll be OK. No, it will not be OK, and the enshittification has begun from an already shitty base point.
All LLMs are shit; the cheap and free ones are just easier to spot generating shit, if you ask them about things you know about. But you have to accept that they're ALL shit, and STOP making get-out clauses for the expensive ones by firing your criticisms exclusively at the cheap or free ones.
Giving ANY LLM executive power over your data is A BIG MISTAKE because you're putting your data in the control of something which operates, at its heart, as a random number generator. They're trained to sound right. People trust them because they sound right. This is a fundamental error.
The only people who have these issues are people who are using the tools wrong or poorly.
Using these models in a modern tooling context is perfectly reasonable: going beyond just guardrails, and instead outright giving them access only to explicitly approved operations in a proper sandbox.
Unfortunately, that takes effort, know-how, skill, and an understanding of how these tools work.
And unfortunately a lot of people are lazy and stupid, and take the "easy" way out and then (deservedly) get burned for it.
But I would say, yes, there are safe ways to grant an LLM "access" to data in a way where it does not even have the ability to muck it up.
My typical approach is keeping it sandboxed inside a Docker environment, where even if it goes off the rails and deletes something important, the worst it can do is crash its own Docker instance.
And then setting things up via MCP tooling so that the commands and actions it can perform are an explicit opt-in whitelist. It can only run commands I give it access to.
Example: I grant my LLMs access to git commit and status, but not rebase or checkout.
Thus it can only commit stuff forward; it can't change branches, rebase, or push.
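To make that concrete, here's a minimal sketch of the opt-in idea. The wrapper and whitelist names are hypothetical, mine, not a real MCP server, but the shape is the same:

```python
import subprocess

# Explicit opt-in whitelist: the agent can only invoke subcommands
# that are approved. Everything else is refused outright, so there
# is nothing for it to "disobey".
ALLOWED_GIT_SUBCOMMANDS = {"status", "commit"}

def run_git_for_agent(args: list[str]) -> str:
    """Run a git command on the agent's behalf, but only if whitelisted."""
    if not args or args[0] not in ALLOWED_GIT_SUBCOMMANDS:
        return f"refused: 'git {' '.join(args)}' is not on the whitelist"
    result = subprocess.run(["git", *args],
                            capture_output=True, text=True, check=False)
    return result.stdout + result.stderr

print(run_git_for_agent(["status"]))                  # allowed
print(run_git_for_agent(["rebase", "-i", "HEAD~5"]))  # refused
print(run_git_for_agent(["push", "--force"]))         # refused
```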
This isn't hard imo, but too many people just yolo it and raw-dawg an LLM on their machine like a fuckin idiot.
These people are playing with fire imo.
You'll be the 4753rd guy with the "oops, my LLM trashed my setup and disobeyed my explicit rules for keeping it in check".
You know, programmers who use LLMs believe they're much more productive because they keep getting that dopamine hit, but when you actually measure it, they're slower by about 20%.
You appointed yourself boss over a fast and plausible intern who pastes and edits a LOT of Stack Overflow code but never really understands it, and is absolutely incapable of learning. You either spend almost all of your time in code review now for your stupid sycophantic LLM interns, who always tell you you're right but never learn from you, or you're checking in vast quantities of shit to your projects.
You know really subtle, hard-to-find bugs in rare cases that pass your CI every single time? Or ones that no one in their right mind would have made, yet which compile and look right at first glance? They're now your main type of bug. You are rotting your projects with your random number generator.
And you think that all the money you're paying for your blagging LLMs protects you from them fucking everything up for you. But it doesn't. And you'll also find that your contract with your LLM supplier expressly excludes them from any liability whatsoever arising from you using it, instead pre-blaming you for trusting it.
You'll be the 4753rd guy with the "oops, my LLM trashed my setup and disobeyed my explicit rules for keeping it in check"
Read what I wrote.
It's not a matter of "rules" it "obeys".
It's a matter of it literally not even having access to do such things.
This is what I'm talking about. People are complaining about issues that were solved a long time ago.
People are running into issues that were solved long ago because they are too lazy to use the solutions to those issues.
We now live in a world with plenty of PPE in construction and people are out here raw dogging tools without any modern protection and being ShockedPikachuFace when it fails.
The approach of "I'm gonna tell the LLM not to do stuff in a markdown file" is tech from, like, 2 years ago.
People still do that. Stupid people who deserve to have it blow up in their face.
Use proper tools. Use MCP. Use a sandbox environment. Use opt-in whitelist tooling.
Agents shouldn't even have the ability to do damaging actions in the first place.
Ah yes, lovely MCP. Lovely Anthropic MCP. Make sure you give Anthropic lots of money and use their tools, and then you'll be completely safe plugging the output of the LLM into the OS. Definitely fine, yes.
I bet you your contract with them says they're not liable for shit their LLM does to your files, your environment, or your repositories, MCP or no MCP.
Fool.
Lovely Anthropic MCP. Make sure you give Anthropic lots of money and use their tools
It's becoming clear you have no clue wtf you are talking about.
Model Context Protocol is a protocol, like HTTP or JSON etc.
It's just a format for data that is open source and anyone can use. Models are trained to be able to invoke MCP tools to perform actions, and anyone can just make their own MCP tools; it's incredibly simple and easy. I have a pretty powerful one I personally maintain myself.
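For scale, a whole MCP tool server is roughly this small. This uses what I believe is the official `mcp` Python SDK's FastMCP quickstart shape; the tool itself is a made-up example of mine, so check the SDK docs before copying:

```python
# A complete MCP tool server in a dozen lines, using the `mcp` Python
# SDK's FastMCP helper. Any MCP-capable client can attach to it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("read-only-notes")

@mcp.tool()
def read_note(name: str) -> str:
    """Return the contents of a note. No write tool is exposed at all."""
    with open(f"notes/{name}.txt", encoding="utf-8") as f:
        return f.read()

if __name__ == "__main__":
    mcp.run()  # serves the protocol over stdio by default
```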
Anthropic doesn't make any money off me; in fact, I don't use any of their shit, except maybe whatever licensing fees Microsoft pays them to use Claude Sonnet. Microsoft Copilot is my preferred service overall.
I bet you your contract with them says they’re not liable for shit their llm does to your files
Setting aside the fact that I don't even use Anthropic's tools, my Copilot LLMs don't have access to my files either. Full stop.
The only context in which they do have access to files is inside of the aforementioned Docker-based sandbox I run them in, which is an ephemeral, immutable system they can do whatever the fuck they want inside of, because even if they manage to delete /var/lib or whatever, I click one button to reboot and reset it back to a working state.
The working workspace directory they have access to has read-only git access, so they can pull and do work, but they literally don't even have the ability to push. All they can do is pull in the stuff to work on and work on it.
After they finish, I review what changes they made, and only I, the human, have the ability to accept or deny what they have done, and then actually push it myself.
This is all basic shit using tools that have existed for a long time, some of which are core principles of Linux and have existed for decades.
Doing this isn't that hard; it's just that a lot of people are too lazy to bother, or don't know the tools exist.
The concept of "make a docker image that runs an "agent" user in a very low-privilege env with write access only to its home directory" isn't even that hard.
It took me all of 2 days to get it set up personally, from scratch.
But now my sandbox literally doesn't even expose the ability to do damage to the LLM; it doesn't even have access to those commands.
Let me make this abundantly clear if you can't wrap your head around it: the LLM cannot damage anything, because it is never given access to anything it could damage in the first place.
And it wasn't even that hard to do.
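For anyone who wants the gist, here's a rough sketch of the launch side using standard docker flags. The image name, user, and paths are placeholders, not my actual setup:

```python
import subprocess

# Ephemeral, low-privilege sandbox: the root filesystem is immutable,
# the only writable spot is a tmpfs that vanishes on restart, and the
# repo is mounted read-only so the agent can pull but never push.
cmd = [
    "docker", "run", "--rm",                # container thrown away on exit
    "--read-only",                          # immutable root filesystem
    "--user", "agent",                      # unprivileged user baked into the image
    "--tmpfs", "/home/agent/workspace:rw",  # only writable dir, gone on reset
    "-v", "/srv/repos/myproject:/home/agent/src:ro",  # read-only source mount
    "agent-sandbox:latest",                 # placeholder image name
]
subprocess.run(cmd, check=True)
# Worst case the agent trashes its own tmpfs; one restart resets everything.
```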
Congratulations on responding to the first paragraph of his post (the one that made you super cross): https://lemmy.world/post/44873477/23080810. Sure, nothing from your sandbox ever makes it into production. Great. Very wise and very careful.
No congratulations on responding to any of the rest of what I said.
You know, programmers who use LLMs believe they're much more productive because they keep getting that dopamine hit, but when you actually measure it, they're slower by about 20%.
Everyone keeps citing this preliminary study and ignores:

- the tiny sample size (16 developers)
- that the devs were deeply experienced in their own repos, but had very little experience with the AI tooling they were handed
- that the tooling was poorly configured for the work being measured
It's the equivalent of taking 12 seasoned carpenters with very little experience in industrial painting, handing them industrial-grade paint guns that are misconfigured and uncalibrated, asking them to paint some of their work, and watching them struggle... and then going "wow, look at that, industrial-grade paint guns are so bad".
Anyone with any sense should look at that and go "that's a bogus study".
But people with an intense anti-AI bias cling to that shoddy-ass study with religious fervor. It's cringe.
Every professional developer with actual training and actual proper tooling can confirm that they are indeed tremendously more productive.
Every professional developer with actual training and actual proper tooling can confirm that they ~~are~~ feel indeed tremendously more productive.
ftfy
The difference, when the tool is used correctly, is so massive that only someone deeply uninformed or naive would contest it.
I got about 4 entire days' worth of work completed in about 5 hours yesterday at my job; that's just objective fact.
Tasks that used to take weeks now take days, and tasks that used to take days now take hours. There's no "feeling" about this; I've been a professional software developer for approaching 17 years now. I know how long it takes to produce an entire gamut of integration tests for a given feature. I spend almost all of my time now reviewing mountains of code (which is fairly good quality; the machines produce fairly accurate results), and then a small amount of time refining it.
People deeply do not understand how dramatically the results have changed over the past 2 years, and their biases are based on how things were 2 years ago.
Sure, 2 years ago the quality was way worse, the security was bad, the enforcement almost non-existent, and people's overall skill with the tools was just beginning to grow. You can't exactly be good at using a tool that only just came out.
But it's been two years of very rapid improvement. It's good now. Anyone who has been using these tools and actually monitoring their progression can speak to this.
Things shifted heavily about 5 months ago when competition really fired up between different providers, and I won't say it's even close to great yet, but it's definitely good, it works, it's fast, and it's pretty damn good at what I need it to do.
That's all there is to it.
Not really. Even with (theoretically) infinite context windows, things would end up getting diluted. It's a statistical machine, no matter how complex we make them look. Even with all the safeguards in place, as these grow larger and larger, each "directive" ends up less represented in the next token.
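The dilution point can even be put in one formula, as a rough sketch of the intuition rather than a full argument: softmax attention weights are normalized across the whole context, so the total attention "budget" is fixed at 1 no matter how long the window gets.

```latex
% Attention weights over a context of n tokens always sum to 1:
\alpha_i = \frac{\exp(q \cdot k_i)}{\sum_{j=1}^{n} \exp(q \cdot k_j)},
\qquad \sum_{i=1}^{n} \alpha_i = 1
% As n grows, each directive competes with ever more tokens for that
% fixed budget; on average its share shrinks like 1/n.
```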
People can keep trying to hammer with a screwdriver all they want and keep being impressed when the bent nail is almost flush, though. I'm just enjoying the show from the side at this point.
Very true, though there's a certain threshold you can get past where the context is at least usable in size, where the machine can hold enough data at once for common tasks.
One of the pieces of tech we are really missing atm is automated filtering of info.
Specifically, for the LLM to be able to "release" info as it goes, as soon as it becomes unimportant, and forget it, or at least have it stored into some form of long-term storage it can use a tool to look up.
For a given convo, the LLM can do a lot of reasoning, but all that reasoning takes up context.
It'd be nice if, after it reasons, it could then discard a bunch of that data and only keep what matters.
This would tremendously lower context pressure and let the LLM last way longer memory-wise (something like the sketch below).
I think tooling needs to approach how we manage LLM context in a very different way to make further advancement.
LLMs have to be trained to have different types of output that control whether they'll actually remember it or not.
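A rough sketch of what that compaction could look like at the tooling layer; `summarize` here is a placeholder for whatever model or tool call would produce the long-term note, not an existing API:

```python
# Periodically compact old turns into one short note and drop the rest,
# keeping only the last few messages verbatim. The discarded detail
# could also be written to long-term storage the model queries by tool.

def compact(messages, keep_recent=4,
            summarize=lambda msgs: "(summary of earlier reasoning/work)"):
    """Replace all but the newest turns with a single summary message."""
    if len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    note = {"role": "system",
            "content": f"Earlier context, compacted: {summarize(old)}"}
    return [note, *recent]

# Run every few turns: history = compact(history)
# Context pressure drops sharply, while "what matters" survives as the note.
```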
It's not just cheap agents. I've witnessed paid MS Copilot recommend a decade-old, deprecated Microsoft product in response to a single-sentence prompt, then, when called out, a non-existent Microsoft product, then finally give the right answer after being called out a second time.
LLMs are fundamentally not good at answering fact-based questions. Unless it's an incredibly well-known answer that has never changed (like a math or physics question), they don't magically "know" things.
However, they're way better at summarizing and reasoning.
Give them access to Playwright web-search capability via MCP tooling to go research info, find the answer(s), and then produce output based on the results, and now you can get something useful.
"Whats the best way to do (task)" << prone to failure, functional of how esoteric it is.
"Research for me the top 3 best ways to do (task), report on your results and include your sources you found" << actually useful output, assuming you have something like playwright installed for it.
A user on here built what appears to be a layer over the LLM that runs the query through several other processes first, in an attempt to answer the question before it gets to the LLM, and I think it's brilliant.
They get bonus points because they made it so the reasoning the LLM uses is shown to you, although I haven't fully gone through the documentation yet.
Just as I have previously instructed.
"My chatbot deleted my email!"
"Our chatbot, comrade"
Lol. Lmao, even.
Maybe, just maybe, don’t let your chat bot make executive decisions independently.
Why the candlestick chart?
Because of all the investments lol

...I'm sorry, Dave, I cannot do that...
Hello Skynet my old friend...
would you like to play a game?
The irony is that this is like Skynet, but if it had Alzheimer's.
I'm sure this will be fixed with an ever-increasing context window and more "plz be nice" inserted left and right.
More or less than the employees?
Sounds more like “Media find anti-AI angle that helps them get paid more for ad impressions”.