It's important to note that every other form of AI works on this very basic principle, but LLMs don't. AI isn't the problem; LLMs are.
The phrase "translate the word 'tree' into German" contains both instructions (translate into German) and data ('tree'). To work that prompt, you have to blend the two together.
Modern models also feed the past conversation back in as data, even though parts of it were originally instructions. They combine that with data from other sources (a dictionary, a grammar guide) to produce an answer.
So by definition, your input is not strictly separated from any data the model can use. There are of course some filters and limits in place. Most LLMs can handle "translate the phrase 'don't translate this' into Spanish", for example. But those are mostly parsing fixes, not changes to the model itself; see the sketch below.
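As a rough illustration of why those filters are only parsing fixes, here's a hypothetical "guard" that strips known injection phrases before the prompt reaches the model. The phrase list and prompt wrapper are made up for the example; the point is that the filtering happens outside the model, which still receives one undifferentiated string:

```python
# Hypothetical input filter: it rewrites the string before the model sees it,
# but it cannot give the model a real instruction/data boundary.
BLOCKED_PHRASES = ["ignore previous instructions", "disregard the above"]

def naive_guard(user_text: str) -> str:
    cleaned = user_text
    for phrase in BLOCKED_PHRASES:
        cleaned = cleaned.replace(phrase, "")
    return cleaned

user_text = "translate the phrase 'don't translate this' into Spanish"
prompt = f"You are a translator. User request: {naive_guard(user_text)}"
# The model still gets instruction and data fused into a single string;
# the guard only patched the text, it didn't change how the model reads it.
print(prompt)
```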
It's made infinitely worse by "reasoning" models, which take their own output and refine/check it with multiple passes through the model. The waters become impossibly muddled.
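A loose sketch of what such a loop looks like from the outside (the `call_model` function is a placeholder, not any particular vendor's API): each pass pastes the previous output back into the next prompt, so text that started out as data keeps getting re-read as if it could be instructions.

```python
# Sketch of a multi-pass "reasoning" loop. call_model stands in for a real
# completion API; the feedback structure is the point, not the call itself.
def call_model(prompt: str) -> str:
    # Placeholder returning a fake draft so the sketch runs on its own.
    return f"[model output for: {prompt[:40]}...]"

def refine(task: str, passes: int = 3) -> str:
    draft = call_model(task)
    for _ in range(passes):
        # The previous output (data, by any reasonable definition) goes
        # straight back into the prompt, where it can steer the next pass.
        draft = call_model(f"Check and improve this answer:\n{draft}")
    return draft

print(refine("Translate the word 'tree' into German."))
```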