this post was submitted on 29 Aug 2025
change my view
Large Language Models are not suitable for decision making roles. The majority of the work in software development involves making decisions.
Language translation is ALL decision making though?
https://www.paideiainstitute.org/the_creative_art_of_translation
https://www.catranslation.org/feature/6-great-introductions-to-the-art-of-translation/
https://wordswithoutborders.org/read/article/2004-07/how-to-read-a-translation/
Arguably, sure. I'd still assert that LLMs are a terrible choice for translating anything that matters, though, largely for that reason.
No, it is not.
What is it then?
Translation.
Is a cashier in a decision making role when they "decide" what buttons to press on the cash register, given an existing basket of products?
This is not how translation works. You can't reduce it to a simple table lookup that finds similar words, swaps them in, and calls it done.
That is a poor example to compare to.
No, it is how translation works. You didn't answer the question. Is the cashier "making decisions"? The analogy is apt.
No, tallying the price of items is a process of looking up each item's price in a table and retrieving it.
There is always exactly one possible, correct answer in that process, which makes it utterly unlike language translation. Honestly, it is alarming that you can't see the difference.
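To make the contrast concrete, here's a minimal sketch (with made-up prices and example sentences, not anyone's actual system): the checkout total is a pure lookup with one right answer, while a single sentence admits several defensible translations that someone has to choose between.

```python
# Hypothetical illustration only: deterministic lookup vs. open-ended translation.

PRICES = {"milk": 2.49, "bread": 1.99, "eggs": 3.29}  # one right answer per item

def checkout_total(basket):
    """Tally a basket: pure table lookup, no judgment involved."""
    return round(sum(PRICES[item] for item in basket), 2)

# Several renderings of the same French sentence, all defensible;
# picking one is a judgment call, not a retrieval.
TRANSLATIONS_OF = {
    "Il pleut des cordes.": [
        "It's raining cats and dogs.",  # idiomatic equivalent
        "It's pouring.",                # natural but looser
        "It is raining ropes.",         # literal, usually wrong in context
    ],
}
```

Run `checkout_total(["milk", "bread", "eggs"])` twice and you get the same number both times; ask two translators for "Il pleut des cordes." and you may well get two different, equally valid sentences.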
Having only one possible answer does not actually make them different, but it does get my point across very nicely. The cashier is literally translating a context into a sequence/signal, which is identical to the task of language translation.
You just restated your same argument with the same fatal flaw I directly pointed out previously.
Because it is correct, yes. I figured maybe you would have understood if I "translated" it into a different wording? 😉
Yes, maybe I would. That is exactly my point about the difference between looking up values in a table and translation.
For a third time: Is the cashier “making decisions”?
It's clear that you know the answer is "no", and it is very telling that you refuse to state this and continue to dodge. Hmm, I wonder why that might be...
Bad news for this research team in that case, I wonder if they’ve seen your whitepaper yet?
https://hbr.org/2024/09/ai-can-mostly-outperform-human-ceos
Bad comparison, CEOs are also not suitable for decision making roles
Hah, fair enough
And yet an LLM (or what people call AI) can't run a vending machine business.
https://www.anthropic.com/research/project-vend-1
“Can’t” is a strong word; it ran a business, some might say better than some CEOs, heh