this post was submitted on 06 Apr 2026
717 points (98.8% liked)
People Twitter
I'm going to assume you're saying this in good faith. The problem with handing thinking over to a computer isn't just that computers are worse thinkers; it's also that these systems are being conditioned to reflect the views of the organizations that created them. That creates a concentration-of-power problem: it's another avenue for influencing how people think, and a pretty strong one if people are literally handing over their thinking. It's likely to get worse over time, because selling this influence the way much of the internet sells ad space will probably be quite profitable, and we're probably not seeing much of it yet because AI companies are still trying to get their LLMs integrated into society so that people become dependent on them.
Targeted LLM lobotomization turns out to be very difficult. You can still get Grok to shit on Musk.
But you need to spend effort to do that, and if you don't already know the actual truth, and don't realize Grok isn't providing it, how would you even know to?
I haven't used Grok personally, but on Gemini it's not too hard to get it to shit on the oligarchs. I even got it to basically admit that killing Trump would be a net positive for society, without much effort.
I do agree in principle that LLMs work much better in cases where you can verify the output quickly but producing it yourself would be difficult, so basically NP problems.
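(Not from the thread, just an illustration of that "quick to verify, hard to produce" asymmetry, which is the defining property of NP. A minimal sketch using subset-sum; all names and numbers here are made up for the example.)

```python
from itertools import combinations

# Subset-sum: checking a proposed answer is a one-line sum,
# but finding one from scratch means searching up to 2^n subsets.

def verify(candidate: list[int], numbers: list[int], target: int) -> bool:
    """Fast check: is `candidate` a subset of `numbers` that sums to `target`?"""
    remaining = list(numbers)
    for x in candidate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(candidate) == target

def solve(numbers: list[int], target: int) -> list[int] | None:
    """Slow search: brute-force every subset until one hits the target."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

numbers = [3, 34, 4, 12, 5, 2]
answer = solve(numbers, 9)                 # exponential-time search
print(answer, verify(answer, numbers, 9))  # near-instant check -> True
```

The same shape applies to LLM output: if checking an answer is cheap (does the code compile, does the proof verify), you can use the model safely; if checking requires already knowing the truth, you can't.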
I'm in a weird position with LLMs because I've found them absolutely invaluable as a learning tool, but I also recognize how much damage they could do to society, especially in the hands of dumber people when it comes to propaganda.
And people aren't? Have you spoken with a Trump supporter recently? They are far more programmed than any modern AI engine. I'd take any modern AI programming them over whoever's currently doing it.
I do agree with you that this will probably be a problem in the future, but for the time being, for those people at least, I do think it's a net positive.
I did say that it's another avenue to influence how people think. Even if it were a small net positive right now, which I'd argue it's not, but I digress, it's only serving to strengthen people's dependence on, and trust in, systems that will overwhelmingly likely be used to control them in the future.