For the young'uns: the people posting this stuff are the same people who posted all the same shit about crypto when it was $12,000. Be careful who you listen to just because it's in a meme.
In no case should AI be trusted to solve such problems, since it can also make mistakes.
There's a lot of ink spilled on 'AI safety', but I think the most basic regulation that could be implemented is that no model is allowed to output the word "I"; if it does, the model designer owes their local government the equivalent of the median annual income for each violation. There is no 'I' in an LLM.
It's this kind of kneejerk, reactionary opinion that I think will ultimately let the worst of the worst AI companies win.
Whether an LLM says "I" or not literally does not matter at all. It's not relevant to any of the actual problems with LLMs/generative AI.
It doesn't even approach discussing/satirizing a relevant issue with them.
It's basically satire of a strawman who thinks LLMs are closer to being people than anyone, even the most AI-bro AI bro, actually believes they are.
No, it's pretty much the opposite. As it stands, one of the biggest problems with 'AI' is people perceiving it as an entity saying something that has meaning. Phrasing LLM output as 'I think...' or 'I am...' makes it easier for people to assign meaning to the semi-random outputs, because it suggests there is an individual whose thoughts are being verbalized. That framing is part of the trick the AI bros are pulling. Making it harder for the outputs to keep up the pretense of sentience would, I suspect, make them less harmful to people who engage with them in a naive manner.