LLMs have now had a decently long period in which to prove their worth, which has turned out to be very limited in scope and depth, at least compared to the promises made beforehand.
For example, it was predicted that they would be able to write and inject code into themselves and generate their own training data, with minimal or no human intervention. This has clearly proven impossible.
As a tool for letting people interact with software in natural language, they are proving quite effective.
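To make that concrete, here is a minimal, hypothetical sketch of the pattern in Python: the model's only job is to translate free-form text into a structured command, and ordinary deterministic code does the actual work. The `llm_to_command` stub and the command schema are made up for illustration, not taken from any real system.

```python
# A minimal sketch of the "natural language in front of software" pattern:
# an LLM translates a user's request into a structured command, and
# deterministic code validates and executes it.
import json

def llm_to_command(user_text: str) -> str:
    # In a real system this would be an API call to a model prompted to
    # reply with JSON only. Stubbed here so the sketch is runnable.
    return json.dumps({"action": "set_volume", "level": 40})

def execute(command_json: str) -> None:
    cmd = json.loads(command_json)
    # The software side stays deterministic: validate, then dispatch.
    if cmd.get("action") == "set_volume":
        level = max(0, min(100, int(cmd["level"])))
        print(f"Setting volume to {level}%")
    else:
        raise ValueError(f"Unknown action: {cmd.get('action')}")

execute(llm_to_command("turn it down a bit, to about 40 percent"))
```

The reason this use holds up, where fact-retrieval doesn't, is that the model's output is constrained and checked by conventional code before anything actually happens.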
As a tool for the accurate dissemination of factual information, though, they aren't reliable at all, and they can't be made reliable: LLMs are incapable of reliability at a fundamental level. Language is a subjective human invention we use to describe objective reality, and that reality is only known through perception. An LLM doesn't perceive anything; it isn't alive. So fundamentally, LLMs can't know whether they are being factual, because that requires something more than language.
People who peddle AI BS either don't know about, or wish to remain ignorant of, the fundamental limitations of language.