Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, toxicity and dog-whistling are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed, please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?'-type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
More capable than the crowd here lets on. My take is this: unchecked capitalism is a danger to mankind. The pervasiveness of LLMs right now is just a symptom of that. The rich are the problem, not the AI.
It is a tool; a very good one along many axes. I think people who believe it isn't good for writing code are misinformed or intentionally disingenuous. It is extremely good at that, but it is just a tool, not a replacement.
But it is the applications in pure maths, virology, protein folding, etc. where it gets really interesting.
Water consumption, power consumption, and profit motives aside, they are fascinating tools.
That said, If Anyone Builds It, Everyone Dies is a fascinating take on how this could all go wrong.
In any case, I can’t understand the people that say stuff like, “It is just autocomplete on steroids,” or “it is just a probabilistic prediction tool.” Okay, but like… that’s all we are too.
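For what it's worth, "probabilistic prediction" at its most reductive looks something like this toy bigram sampler. This is a deliberately simplified sketch with a made-up corpus; real LLMs condition on long contexts with billions of learned weights rather than raw counts:

```python
import random

# Toy bigram model: record which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    return random.choice(follows[word])

print(predict_next("the"))  # "cat" or "mat", weighted by observed frequency
```

The whole model is just frequencies turned into a sampling distribution, which is the sense in which "autocomplete on steroids" is technically true but underwhelming as a criticism.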
Summary: interesting tools being used for profit at the expense of economies, the environment, and creative fields.
Whoever told you that was lying to you or misinformed. Neuroscientists do not look at the brain as a probabilistic prediction tool. You are not a database with weights, you’re a human being with experiences, emotions, and thoughts.
We are very nearly that. The brain functions as a massive, self-organizing neural network where cognitive architecture is determined by the strength of connections (the biological equivalent of adjustable computational weights) that modulate signal transmission via the flow of ions.
Every decision made or breath taken is the outcome of how ions flow through this network.
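The "adjustable weights" analogy above can be sketched with a single artificial neuron. This is a caricature for illustration, not a claim of biological fidelity, and the numbers are invented:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs squashed to a firing probability (sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Strengthening connections (raising weights) changes the output for the
# same inputs, loosely analogous to synaptic strength modulating signaling.
weak = neuron([1.0, 1.0], [0.1, 0.1], 0.0)
strong = neuron([1.0, 1.0], [2.0, 2.0], 0.0)
print(weak, strong)
```

The point of the analogy is only that behavior emerges from connection strengths, not that biological neurons literally compute a sigmoid.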
Let me know when you find a neurologist that says brains are just like LLMs.
That isn’t likely to happen. Fortunately, I haven’t said that either. But a pithy comeback won’t change the accuracy of describing the brain as a self-assembling probabilistic network. All your memories, experiences, and emotions are part of that.
Rewording a description of what an LLM is and saying brains are just like that is still saying that brains work like LLMs, even if you didn’t use those exact words. The acknowledgment that neurologists do not find evidence to support that is pretty much all that is necessary to tear that down, no matter how many times you repeat it.
If I say “A screwdriver is a tool,” and “The brain is a tool,” am I then saying “The brain is just like a screwdriver”? Or is it possible that applying second-order logic to an admittedly and clearly reductive statement I made isn’t productive?
And which part of the brain description is inaccurate, specifically?
Pithy hot takes are 90% of AI criticism.
They literally can't do pure math. Like everyone knows how bad they are at even simple math. We have had tools that do pure math for thousands of years, and we call them calculators. A hotbox for an imaginative mathematician? Sure, but any conclusions drawn get drawn elsewhere with more traditional tools.
I hear this criticism of LLMs all the time and I just don't get it. They're language models, they take language inputs and produce language outputs. They aren't designed to do math. It's like complaining that a reciprocating saw can't do math.
There is active research right now into their use in pure maths. I don’t think it is primarily about direct solutions, but about program synthesis for formal logic. Keep in mind this isn’t just LLMs, but also graph networks and other non-transformer networks.
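Concretely, "program synthesis for formal logic" usually means the model proposes a candidate proof and a proof assistant mechanically checks it, so an invalid proof is simply rejected rather than trusted. A trivial Lean 4 statement of the kind such a pipeline would verify (illustrative only; the theorem name is arbitrary):

```lean
-- A machine-checkable claim: the proof assistant accepts this proof term
-- only if it is actually valid, regardless of who (or what) wrote it.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is why "LLMs are bad at arithmetic" doesn't settle the maths question: the checker supplies the rigor, and the model only has to supply candidates.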