Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
A lot of ideas need huge investments before they become profitable. Imagine a new pharmaceutical discovered by a researcher. They can produce the substance in their lab in very small quantities, but not enough to sell. Since they don't have much money themselves, they need investment to buy a bigger lab, automate manufacturing, etc., in order to scale up the process. Then, after some time, the product slowly becomes profitable, and hopefully big time for the investors.
Now with AI the thought process is similar. You need huge data centers and gigantic computing facilities to train models with many billions of parameters just to get a model that is even slightly useful. Huge investments have been made in different AI companies, because this technology seems groundbreaking and it is not yet clear who will win the race.
Now stocks are pumped up and everybody is waiting for the breakthrough: artificial general intelligence, or AGI. This concept is complete bullshit, but investors don't understand the technology; they are just greedy. If it were common knowledge that the cost of transformer models scales with n² in the context length, people would have thrown in the towel already. So what AI companies really need to do is shove AI down everyone's throat. They need to sell their models to every little business with the promise of dramatically increasing productivity. Companies believe the bullshitting and spend a lot of money on AI, although Harvard Business Review found that workslop™ does not, in fact, increase productivity. In line with the sunk cost fallacy, AI companies don't give up but step up their bullshitting game instead. They present "agentic AI", and as a data scientist, just writing that term down makes me cringe really hard.
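To make the n² point concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from the thread, assuming standard dense self-attention; the function name is made up): every one of the n tokens in the context attends to all n tokens, so the attention score matrix alone has n × n entries per head, per layer.

```python
def attention_score_entries(n: int) -> int:
    """Entries in the n x n attention score matrix for context length n
    (standard dense self-attention, per head, per layer)."""
    return n * n

# Doubling the context length quadruples this cost:
for n in (1_000, 2_000, 4_000):
    print(f"context {n:>5,} tokens -> {attention_score_entries(n):>12,} score entries")
```

That quadratic blow-up is why simply feeding longer contexts into the same architecture gets expensive so quickly.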
Since so much money has been pumped into this market, stocks are overvalued through the roof, the GPU and storage markets are broken, and there is no way back. We don't know yet what the tech bros will invent to save their asses, but it is not at all certain this bubble will ever burst. So you'd better not bet your ass on falling stocks.
The two are nothing alike. The pharmaceutical has a predetermined market and a known effect. A researcher who finds a treatment for Somethingitis knows in advance that people with the disease will want it. They will want it because the medication has a proven effect. Nobody has to hand out the cure below cost to get people enthusiastic about it. (In fact, without proper controls, the exact opposite happens. See the USA.)
LLMs are pretty much the opposite. They're a solution looking for a use, and are only very marginally successful in that. Nobody can say "this product will cause that effect", pretty much by definition.
That's why they're giving their product away and massively subsidizing its use. If they stopped, nobody would use it. And every month, the models get more and more expensive even as the scale increases. Actual results are few and far between, except for very niche applications which won't recover the costs before the next millennium.
The best comparison I've seen is someone selling stale bread covered in gold leaf for ten bucks. Is there a market for it? Sure, decorative bread is on display with many bakers, and I'm sure you could sell some of it for croutons and such. But nobody is buying stale bread en masse. But if you sell stale bread for 5 cents, you bet your ass people will buy it. It might not be great, but come on, for 5 cents I'm willing to eat a lot of toast.
I agree with you, and that is basically what I tried to say. I just used the pharmaceutical example to explain the concepts of investment, expectation, and profit. I agree the two are in fact not the same, while investors think they are.
The concept isn't necessarily bullshit; the technology just isn't anywhere near there yet. Given our current level of understanding of human intelligence, it probably won't be there for a very long time, but that doesn't invalidate the concept as a future goal. Companies currently working on AI products just seem incapable of being honest about that.
What's bullshit is the claim that today's "AI" - LLMs - could one day advance to AGI. That's really not possible if you understand how LLMs work. Could there be truly intelligent technology one day? Maybe. But the AI industry isn't really moving towards that, despite what they claim.
AGI might use LLM tech in their process, but LLMs by themselves aren't going to become aware. What happened is LLM tech became a gold mine, some who were doing AGI research jumped on it instead, and others followed. There is certainly still AGI research going on somewhere, but it's buried by the race to... something. The biggest problem I see, outside of the need for profit guiding all this, is that what they are building has become so complex they don't really understand it fully, they just keep finding ways to tack on things to get to some higher level without knowing why it works (or why it will break).
And while LLMs aren't AGI, they still have the problem of misalignment, even without self-awareness. We've seen, early on, models using misdirection to reach a goal, and today's models are more sophisticated. Maybe it's not their own goal, but a misunderstood one that they'll say and do anything to reach.
Good thing we're not putting them in control of important things, or full access to systems, right? Right?
Research into AGI has always been the domain of universities, not companies trying to attract investment or turn a profit. It's still going on, but you'll only hear about it when there's a new development that some company tries to turn into profit.
Exactly. We both typed the same thought at basically the same time. It is the expectation that AGI is a logical consequence of LLMs that is driving this insane market.
People always try to frame AGI as the logical next expansion step of LLMs, but it is not. This is not a linear process, and transformer-based LLMs and the science-fiction-like goal of AGI just don't have much to do with each other.
I know. But that's very different from saying the whole concept is bullshit.
The whole concept assumes LLMs will reach some mythical enlightenment after feeding them exabytes of bullshit from the internet.
Classic case of garbage in, garbage out.
You are applying such an unusual definition to the word concept that I feel there's no point to this argument anymore.