this post was submitted on 15 Mar 2026
25 points (80.5% liked)

Ask Lemmy


A Fediverse community for open-ended, thought provoking questions


all 29 comments
[–] Fedegenerate@fedinsfw.app 3 points 3 hours ago* (last edited 2 hours ago)

I think it'll end up like any industry with machine-made options. There will be a spectrum of products from 100% human made to majority machine made.

There will be a few bespoke artists doing interesting things for the wealthy and the passionate. But, for most of society, the mass produced stuff is fine.

Take clothes: how many of yours were hand made vs machine made? Cobblers are hand-making shoes the world over; were yours hand made? I have some hand-knitted wool stuff (because I'm passionate about wool), but my Levi's are machine made. Shoes, motorcycle gear...

Furniture. There are cabinet makers the world over doing beautiful pieces of work, but I got most of my stuff from IKEA. How about you?

It'll end up like any other industry with machine made options. The bubble will burst, don't get me wrong, but after the .com bubble burst we still had the internet.

One of the top posts on fuckai right now is a bottle of olive oil. Now, I'm not yucking their yum; I just have different things I want to do with my day than stare at someone's olive oil bottle. Not better, mind: I'm glad they have the free time and mental energy for it, and if pondering mass-produced labels is their jam, I support it. I just want to do different things, and I expect the world is going to want to do different things too.

[–] WormFood@lemmy.world 3 points 3 hours ago
  • machine learning models will continue to improve their output somewhat, but gains will be incremental and the intrinsic problems with ml-derived content (e.g. hallucinations, context window limitations, long-term coherency) will remain
  • open source models will catch up with commercial ones
  • the smaller ml companies (like openai and anthropic) will be absorbed, probably by Microsoft and Amazon
  • The increasing cost of hardware and energy will force companies to raise prices for ml subscriptions and eventually lock ml features behind paywalls
  • Computer parts will remain expensive for a long time
  • Programmers will collectively spend the next decade wrestling with the consequences of filling their codebases with millions of lines of ai generated code
  • Google images will never fully recover
[–] 87Six@lemmy.zip 1 points 3 hours ago

That we will all be forced to adopt it whether we like it or not, in scummy ways, and those that don't will be unrightfully seen as "boomers", when in reality they are the last people to genuinely do their work with love and care.

[–] ozymandias117@lemmy.world 13 points 18 hours ago (2 children)

LLMs are a dead end, and the massive amounts of money being wasted on them will make people too scared to invest in other forms of AI.

So we are currently at a local maximum that we won't overcome in 10 years. It will take much longer before we try a different approach to create "AGI," and the money wasted on LLMs will slow other forms of AI research, leaving us stagnating for >10 years.

[–] timbuck2themoon@sh.itjust.works 1 points 15 hours ago

I think that all depends on what else there is to invest in. In general, as terrible as AI is, it's carrying the stock market. Investors need something else to turn to before they'll divert away from AI.

[–] Modern_medicine_isnt@lemmy.world 1 points 16 hours ago (1 children)

I'm not convinced that investors would know the difference between a company trying to improve llms vs taking a new approach. So I don't think it will stifle investment in other forms of AI research.

I also don't think they are a dead end overall. They sure aren't likely to get to agi, but you don't need agi to be useful.

[–] ozymandias117@lemmy.world 2 points 15 hours ago (1 children)

You have to convince investors why your AI research won't hit a wall like LLMs are now - they've poisoned the term "AI"

They're a dead end, insofar as they do all they'll ever be able to; if you can find use for them at their current level, great, but it does not look likely they will be able to do more than they currently can

I dunno. Investors are still lined up to invest in AI startups from what I hear. But that isn't much evidence.

That said, individually, LLMs may have hit a wall. But there is plenty of room to optimize them, and lots of ways to combine them. Their uses are still in their infancy. Take Grafana: it doesn't support personal API keys, so I can't give the AI access to test and iterate on solutions yet. Lots of software is like that. The LLM doesn't need to change; the software we use needs to support it. First with access, then with guardrails like fine-grained access controls so we can trust that the AI can't do things we don't want it to. Then we can really experiment to find out what it can do.

And really, the answer to getting more out of AI is parallelism. So as they optimize it to make it less expensive, we will be able to use parallelism to get more out of it, without fundamentally changing it.
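The parallelism point above can be sketched in a few lines: if prompts are independent, you can fan them out concurrently and pay roughly the latency of one call instead of the sum of all of them. This is a toy sketch, with `call_model` as a stand-in for a real (hypothetical) LLM API call:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt):
    # Placeholder for a real LLM API request; here we just echo the prompt.
    return f"answer to: {prompt}"

def fan_out(prompts, workers=4):
    # Issue independent prompts concurrently. ThreadPoolExecutor.map
    # preserves input order, so results line up with prompts.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(call_model, prompts))

results = fan_out(["summarize doc A", "summarize doc B", "summarize doc C"])
```

With a network-bound API call in place of the stub, wall-clock time is bounded by the slowest single request rather than the total, which is the "more out of it without changing the model" idea.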

There is a lot of room to grow the uses of the current AIs while we wait for some totally new approach to come along and get us to AGI. We aren't ready for that now anyway. In 15 or 20 years, maybe we will be.

[–] Canopyflyer@lemmy.world 6 points 20 hours ago

The only AI companies that will exist in 10 years will be those started by a large company that has other unrelated profit streams. Such as Microsoft, Google, Amazon etc. All others will fail. Some will be bought by the big players if they develop a unique technology. Otherwise they all go broke.

If I had to guess, there will be only two major AI/LLM companies in existence. The nature of LLMs rules out small companies and organizations scaling one to profitability.

Micron comes back to the consumer market, but has to rebrand due to the ire of consumers for them being assholes. Same with Western Digital, although they have not "technically" left the consumer market.

The next 5 years will be spent by people trying to find SOMETHING for AI to do. Some very high-end uses in research or academics will be found. However, those will cost massive amounts of money and only be available to governments, large corporations, and academic institutions. Consumers will be left with creating images, music, and a few other parlor tricks, but there will be nothing of any true value offered. In the meantime, AI images and videos will be used to exacerbate societal/cultural issues across the globe, until the population becomes so jaded and cynical that this media loses efficacy. By that time enormous damage will have been done.

Consumers will also be left paying for the electricity, water, and other resources that the remaining data centers will consume.

I'm currently looking heavily into installing solar on my home, with a battery backup just because of these stupid data centers. It's just a matter of time that these things start causing issues on the grid.

[–] HetareKing@piefed.social 8 points 22 hours ago

Bubble will burst, many AI companies will go under, the ones that remain will have to price themselves out of reach of most people. Lack of investor confidence will trigger a third AI winter, which will affect even actual valuable uses of machine learning models and the further development of locally-run models. People who graduated college between 2023 and 202X will have a harder time getting a job. AGI will still be a far-off dream.

[–] Modern_medicine_isnt@lemmy.world 2 points 16 hours ago

Well, assuming some AGI breakthrough doesn't happen (which would, in my opinion, require a vastly different approach than LLMs), we will see more of this AI swarm type stuff. Essentially you end up with a bunch of specialized AIs, and then some AI coordinators. The AI that we talk to will just farm out the work to other AIs, including ones specialized in verifying the work that gets done.

Most people pre-AI did work that was, say, 60% implementation, 30% figuring out what needs to be done, and 10% verifying what was done. That will shift to 15% implementation, 50% requirements gathering, and 35% verification.
Obviously those numbers are just to show the shift, not intended to be an accurate representation of how our work is currently divided.

Overall, if you give ai a way to verify what it is doing, and let it iterate, it is far more useful than just telling it to do a thing or asking it a question.
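The verify-and-iterate idea above can be sketched as a small loop: generate an attempt, run it past a verifier (tests, a linter, an API the model can poke), and feed the verifier's feedback back into the next attempt. The stand-ins below are toys; a real `generate` would be an LLM call and a real `verify` would run actual checks:

```python
def iterate_until_verified(task, generate, verify, max_rounds=10):
    # Let the "model" retry with verifier feedback instead of
    # one-shot prompting; stop as soon as a check passes.
    feedback = None
    for _ in range(max_rounds):
        attempt = generate(task, feedback)
        ok, feedback = verify(attempt)
        if ok:
            return attempt
    return None  # give up after max_rounds

# Toy stand-ins: the "model" doubles its previous attempt based on
# feedback; the "verifier" is a simple threshold check.
def toy_generate(task, feedback):
    return 1 if feedback is None else feedback * 2

def toy_verify(attempt):
    return (attempt >= 10, attempt)

result = iterate_until_verified("reach 10", toy_generate, toy_verify)
```

The loop structure, not the toy math, is the point: once software exposes something an agent can check itself against, iteration does the heavy lifting.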

[–] fizzle@quokk.au 7 points 22 hours ago

Improvement stagnates.

Venture capital availability reduces.

Mag 7 try to monetise to continue development.

Business adoption is tepid as long term heavy use reduces skills and productivity.

Some financial VC fund learns from a credible whistle blower that generative AI is not a pathway to AGI. Revalues their portfolio, enters administration.

The ensuing fallout triggers a global depression.

[–] Mr_Fish@lemmy.world 25 points 1 day ago (1 children)

Narrow AI will get better, even faster than normal because of the research that big AI companies are doing now, but attempts at more general AI will stop being profitable.

[–] 30p87@feddit.org 9 points 1 day ago* (last edited 1 day ago)

General "AI" is not profitable at all, even right now. Raising money is not the same as making a profit.

[–] cronenthal@discuss.tchncs.de 10 points 1 day ago

LLMs will be a standard part of software tooling like IDEs, and people won't talk about them much anymore.

LLMs and image/video generation will be a standard part of adult entertainment.

[–] yessikg@fedia.io 2 points 19 hours ago

LLMs will go the way of NFTs. No AGI will exist yet.

[–] sturmblast@lemmy.world 5 points 1 day ago

Bubble go burst

[–] raicon@lemmy.world 6 points 1 day ago

People hate LLMs because of their unreliability, and they are right. But AI is a much more vast field.

As soon as we have more reliable, causal and general intelligence, the opinions will change.

I personally believe that humans have no clue how limited our brain power is. So much so that there will be no AGIs. Only ASIs. Same thing that happened with chess bots.

[–] backalleycoyote@lemmy.today 2 points 22 hours ago

It will worm its way into more and more everyday interactions, and the bulk of society will accept it like they did cameras everywhere, smart appliances, the digital tracking device in their pocket, and screens in their cars instead of physical controls. Avoiding it will become a lifestyle choice that takes effort, and the secondary market for decades-old digital/analog technology will continue to grow.

[–] quediuspayu@lemmy.dbzer0.com 4 points 1 day ago

People will eventually learn where it is useful and where it is not.

[–] jeena@piefed.jeena.net 3 points 1 day ago (3 children)

I predict software engineers won't go away, but coders will go away.

[–] chunes@lemmy.world 1 points 9 hours ago* (last edited 9 hours ago)

Judging by the quality of software, I'd say hackers went away at least 15 years ago.

[–] ViatorOmnium@piefed.social 10 points 21 hours ago* (last edited 20 hours ago)

LLMs are shit at doing large code changes, and always will be, because they fundamentally can't reason. LLMs are good text completers, and that's their place in the IDE.

My prediction is that most well-run organizations are going to push back against coding agents soon. Look at the reports that even Amazon is now demanding that senior engineers review AI code changes and take responsibility, which doesn't scale now, and will scale even less in the future if we train fewer coders.

[–] Mr_Fish@lemmy.world 4 points 1 day ago

That's already a thing in some areas of programming. Block programming, where you just drag and connect blocks, is very possible, especially in game development.

[–] hitmyspot@aussie.zone 1 points 1 day ago

Most interaction between people and computers will move from keyboard and mouse to spoken word.

[–] 65gmexl3@lemmy.world 1 points 1 day ago

Me: first robotic AI, but probably > 10yrs

[–] Iconoclast@feddit.uk 1 points 1 day ago (1 children)

Impossible to make predictions that far out. We could have AI models that are barely better than our current ones, or we might be extinct, with our AGI system already spreading out into the universe.

[–] Thedogdrinkscoffee@lemmy.ca 4 points 1 day ago

"It’s Difficult to Make Predictions, Especially About the Future"

~ apocryphal