I've been wondering if Stremio has been doing this. They had no updates for years, and now there's at least one every week that breaks more things. I'm on the old version, just watching the chaos unfold.
Microslop played the long game when they bought github
Eh. I never considered myself some hard-core old professional, but:
The LLM will not interact with the developers of a library or tool, nor submit usable bug reports, nor be aware of any potential issues, no matter how well documented they are.
If an LLM introduces a dependency, I will sure as hell go check it myself. But do enough people skip that for this to become a problem?
There's a term called "dependency hell". Sure, this one dependency is fine, but it depends on 3 other libraries, those 3 depend on a total of 7 others, etc...
It's exacerbated by "oh, this library is updated for no reason other than its version being newer, so we need to force that bleeding edge on any ecosystem we're in" thinking.
We've absolutely lost the careful, measured long-term release and maintenance cadence that we built the Internet on.
Compare Systemd.
The worst dependency hell is when one library has a strict, exact-version dependency and another library uses that same dependency. When the second library raises its minimum version of the dependency above the exact version the first one needs, THAT'S dependency hell.
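As a sketch of that conflict (the package names, versions, and toy resolver below are all made up for illustration), a resolver asked to satisfy both an exact pin and a raised minimum finds no candidate at all:

```python
# Minimal sketch of the version-pinning conflict described above.
# Versions are tuples like (major, minor, patch); constraints are
# ("==", version) pins or (">=", version) minimums.

def satisfies(version, constraint):
    """Check one version against a single ('==' or '>=') constraint."""
    op, required = constraint
    if op == "==":
        return version == required
    if op == ">=":
        return version >= required
    raise ValueError(f"unknown operator: {op}")

def resolve(constraints, available):
    """Return the newest available version satisfying ALL constraints, or None."""
    for version in sorted(available, reverse=True):
        if all(satisfies(version, c) for c in constraints):
            return version
    return None

# lib_a pins utils to exactly 1.2.0; lib_b now demands at least 2.0.0.
constraints = [("==", (1, 2, 0)),   # lib_a's strict pin
               (">=", (2, 0, 0))]   # lib_b's raised minimum
available = [(1, 2, 0), (2, 0, 0), (2, 1, 0)]

print(resolve(constraints, available))  # None: no version can satisfy both
```

Either constraint alone resolves fine; together they are unsatisfiable, which is exactly the hell being described.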
This wouldn't be a problem if libraries didn't frequently make breaking changes to their api.
"Move fast and break things" is for startups with no userbase, not libraries with millions of users.
ever heard of node.js?
Heard of it, not used it though. I've heard the isEven(tm) jokes too, but I never thought it went like this in anything intended for external use.
there's at least one guy i know of on github whose claim to fame is finding code duplicated in big corpos' existing node codebases, breaking it out into a library, then PR-ing the original codebase with "instead of doing this manually, switch to depending on this library", then adding "my code is used by " to his profile. he had thousands of libraries like that last i checked, most of them less than ten lines of code. the manifest and other boilerplate are way larger than the actual code.
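For a sense of scale, the kind of micro-library being described is roughly this (a hypothetical Python rendition of the isEven-style package being joked about):

```python
# The entire "library", sketched in Python. The package name and
# everything around it are hypothetical; the point is that the
# manifest, license, and CI boilerplate dwarf the code itself.

def is_even(n: int) -> bool:
    """Return True if n is even."""
    return n % 2 == 0
```

One line of logic, yet as a published package it still drags a manifest, a changelog, and a maintainer into everyone's dependency tree.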
Damn. isEven come alive. But hilarious enough to watch someone do it :)
Your node_modules directory can get so bloated that the community came up with different package managers just for deduplication! pnpm, for example, makes one global-adjacent cache, and then just symlinks the dependencies as needed. This is because regular npm doesn't deduplicate, because what if the package changed in the 20ms since I downloaded it for nuxt? (Sorry Nuxt users, had to pick a name)
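A rough sketch of that dedup idea (the paths, package name, and contents here are invented): store each package once under a content hash, then symlink it into every project that needs it:

```python
# pnpm-style deduplication, sketched in Python: a content-addressed
# store holds one copy of each package; projects get symlinks.

import hashlib
import os
import tempfile

def store_package(store_dir, name, content: bytes):
    """Write content into the store under its hash; return the store path."""
    digest = hashlib.sha256(content).hexdigest()
    path = os.path.join(store_dir, f"{name}-{digest}")
    if not os.path.exists(path):          # dedup: write once, reuse afterwards
        with open(path, "wb") as f:
            f.write(content)
    return path

def link_into_project(project_dir, name, store_path):
    """Symlink the stored package into the project's node_modules."""
    modules = os.path.join(project_dir, "node_modules")
    os.makedirs(modules, exist_ok=True)
    link = os.path.join(modules, name)
    os.symlink(store_path, link)
    return link

root = tempfile.mkdtemp()
store = os.path.join(root, "store")
os.makedirs(store)
a = os.path.join(root, "app_a")
b = os.path.join(root, "app_b")

# Two projects, one dependency: a single copy on disk, two symlinks.
p1 = store_package(store, "left-pad", b"module.exports = ...")
p2 = store_package(store, "left-pad", b"module.exports = ...")  # hits the cache
la = link_into_project(a, "left-pad", p1)
lb = link_into_project(b, "left-pad", p1)
```

The second `store_package` call finds the hash already present and writes nothing, which is the whole trick: N projects, one copy.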
Given an example from another reply... yeah. Things are fucked
Interesting. I thought this would be another post about slop PRs and bug reports, but no, it's about open source projects not being promoted by AI and missing out on adoption and revenue opportunities.
So I think we definitely see (and will see more) 'templatization' of software development. Some ways of writing apps that are easy for AI to understand, and that it promotes, will see wider and wider adoption. Not just tools and libraries but also folder structures, design patterns, and so on. I'm not sure how bad this will be long term. Maybe it will just stabilize tooling? Do we really need a new React state management library every 6 months?
It's hard to tell how this will affect the development of proper tools (not vibe-coded ones). Commercial tools struggling to get traction will definitely suffer, but most of the libraries I use are hobby projects. I still see good tools with good documentation getting enough attention to grow, even fairly obscure ones. Then again, those tools often struggle to get enough contributors... Are we going to see a split between vibe-coded template apps for junior devs and proper tools for professionals? Will the EU step in and fund the core projects? I still see a way forward, so I'm fairly optimistic, but it's really hard to predict what will happen in a couple of years.
I am building a commercial application in my free time, and I can definitely see evidence of this templatization. There are things that are very common in C# developers' implementations which I deliberately don't want to do. The AI will do them with reckless abandon. I can tell it not to, but they sneak back in.
OSS library funding has been a huge issue in general. I really think the companies that have trillion dollar market caps can fund the development of top libraries but they just don't.
Only until AI investor money dries up and vibe coding gets very expensive very quickly. Kinda like how Uber isn't way cheaper than a taxi now.
You say "dries up" like that wasn't always the end goal for rideshare apps. Disrupt, overtake, starve out, hike prices.
With Uber that was indeed the plan and it worked. The same plan was there for AI, but AI isn't doing so well on the whole overtake and starve out thing. They'll have to jump directly to hiking prices. So it's only kinda like Uber.
This.
until AI investor money dries up
Is that the latest term for "when hell freezes over"?
Unless I misunderstood, it will eventually dry up? Investors aren't going to be willing to give money with no returns indefinitely
Microsoft steeply lowered expectations for its AI sales team (though they have denied this since they got pummelled in their quarterly), and there's been a lot of news about how investors are unhappy with all the circular AI investments pumping those stocks. When the bubble pops (and all signs point to that), investors will flee. You'll see consolidation, buy-outs, hell, maybe even some bullshit bailouts, but ultimately it has to be a sustainable model, and that means it will cost developers or they will be pummeled with ads (probably both).
A majority of CEOs are saying their AI spend has not paid off. Those are the primary customers, not your average joe. MIT reports a 95% failure rate for generative AI at companies. Altman still hasn't turned a profit. There are serious power build-out problems for new AI data centers (let alone the chips needed). It's an overheated, reactionary market. It's the dot-com bubble all over again.
There will be some more spending to make sure a good chunk of CEOs "add value" (FOMO), and then a critical juncture where AI spending contracts sharply when they continue to see no returns, accelerated if the US economy goes tits up. Then the dominoes fall.
Hah, they wish. It's a business, and they need a return on investment eventually. Maybe if we were in a zero interest rate world again, but even that didn't last.
I wouldn't be surprised if that's only a temporary problem - if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open source models are starting to become competitive with commercial models. If we can continue finding ways to get more out of smaller, open-source models, then maybe we'll be able to run them on consumer or prosumer-grade hardware.
GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.
It's not going to be enough to spend thirty thousand dollars a year per person on it, though, so the current first mover corps are still fucked. I agree that the tech itself has huge possibilities, just not the pets.com ass bullshit that is currently being pushed.
So far, there is a serious cognitive step needed to get productive that LLMs just can't make. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their token window. Debugging something vague doesn't work. Fact-checking isn't something they do well.
They don't need the entire project to fit in their token windows. There are ways to make them work effectively in large projects. It takes some learning and effort, but I see it regularly in multiple large, complex monorepos.
I still feel somewhat new-ish to using LLMs for code (I was kinda forced to start learning), but when I first jumped into a big codebase with AI configs/docs from people who have been using LLMs for a while, I was kinda shocked. The LLM worked far better than I had ever experienced.
It actually takes a bit of skill to set up a decent workflow/configuration for these things. If you just jump into a big repo that doesn't have configs/docs/optimizations for LLMs, or you haven't figured out a decent workflow, then they'll be underwhelming and significantly less productive.
(I know I'll get downvoted just for describing my experience and observations here, but I don't care. I miss the pre-LLM days very much, but they're gone, whether we like it or not.)
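For what it's worth, the kind of repo-level configuration being described often looks something like this (the filename, paths, and conventions below are entirely hypothetical, just to show the shape of it):

```
# AGENTS.md — repo-level notes for a coding assistant (hypothetical)

## Layout
- services/api/   : REST handlers; follow the patterns in services/api/users/
- packages/core/  : shared domain logic; no framework imports allowed here

## Conventions
- Run `make test` before proposing changes; never edit generated files in gen/
- New modules need a short README and an entry in docs/index.md
```

The point is that the assistant reads this file before touching code, so it doesn't have to fit the whole monorepo in its context window to follow the house rules.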
It actually takes a bit of skill to set up a decent workflow/configuration for these things
Exactly this. You can't just replace experienced people with it, and that's basically how it's sold.
Yep, it's a tool for engineers. People who try to ship vibe-coded slop to production will often eventually need an engineer when things fall apart.
This sounds a lot like every framework, 20 years ago you could have written that about rails.
Which IMO makes sense because if code isn't solving anything interesting then you can dynamically generate it relatively easily, and it's easy to get demos up and running, but neither can help you solve interesting problems.
Which isn't to say it won't have a major impact on software for decades, especially low-effort apps.
So far, there is a serious cognitive step needed to get productive that LLMs just can't make. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their token window.
There's a remarkably effective solution for this, that helps both humans and models alike - write documentation.
It's actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?
Funnily enough, AI itself is a great tool for creating that high-quality documentation fairly efficiently, though obviously not autonomously.
Even complex systems can be documented to a level that makes it easy, and much less laborious, for the subject experts and architects to comb through for the final version.
High-quality documentation assumes there's someone with experience working on this. That's not the vibe coding they're selling.
Vibe coding is a black hole. I've had some colleagues try and pass stuff off.
What I'm learning about what matters is that the code itself is secondary to the understanding you develop by creating the code. You don't create the code? You don't develop the understanding. Without the understanding, there is nothing.
Yes. And using the LLM to generate then developing the requisite understanding and making it maintainable is slower than just writing it in the first place. And that effect compounds with repetition.
LLMs definitely kill the trust in open source software, because now everything can be a vibe-coded mess and it's sometimes hard to check.
LLMs definitely kill the trust in ~~open source~~ software, because now everything can be a vibe-coded mess and it's sometimes hard to check.
Might make open source more trustworthy. It can't be any harder to check than closed source.
yeah it's to the point now where if I see emojis in the readme.md on the repo I just don't even bother.
I used to use emojis in my documentation very lightly because I thought they were a good way to provide visual cues. But now with all the people vibe coding their own readme docs with freaking emojis everywhere I have to stop using them.
Mildly annoying.
If the abominable intelligence is killing every corner of things we consider good, it's time to start killing the "AI"...
The killing part is not necessarily people vibe coding programs into OSS projects. Even if the OSS itself is not vibe coded, people using AI to integrate with it will result in lower engagement, thus killing the ecosystem:
Together, these patterns suggest that AI mediation can divert interaction away from the surfaces where OSS projects monetize and recruit contributors.
From Section 2.3 of the paper.
Open source is not only about publishing code: it's about quality, verifiable, reproducible code at work. If LLMs can't do that, those "vibe coding" projects will hit a hard wall. Still, it's quite clear they badly impact the FOSS ecosystem.