To be honest, I think we're losing credibility. I don't know what else to put in the description.

tal@lemmy.today:

“Write code without learning it!” I get it. I’ve struggled learning to program for 10 years. But every time I hear a programmer talk about AIGen code, it’s never good, and my job’s software has gotten less stable as AIGen code has been added in.

I'm similarly dubious about using LLMs to do code. I'm certainly not opposed to automation; software development has seen massive amounts of automation over the decades. But software is not very tolerant of errors.

If you're using an LLM to generate text for human consumption, then an error here or there often isn't a huge deal. We get cued by text; "approximately right" is often pretty good for the way we process language. Same thing with images. It's why, say, an oil painting works; it's not a perfect depiction of the world, but it's enough to cue our brain.

There are situations where "approximately right" might be more reasonable in software development. There are some where it might even be pretty good: instead of manually writing commit messages, which are for human consumption, we could have LLMs describe what code changes do, and as LLMs get better, the descriptions improve too.
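As a concrete illustration, here's a minimal sketch of that workflow, assuming a local Ollama instance on its default port; the model name and prompt wording are placeholder choices, not a recommendation:

```python
# Sketch: draft a commit message from the staged diff via a local LLM.
# Assumes Ollama is running at its default port; model name and prompt
# are placeholders.
import json
import subprocess
import urllib.request

def draft_commit_message(model: str = "llama3") -> str:
    # Collect the staged changes the message should describe.
    diff = subprocess.run(
        ["git", "diff", "--staged"], capture_output=True, text=True, check=True
    ).stdout
    payload = json.dumps({
        "model": model,
        "prompt": "Summarize this change as a one-line commit message:\n" + diff,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()

print(draft_commit_message())
```

A human would still review the draft before committing; the point is that the output is for human consumption, so an imperfect summary is tolerable.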

This doesn't mean that I think AI can't work for writing code. I'm sure that it's possible to build an AGI that does fantastic things. I'm just not very impressed by using a straight LLM, and I think that the limitations are pretty fundamental.

I'm not completely willing to say that it's impossible. Maybe we could develop, oh, some kind of very strongly typed programming language aimed specifically at this job, where LLMs are a good heuristic for coming up with solutions and the type system checks that work. That might not be possible, but right now, we're trying to work with programming languages designed for humans.
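A weak version of that idea is already available through gradual typing: wrap domain quantities in distinct types and let a static checker reject generated code that confuses them. A minimal sketch (the unit types and the "generated" call are invented for illustration):

```python
# Sketch: domain types as a cheap check on generated code.
# Meters/Feet and the "LLM-proposed" call below are illustrative only.
from typing import NewType

Meters = NewType("Meters", float)
Feet = NewType("Feet", float)

def braking_distance(speed_mps: float) -> Meters:
    # Simple kinematics: v^2 / (2 * g * friction coefficient).
    return Meters(speed_mps * speed_mps / (2 * 9.8 * 0.7))

def clearance_ok(gap: Feet) -> bool:
    return gap > 10.0

# A plausible-looking generated call that confuses units.
# A checker like mypy or pyright rejects it: Meters is not Feet.
ok = clearance_ok(braking_distance(30.0))
```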

Maybe LLMs will pave the way to getting systems in place that have computers do software engineering, and then later we can just slip in more-sophisticated AI.

But I don't think that the current approach will wind up being the solution.

“Summarize a book!” I am doing this for fun; why would I want to?

Summarizing text (probably not primarily books) is one area that I think might be more useful. It is a task that many people do spend time doing. Maybe it's combining multiple reports from subordinates, say, and then pushing a summary upwards.
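The usual shape of that task is two-stage, map-reduce style summarization: condense each report, then condense the condensations. A minimal sketch, with a trivial truncation function standing in for the actual model call:

```python
# Sketch: map-reduce summarization of several reports.
# llm_summarize is a placeholder; truncation stands in for a real
# model call so the example runs end to end.
def llm_summarize(text: str, limit_words: int) -> str:
    return " ".join(text.split()[:limit_words])

def summarize_reports(reports: list[str]) -> str:
    # Map: condense each subordinate's report independently.
    partials = [llm_summarize(r, limit_words=100) for r in reports]
    # Reduce: merge the partial summaries into one upward-facing brief.
    return llm_summarize("\n\n".join(partials), limit_words=200)

print(summarize_reports(["Report A text...", "Report B text..."]))
```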

“Generate any image!” I get the desire, but I can’t ignore the broader context of how we treat artists. Also the images don’t look that great anyway.

I think that in general, quality issues are not fundamental.

There are some things that we want to do that I don't think the current approaches will do well, like producing consistent representations of characters. There are people working on it. Will their approaches work? Maybe. I think that for, say, editorial illustration for a magazine, it can be a pretty decent tool today.

I've also been fairly impressed with voice synth done via genAI, though it's one area that I haven't dug into deeply.

I think that there's a solid use case for voice query and response on smartphones. On a desktop, I can generally sit down and browse webpages, even if an LLM might combine information more quickly than I can manually. Someone, say, driving a car or walking somewhere can ask a question and have an LLM spit out an answer.

I think that image tagging can be a pretty useful case. It doesn't have to be perfect; it just has to be a lot cheaper and more universal than having humans do it.
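Zero-shot tagging with an off-the-shelf model like CLIP is one way this already works in practice; a minimal sketch, assuming the Hugging Face transformers and Pillow packages (the labels, image path, and 0.2 threshold are placeholders):

```python
# Sketch: cheap, imperfect image tagging with zero-shot CLIP.
# Labels, image path, and threshold are placeholder choices.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a dog", "a cat", "a beach", "a city street", "a document scan"]
image = Image.open("photo.jpg")

# Score the image against every candidate label in one pass.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

# Keep any tag the model is reasonably confident about.
tags = [label for label, p in zip(labels, probs.tolist()) if p > 0.2]
print(tags)
```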

Some of what we're doing now, both on the part of implementers and of the R&D people working on the core technologies, is understanding what the fundamental roadblocks are, and quantifying strengths and weaknesses. That's part of the process for anything you do. I can see an argument that more-limited resources should be put on implementation, but a company is going to have to go out and try something and then say "okay, this is what does and doesn't work for us" in order to know what to require in the next iteration.

And that's not new. Take, oh, the Macintosh. Apple tried to put out the Lisa. It wasn't a market success. But taking what did work and correcting what didn't was a lot of what led to the Macintosh, which was a much larger success and closer to what the market wanted. It's going to be an iterative process.

I also think that some of this is laying the groundwork for more-sophisticated AI systems to be dropped in. If you think of an LLM now as a placeholder for a more-sophisticated system down the line, the interfaces being built into other software today will be able to make use of those later systems; you just change out the backend. So some of this is positioning not just for the current crop of systems, but for tomorrow's crop.
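In code, that positioning is just programming against an interface; a minimal sketch (the Protocol and both backends are invented for illustration):

```python
# Sketch: code against an interface so the model behind it can be
# swapped later. The Protocol and both backends are illustrative.
from typing import Protocol

class Assistant(Protocol):
    def answer(self, question: str) -> str: ...

class LLMBackend:
    def answer(self, question: str) -> str:
        return "stub: call today's LLM here"

class FutureBackend:
    def answer(self, question: str) -> str:
        return "stub: drop in a more sophisticated system here"

def handle_query(assistant: Assistant, question: str) -> str:
    # Application code never names a specific model or vendor.
    return assistant.answer(question)

print(handle_query(LLMBackend(), "What changed in this release?"))
```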

If you remember the Web around the late 1990s, the websites companies did have were often pretty amateurish-looking. They were often not very useful. The teams that made them didn't have a lot of resources. The tools for building websites were still limited, and best practices hadn't been developed.

https://www.webdesignmuseum.org/gallery/year-1997

But what those companies did was get a website up, get people using it, and start building the infrastructure for what, some years later, became a much more important part of the company's interface and operations.

I think that that's where we are now regarding the use of AI. Some people are doing things that won't wind up ultimately working (the way Web portals never really took over, for example). Some important things, like widespread encryption, weren't yet deployed back then. The languages and toolkits for doing development didn't really exist yet. Stuff like Web search, which today is a lot more approachable and something we simply consider fundamental to use of the Web, wasn't all that great. If you looked at the Web in 1997, it had a lot of deficiencies compared to brick-and-mortar companies. But...that also wasn't where things stayed.

Today, we're making dramatic changes to how models work, like the rise of MoEs. I don't think that there's much of a consensus on what hardware we'll wind up using. Training is computationally expensive. Just using models on your own computer still involves a fair amount of technical knowledge, the way the MS-DOS era on personal computers prevented a lot of people from being able to do much with them. There are efficiency issues, and basic techniques for doing things like condensing knowledge are still being developed. The LLMs people are building today have very little "mutable" memory; you're taking a snapshot of information at training time and making something that can do very little learning at runtime. But if I had to make a guess, a lot of those things will be worked out.
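For a sense of what the MoE change means concretely: a router sends each token through only its top-k experts, so a model can grow its total parameters without growing per-token compute. A toy sketch (sizes arbitrary; real models add load balancing and batched dispatch):

```python
# Sketch: top-k mixture-of-experts routing, the idea behind "MoE".
# Sizes are arbitrary; real implementations are far more elaborate.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim: int = 64, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        # Router scores every expert; keep only the top-k per token.
        weights, idx = self.router(x).softmax(dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        # Each token runs through only its top-k experts.
        for t, (w_row, i_row) in enumerate(zip(weights, idx)):
            for w, i in zip(w_row, i_row):
                out[t] += w * self.experts[int(i)](x[t])
        return out

y = TinyMoE()(torch.randn(5, 64))  # 5 tokens, 2 of 8 experts active each
```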

I am pretty bullish on AI in the long term. I think that we're going to figure out general intelligence, and make things that can increasingly do human-level things. I don't think that that's going to be a hundred years in the future. I think that it'll be sooner.

But I don't know whether any one company doing something today is going to be a massive success, especially in the next, say, five years. I don't know whether we will fundamentally change some of the approaches we use. We worked on self-driving cars for a long time. I remember watching video of early self-driving cars in the mid-1980s. It's 2026 now. That was a long time. I can get in a robotaxi and be taken down the freeway and around my metro area. It's still not a complete drop-in replacement for human drivers. But we're getting pretty close to being able to use the things in most of the same ways that we use human drivers. If you'd asked me in 2000 whether we would make self-driving cars, I would have said basically what I say about advanced AI today: I'm quite bullish on the long-term outcome, but I couldn't tell you exactly when it'll happen. And I think that advanced AI will be extremely impactful.

“Summarizing text (probably not primarily books) is one area that I think might be more useful. It is a task that many people do spend time doing. Maybe it's combining multiple reports from subordinates, say, and then pushing a summary upwards.”

The problem I have with summarizing text is that it often misses key features. To use something other than books as an example: at my work we have a knowledge base that we reference. We work in all 50 states, and the laws vary, and the AI will very frequently quote the wrong state's laws, or tell us to do something that's possible in one state but not in others. Could this get better? Maybe, but I'm not super convinced.

The rest of the comment isn't exactly disagreeable; I'm just also concerned about the social costs. Not just things like lost jobs; those always happen when new things come in. It sucks, but we do move on, and entire professions have been forgotten because they were automated long ago. A lot of the opinions I have about AI are a bit reactionary, but at the same time, headlines like "AI chatbot talks child into suicide, and it's really easy to get it to do that" are. Y'know. Not a great thing to read, especially when the tech is steeped in controversy in all directions. Copyright (which isn't an issue they'll ever get past without massive changes and scrapping entire models), bringing smaller sites down with heavy scraping, job loss, environmental concerns (however overblown they may or may not be), driving up utility bills in some areas, contributing to the RAM shortage... It's a whole lot of bad stuff, and it's being forced into every aspect of our daily lives.

All this for something that people largely don't want. I don't remember this many people being anti-internet or anti-computer back then; at worst I remember articles saying the Web would be a passing fad. Granted, I was a kid when the internet was really kicking off, but I was in an area where people were still mad about seatbelts, so I'd imagine at least a handful would've hated the internet if it had been even half as unwanted as AI is everywhere outside of CEO offices.

I'm sure AI will find some use cases; I just don't think they're going to be user-facing at all, mostly because of how much they cost versus how much people will be willing to pay.