this post was submitted on 17 Mar 2026
414 points (98.4% liked)

Programming

Excerpt:

"Even within the coding, it's not working well," said Smiley. "I'll give you an example. Code can look right and pass the unit tests and still be wrong. The way you measure that is typically in benchmark tests. So a lot of these companies haven't engaged in a proper feedback loop to see what the impact of AI coding is on the outcomes they care about. Lines of code, number of [pull requests], these are liabilities. These are not measures of engineering excellence."

Measures of engineering excellence, said Smiley, include metrics like deployment frequency, lead time to production, change failure rate, mean time to restore, and incident severity. And we need a new set of metrics, he insists, to measure how AI affects engineering performance.
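As a sketch of how the first two of those metrics might be computed, here is a minimal Python example; the deployment log, dates, and incident flags are invented for illustration, not taken from the article:

```python
from datetime import date

# Hypothetical deployment log over a two-week window:
# (deploy date, whether the deploy caused an incident).
deployments = [
    (date(2026, 3, 2), False),
    (date(2026, 3, 5), True),
    (date(2026, 3, 9), False),
    (date(2026, 3, 12), False),
    (date(2026, 3, 16), True),
]

# Deployment frequency: deploys per day over the 14-day window.
deployment_frequency = len(deployments) / 14

# Change failure rate: fraction of deploys that caused an incident.
change_failure_rate = sum(caused for _, caused in deployments) / len(deployments)

print(f"deploys/day: {deployment_frequency:.2f}")         # 0.36
print(f"change failure rate: {change_failure_rate:.0%}")  # 40%
```

Lead time to production, mean time to restore, and incident severity would come from the same kind of event log, just keyed on timestamps rather than counts.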

"We don't know what those are yet," he said.

One metric that might be helpful, he said, is measuring tokens burned to get to an approved pull request – a formally accepted change in software. That's the kind of thing that needs to be assessed to determine whether AI helps an organization's engineering practice.
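A minimal sketch of that tokens-burned-per-approved-PR idea, assuming hypothetical per-PR records (the IDs, token counts, and approval outcomes below are all invented); the key design choice is charging the tokens spent on abandoned PRs to the ones that landed:

```python
# Hypothetical per-PR records: total tokens spent across all AI attempts
# on the change, and whether the PR was ultimately approved.
prs = [
    {"id": 101, "tokens": 48_000, "approved": True},
    {"id": 102, "tokens": 210_000, "approved": False},  # abandoned attempt
    {"id": 103, "tokens": 95_000, "approved": True},
]

total_tokens = sum(p["tokens"] for p in prs)
approved = [p for p in prs if p["approved"]]

# Tokens burned per approved PR: failed attempts count against the wins,
# so the metric worsens as the AI thrashes without landing changes.
tokens_per_approved_pr = total_tokens / len(approved)
print(tokens_per_approved_pr)  # 176500.0
```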

To underscore the consequences of not having that kind of data, Smiley pointed to a recent attempt to rewrite SQLite in Rust using AI.

"It passed all the unit tests, the shape of the code looks right," he said. "It's 3.7x more lines of code, and it performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It's a dumpster fire. Throw it away. All that money you spent on it is worthless."

All the optimism about using AI for coding, Smiley argues, comes from measuring the wrong things.

"Coding works if you measure lines of code and pull requests," he said. "Coding does not work if you measure quality and team performance. There's no evidence to suggest that that's moving in a positive direction."

[–] saltesc@lemmy.world 12 points 9 hours ago* (last edited 9 hours ago) (2 children)

We'll be in this state until actually intelligent AI comes along. Some evolution of machine learning beyond LLMs.

Yep. The methodology of LLMs is effectively an evolution of Markov chains. If someone hadn't recently changed the definition of AI to include "the illusion of intelligence", we wouldn't be calling this AI. It's just algorithmic, with a few extra steps to try to keep the algorithm on-topic.
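To make the comparison concrete, here is a toy bigram Markov chain (the corpus is invented for illustration). The next word depends only on the current word; an LLM conditions on a long context instead, but generation is still "sample the next token from a learned distribution", which is the point of the analogy:

```python
import random
from collections import defaultdict

# Tiny invented corpus; real models train on vastly more text.
corpus = "the cat sat on the mat and the cat ran".split()

# Bigram transition table: word -> list of observed next words.
transitions = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    transitions[cur].append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(5):
    choices = transitions.get(word)
    if not choices:  # dead end: word never appears mid-corpus
        break
    word = random.choice(choices)
    output.append(word)
print(" ".join(output))
```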

These types of things we've had all along in generative algorithms. I think LLMs being more publicly visible is why someone started calling it AI now.

So we've basically hit the ceiling straight out of the gate, and progress from here is neither speeding up nor slowing down. We'll have another step forward in predictive algorithms in the future, but not now. It's usually a once-a-decade thing, and the size of the advance varies.

[–] OpenStars@piefed.social 1 points 1 hour ago

People have been trying to call things "AI" for at least the last half century, with varying degrees of success. They were champing at the bit for this before most of us here were even alive.

We are at end-stage capitalism and things other than scientific discoveries and technological engineering marvels are driving the show now. Money is made regardless of reality, and cultural shifts follow the money. Case in point: we too here are calling this "AI".

[–] Jesus_666@lemmy.world 2 points 6 hours ago

Of course LISP machines didn't crash the hardware market and make up 50% of the entire economy. Other than that it's, as Shirley Bassey put it, all just a little bit of history repeating.