this post was submitted on 21 May 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] Irelephant@lemm.ee 9 points 14 hours ago

Came here to post this, funnily enough.

We're poisoning people's air for this.

[–] ReversalHatchery@beehaw.org 12 points 15 hours ago

just look at it. it's not enough that AI is boiling the planet; with every iteration of copilot, all those automatic checks get rerun! on the first PR mentioned, the checks had already been running for 20 minutes when I read it, and there's like a dozen of them!

other projects have to pay for processing time on github actions!!

this is insanity

[–] o7___o7@awful.systems 3 points 11 hours ago

An image of a Github-themed restaurant that serves poop burgers.

[–] swlabr@awful.systems 21 points 18 hours ago (1 children)

you all joke, but my mind is so expanded by stimulants that I, and only I, can see how this dogshit code will one day purchase all the car manufacturers and build murderbots

[–] Soyweiser@awful.systems 7 points 16 hours ago (1 children)

Look, I'm def on team Murderbot, but when ~~we~~ the AIs start building them I really hope Martha Wells gets some kickbacks at least.

[–] o7___o7@awful.systems 6 points 13 hours ago* (last edited 13 hours ago)

I love how Wells has given us both a great series of stories AND a jokey terminator analog to defuse the mAnLy trope of building and/or fighting terminators.

[–] Soyweiser@awful.systems 16 points 18 hours ago* (last edited 17 hours ago)

No real understanding of what it's doing, it's just guessing.

Are they talking about the LLMs or the people who think just chatting with the LLM will fix it? :)

E: from a comment about this on hackernews:

Funniest PRs are the ones that "resolves" test failures by removing/commenting out the test cases, or change the assertions.

Perfect, no notes. Ship

[–] sailor_sega_saturn@awful.systems 23 points 19 hours ago* (last edited 19 hours ago) (1 children)

Ah yes, the typical workflow for LLM-generated changes:

  1. LLM produces nonsense at the behest of employee A.
  2. Employee B leaves a bunch of edits and suggestions to hammer it into something that's sloppy but almost kind of makes sense. A soul-sucking, error-prone process that takes twice as long as just writing the dang code.
  3. Code submitted!
  4. Employee A gets promoted.

Also the fact that this isn't integrated with tests shows how rushed the implementation was. Not even LLM optimists should want code changes that don't compile or that break tests.

[–] wjs018@piefed.social 12 points 19 hours ago* (last edited 19 hours ago) (1 children)

I just looked at the first PR out of curiosity, and wow...

this isn't integrated with tests

That's the part that surprised me the most. It failed the existing automation. Even after being prompted to fix the failing tests, it proudly added a commit "fixing" them (they still didn't pass...something that copilot should really be able to check). Then the dev had to step in and explain why the test was failing and how to fix the code to make it pass. With this much handholding, all of this could have been done much faster and cleaner without any AI involvement at all.

[–] zbyte64@awful.systems 11 points 19 hours ago

The point is to get open source maintainers to further train their model for them, since they've already scraped all our code. I wonder if this will become a larger trend among corporate-owned open source projects.

[–] swlabr@awful.systems 7 points 18 hours ago

Someone should write a script that estimates how much time has been spent re-fondling LLM PRs on GitHub.

[–] Kowowow@lemmy.ca 5 points 19 hours ago (4 children)

Is there a reason why that AI "evolution" thing doesn't work for code? In theory shouldn't it be at least decent?

[–] V0ldek@awful.systems 2 points 47 minutes ago* (last edited 47 minutes ago)

For LLMs specifically? Code is not text, aside from the most clinical, dictionary definition of "text".

But even then, it also fails at writing coherent short- or long-form prose, so even if code were "just text" it'd fail equally badly.

[–] scruiser@awful.systems 8 points 16 hours ago

To elaborate on the other answers about AlphaEvolve: the LLM is only one component of it, acting as the generator of random mutations in the evolutionary process. The LLM promoters like to emphasize the involvement of LLMs, but separated from the evolutionary algorithm guiding the process through repeated generations, an LLM is about as likely to write good code as a dose of radiation is to spontaneously mutate you into being able to breathe underwater.

And the evolutionary aspect requires a lot of compute. They don't specify in the whitepaper how big the population is or how many generations they run, but it might be hundreds or thousands of attempted solutions repeated for dozens or hundreds of generations, which means running the LLM for thousands or tens of thousands of attempted solutions and testing each one against the evaluation function just to produce one piece of optimized code. That isn't remotely affordable or even feasible for general software development, even if you reworked your entire development process into something like test-driven development on steroids in order to write enough tests for the evaluation function (and you would probably get stuck on that step, because it outright isn't possible for most practical real-world software).
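To make the shape of the thing concrete, the loop is roughly like this (a sketch only; the function names, population size, and generation count are invented for illustration, not taken from the whitepaper):

```python
# Sketch of an AlphaEvolve-style loop: the LLM is only the mutation operator,
# and the evaluation function does the actual steering. All names and numbers
# here are made up, not from the whitepaper.
import random

def evolve(seed_program, eval_fn, llm_mutate, population_size=100, generations=50):
    population = [seed_program]
    for _ in range(generations):
        # One LLM call per candidate, every generation: that's
        # population_size * generations invocations per optimized program.
        candidates = [llm_mutate(random.choice(population))
                      for _ in range(population_size)]
        # Every candidate also gets run against the evaluation function.
        scored = sorted(candidates, key=eval_fn, reverse=True)
        # Keep the fittest few as parents for the next generation.
        population = scored[:10]
    return max(population, key=eval_fn)
```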

AlphaEvolve's successes are all on very specific, well-defined, tightly constrained problems: finding particular algorithms, as opposed to doing general software development.

[–] o7___o7@awful.systems 9 points 18 hours ago* (last edited 18 hours ago)

zbyte64 gave a great answer. I visualize it like this:

Writing software that does a thing correctly within well defined time and space constraints is nothing like climbing a smooth gradient to a cozy global maximum.

On a good day, it's like hopping on a pogo stick around a spiky, discontinuous, weirdly-connected n-dimensional manifold filled with landmines (for large values of n).

The landmines don't just explode. Sometimes they have unpredictable comedic effects, such as ruining your weekend two months from now.

Evolution is simply the wrong tool for the job.

[–] zbyte64@awful.systems 7 points 19 hours ago (1 children)

Are you talking about AlphaEvolve? https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

First, Microsoft isn't using this yet, and even if they were, it wouldn't work in this context. What Google did was write a fitness function to tune the generative process. Why not use some rubric that scores the code as our fitness function? Because the function needs to be continuous for this to work well, with no sudden cliffs. They also didn't address how this would work in a multi-objective space; the technique doesn't let the LLM make reasonable trade-offs between, say, complexity and speed.
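To make the "no sudden cliffs" point concrete, here's a toy comparison (both scoring functions are hypothetical, not anything Google published):

```python
# Two toy fitness functions; neither is from the AlphaEvolve paper.

def cliff_fitness(test_results: list[bool]) -> float:
    # All-or-nothing rubric: full marks only if every test passes.
    # Nearly every mutation scores 0.0, so selection has no slope to climb.
    return 1.0 if all(test_results) else 0.0

def smooth_fitness(test_results: list[bool], runtime_s: float) -> float:
    # Graded score: fraction of tests passed minus a small speed penalty.
    # Small improvements move the number, which is what selection needs,
    # but note it bakes in one fixed trade-off between correctness and speed.
    return sum(test_results) / len(test_results) - 0.01 * runtime_s
```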

[–] Kowowow@lemmy.ca 4 points 18 hours ago

I forgot about AlphaEvolve; with all the flashy headlines around it I figured it wasn't a big deal. I was more talking about the low-level stuff, I guess, like "AI learns to play mario/walk", but I imagine it follows the same logic the other comments talk about.