this post was submitted on 21 May 2025
44 points (100.0% liked)

TechTakes

[–] sailor_sega_saturn@awful.systems 23 points 1 day ago* (last edited 1 day ago) (1 children)

Ah yes, the typical workflow for LLM-generated changes:

  1. LLM produces nonsense at the behest of employee A.
  2. Employee B leaves a bunch of edits and suggestions to hammer it into something that's sloppy but almost kind of makes sense. A soul-sucking, error-prone process that takes twice as long as just writing the dang code.
  3. Code submitted!
  4. Employee A gets promoted.

Also, the fact that this isn't integrated with tests shows how rushed the implementation was. Not even LLM optimists should want code changes that don't compile or that break tests.
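
To illustrate (a minimal sketch, not anyone's actual tooling; the `make build` / `make test` commands are placeholder assumptions): the bare-minimum gate an LLM-generated change should clear before a human ever reviews it is "does it build, do the tests pass".

```python
#!/usr/bin/env python3
"""Minimal pre-review gate: reject a change that doesn't build or breaks tests.

A sketch only -- the build/test commands below are placeholders, not any
particular project's real setup.
"""
import subprocess
import sys


def run_check(label: str, cmd: list[str]) -> bool:
    """Run one check, print PASS/FAIL, and return whether it succeeded."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0
    print(f"{label}: {'PASS' if ok else 'FAIL'}")
    if not ok:
        sys.stderr.write(result.stdout + result.stderr)
    return ok


def main() -> int:
    checks = [
        ("build", ["make", "build"]),  # placeholder build command
        ("tests", ["make", "test"]),   # placeholder test command
    ]
    # Fail fast: there's no point reviewing a change that doesn't even compile.
    for label, cmd in checks:
        if not run_check(label, cmd):
            return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```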

[–] wjs018@piefed.social 13 points 1 day ago* (last edited 1 day ago) (1 children)

I just looked at the first PR out of curiosity, and wow...

this isn't integrated with tests

That's the part that surprised me the most. It failed the existing automation. Even after being prompted to fix the failing tests, it proudly added a commit "fixing" them (they still didn't pass...something that Copilot should really be able to check). Then the dev had to step in, explain why the test was failing, and spell out how to fix the code to make it pass. With this much handholding, all of this could have been done much faster and cleaner without any AI involvement at all.
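
(For what it's worth, "should really be able to check" isn't asking for magic: CI results for a commit are exposed through GitHub's check-runs REST endpoint, so an agent could verify a fix before declaring victory. A rough sketch; the owner/repo/commit values and the `GITHUB_TOKEN` environment variable are placeholder assumptions:)

```python
"""Sketch: ask GitHub whether a commit's checks actually pass before
claiming a fix. Uses the real REST endpoint
GET /repos/{owner}/{repo}/commits/{ref}/check-runs; everything else here
(owner, repo, sha, token) is a placeholder."""
import os

import requests

OWNER, REPO, SHA = "example-org", "example-repo", "abc123"  # placeholders

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{SHA}/check-runs",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # placeholder token
    },
    timeout=30,
)
resp.raise_for_status()

# conclusion is None while a run is still in progress; anything other than
# success/skipped/neutral counts as a failure here.
failing = [
    run["name"]
    for run in resp.json()["check_runs"]
    if run["conclusion"] not in (None, "success", "skipped", "neutral")
]
print("still failing:", failing or "nothing -- safe to call it fixed")
```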

[–] zbyte64@awful.systems 11 points 1 day ago

The point is to get open-source maintainers to further train their model, since they've already scraped all our code. I wonder if this will become a larger trend among corporate-owned open source projects.