this post was submitted on 17 Dec 2024
3 points (100.0% liked)

TechTakes

1873 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
[–] mii@awful.systems 0 points 5 months ago (1 children)

Let's be real here: when people hear the word AI or LLM they don't think of any of the applications of ML that you might slap the label "potentially useful" on (notwithstanding the fact that many of those are also in an all-that-glitters-is-not-gold kind of situation). The first thing that comes to mind for almost everyone is shitty autoplag like ChatGPT, which is also what the author explicitly mentions.

[–] 9point6@lemmy.world 0 points 5 months ago (6 children)

I'm saying ChatGPT is not useless.

I'm a senior software engineer and I make use of it several times a week, either directly or via things built on top of it. Yes, you can't trust it to be perfect, but I can't trust a junior engineer to be perfect either—code review is something I've done since long before AI and will continue to do long into the future.

I empirically work quicker with it than without and the engineers I know who are still avoiding it work noticeably slower. If it was useless this would not be the case.

[–] froztbyte@awful.systems 2 points 5 months ago* (last edited 5 months ago) (18 children)

I’m a senior software engineer

ah, a señor software engineer. excusé-moi monsoir, let me back up and try once more to respect your opinion

uh, wait:

but I can’t trust a junior engineer to be perfect either

whoops no, sorry, can't do it.

jesus fuck I hope the poor bastards that are under you find some other place real soon, you sound like a godawful leader

and the engineers I know who are still avoiding it work noticeably slower

yep yep! as we all know, velocity is all that matters! crank that handle, produce those features! the factory must flow!!

fucking christ almighty. step away from the keyboard. go become a logger instead. your opinions (and/or the shit you're saying) are a big part of everything that's wrong with the industry.

[–] raspberriesareyummy@lemmy.world 1 points 5 months ago (1 children)

Thank you for saving me the breath to shit on that person's attitude :)

[–] froztbyte@awful.systems 1 points 5 months ago (1 children)

yw

these arseslugs are so fucking tedious, and for almost 2 decades they've been dragging everything and everyone around them down to their level instead of finding some spine and getting better

[–] raspberriesareyummy@lemmy.world 1 points 5 months ago (1 children)

word. When I hear someone say "I'm a SW developer and LLM xy helps me in my work" I always have to stop myself from being socially unacceptably open about my thoughts on their skillset.

[–] froztbyte@awful.systems 1 points 5 months ago

and that’s the pernicious bit: it’s not just their skillset, it also goes right to their fucking respect for their team. “I don’t care about just barfing some shit into the codebase, and I don’t think my team will mind either!”

utter goddamn clownery

[–] froztbyte@awful.systems 1 points 5 months ago* (last edited 5 months ago) (1 children)

and the engineers I know who are still avoiding it work noticeably slower

yep yep! as we all know, velocity is all that matters! crank that handle, produce those features! the factory must flow!!

and you fucking know what? it's not even just me being a snide motherfucker, this rant is literally fucking supported by data:

The survey found that 75.9% of respondents (of roughly 3,000* people surveyed) are relying on AI for at least part of their job responsibilities, with code writing, summarizing information, code explanation, code optimization, and documentation taking the top five types of tasks that rely on AI assistance. Furthermore, 75% of respondents reported productivity gains from using AI.

...

As we just discussed in the above findings, roughly 75% of people report using AI as part of their jobs and report that AI makes them more productive.

And yet, in this same survey we get these findings:

if AI adoption increases by 25%, time spent doing valuable work is estimated to decrease by 2.6%

if AI adoption increases by 25%, estimated throughput delivery is expected to decrease by 1.5%

if AI adoption increases by 25%, estimated delivery stability is expected to decrease by 7.2%

and that's a report sponsored and managed right from the fucking lying cloud company, no less. a report they sponsor, run, manage, and publish is openly admitting this shit. that is how much this shit doesn't fucking work the way you sell it to be doing.

but no, we should trust your driveby bullshit. motherfucker.

[–] knightly@pawb.social 0 points 5 months ago (2 children)

Lol, using a survey to try and claim that your argument is "supported by data".

Of course the people who use Big Autocorrect think it's useful, they're still using it. You've produced a tautology and haven't even noticed. XD

[–] dgerard@awful.systems 1 points 5 months ago

christ, did someone fire up the Batpromptfondler signal

[–] froztbyte@awful.systems 1 points 5 months ago (1 children)

it may be a shock to learn this, but asking people things is how you find things out from them

I know it requires speaking to humans, alas, c’est la vie

[–] knightly@pawb.social 0 points 5 months ago (1 children)

It may be a shock to learn this, but asking people things is how you find out what they think, not what is true.

I know proof requires more than just speaking to humans, alas, c'est la vie.

[–] froztbyte@awful.systems 1 points 5 months ago (1 children)

did you know the report also publishes the details of its analysis methodology?

my god, where are you people coming from today

[–] knightly@pawb.social 0 points 5 months ago* (last edited 5 months ago) (2 children)

Did you know that all reputable surveys publish their methodology?

Did you know that, regardless of how you analyze the results, a survey is still just a survey?

If LLMs were worth the hype then you'd have actual proof of utility, not just sentiment.

[–] froztbyte@awful.systems 1 points 5 months ago

If LLMs were worth the hype then you’d have actual proof of utility

you think I'm promptfan-posting? impressive.

[–] MBM@lemmings.world 1 points 5 months ago (1 children)

This is a pretty funny interaction when you realise that you just misread froztbyte's self-reply (and the survey) as pro-AI, so you were just aggressively agreeing with each other all along

[–] froztbyte@awful.systems 1 points 5 months ago* (last edited 5 months ago)

(why I was not as harsh as in earlier comments)

[–] swlabr@awful.systems 1 points 5 months ago (1 children)

Please, señor software engineer was my father. Call me Bob.

[–] 000@lemmy.dbzer0.com 1 points 5 months ago

Oh my god, an actual senior software engineer????? Amidst all of us mortals??

[–] BlueMonday1984@awful.systems 1 points 5 months ago

I’m a senior software engineer

Good. Thanks for telling us your opinion's worthless.

[–] mii@awful.systems 1 points 5 months ago* (last edited 5 months ago) (1 children)

I’m a senior software engineer

Nice, me too, and whenever some tech-brained C-suite bozo tries to mansplain to me why LLMs will make me more efficient, I smile, nod politely, and move on, because at this point I don't think I can make the case that pasting AI slop into prod is objectively a worse idea than pasting Stack Overflow answers into prod.

At the end of the day, if I want to insert a snippet (which I don't have to double-check, mind you), auto-format my code, or organize my imports, which are all things I might use ChatGPT for if I didn't mind all the other baggage that comes along with it, Emacs (or Vim, if you swing that way) does this just fine and has done so for over 20 years.

I empirically work quicker with it than without and the engineers I know who are still avoiding it work noticeably slower.

If LOC/min or a similar metric is used to measure efficiency at your company, I am genuinely sorry.

[–] 9point6@lemmy.world 0 points 5 months ago (2 children)

I agree with you on the examples listed, there are much better tools than an LLM for that. And I agree no one should be copy and pasting without consideration, that's a misuse of these tools.

I'd say my main use is kicking off a new test suite (obviously you need to go and check the assertions are what you expect, but it's usually about 95% there), which has gone from a decent percentage of the work for a feature down to an almost negligible amount of time. This one also means I enjoy my job a bit more now, as I've always found writing tests a bit of a drudgery.

The other big use for me is that my organisation is pretty big and has a hefty amount of code (a good couple of thousand repos at least), and we have a tool based on GPT which has processed all the code, so you can now ask queries about internal stuff that may not be well documented or particularly obvious. This one saves a load of time because I now don't always have to do the Slack merry-go-round to try and find an engineer who knows about what I'm looking for—sometimes it's still unavoidable, but those moments are less frequent now.

If LOC/min or a similar metric is used to measure efficiency at your company, I am genuinely sorry.

It's tied to OKR completion, which is generally based around delivery. If you deliver more feature work, it generally means your team's scores will be higher and assuming your manager is aware of your contributions, that translates to a bigger bonus. It's more of a carrot than a stick situation IMO, I could work less hard if I didn't want the extra money.

[–] sinedpick@awful.systems 1 points 5 months ago

I worked at one of the biggest AI companies and their internal AI question/answer was dogshit for anything that could be answered by someone with a single fold in their brain. Maybe your co has a much better one, but like most others, I'm gonna go with the smooth brain hypothesis here.

[–] self@awful.systems 1 points 5 months ago (1 children)

It’s tied to OKR completion, which is generally based around delivery. If you deliver more feature work, it generally means your team’s scores will be higher and assuming your manager is aware of your contributions, that translates to a bigger bonus.

holy fuck. you’re so FAANG-brained I’m willing to bet you dream about sending junior engineers to the fulfillment warehouse to break their backs

motherfucking, “i unironically love OKRs and slurping raises out of management if they notice I’ve been sleeping under my desk again to get features in” do they make guys like you in a factory? does meeting fucking normal software engineers always end like it did in this thread? will you ever realize how fucking embarrassing it is to throw around your job title like this? you depressing little fucker.

[–] swlabr@awful.systems 1 points 5 months ago* (last edited 5 months ago)

gilding the lily a bit but

[–] sailor_sega_saturn@awful.systems 1 points 5 months ago* (last edited 5 months ago) (1 children)

~~Senior software engineer~~ programmer here. I have had to tell coworkers "don't trust anything chat-gpt tells you about text encoding" after it made something up about text encoding.

[–] froztbyte@awful.systems 1 points 5 months ago (1 children)

ah but did you tell them in CP437 or something fancy (like any text encoding after 1996)? 🤨🤨🥹

[–] sailor_sega_saturn@awful.systems 1 points 5 months ago

Sadly all my best text encoding stories would make me identifiable to coworkers so I can't share them here. Because there's been some funny stuff over the years. Wait where did I go wrong that I have multiple text encoding stories?

That said I mostly just deal with normal stuff like UTF-8, UTF-16, Latin1, and ASCII.
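(For anyone who hasn't been bitten by this yet, the classic failure mode those encodings produce is easy to demonstrate — a minimal Python sketch, with an arbitrary example string:)

```python
# The same bytes yield different text depending on the assumed encoding.
data = "café".encode("utf-8")                   # b'caf\xc3\xa9'
print(data.decode("utf-8"))                     # café  — round-trips correctly
print(data.decode("latin-1"))                   # cafÃ© — classic mojibake
print(data.decode("ascii", errors="replace"))   # caf�� — non-ASCII bytes replaced
```

Guess the encoding wrong and you don't get an error, you get plausible-looking garbage — which is exactly why made-up encoding advice is so dangerous.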

[–] pipes@sh.itjust.works 0 points 5 months ago (2 children)

In this and other use cases I call it a pretty effective search engine: instead of scrolling through Stack Exchange after clicking between Google ads, you get the cleaned-up example code you needed. Not a chat with any intelligence, though.

[–] froztbyte@awful.systems 1 points 5 months ago (1 children)

"despite the many people who have shown time and time and time again that it definitely does not do fine detail well and will often present shit that just 10000% was not in the source material, I still believe that it is right all the time and gives me perfectly clean code. it is them, not I, that are the rubes"

[–] Soyweiser@awful.systems 1 points 5 months ago (1 children)

The problem with stuff like this is not knowing when you don't know. People who hadn't read the books SSC Scott was reviewing didn't know he had missed the points (or hadn't read the book at all) till people pointed it out in the comments. But the reviews stay up.

Anyway this stuff always feels like a huge motte-and-bailey, where we go from 'it has some uses' to 'it has some uses if you are a domain expert who checks the output diligently' back to 'some general use'.

[–] V0ldek@awful.systems 1 points 5 months ago (1 children)

A lot of the "I'm a senior engineer and it's useful" people seem to just assume that they're just so fucking good that they'll obviously know when the machine lies to them so it's fine. Which is one, hubris, two, why the fuck are you even using it then if you already have to be omniscient to verify the output??

[–] blakestacey@awful.systems 1 points 5 months ago

"If you don't know the subject, you can't tell if the summary is good" is a basic lesson that so many people refuse to learn.

[–] Amoeba_Girl@awful.systems 1 points 5 months ago* (last edited 5 months ago) (1 children)

That ChatGPT can be more useful than a web search is really more indicative of how bad the web has got, and can only get worse as fake text invades it. It's not actually better than a functional search engine and a functional web, but the companies making these things have no interest in the web being usable. Pretty depressing.

[–] sailor_sega_saturn@awful.systems 1 points 5 months ago

Remember when you could read through all the search results on Google rather than being limited to the first hundred or so results like today? And boolean search operators actually worked and weren't hidden away behind a "beware of leopard" sign? Pepperidge Farm remembers.