this post was submitted on 07 Apr 2026
431 points (98.9% liked)

Fuck AI

6731 readers
920 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
top 38 comments
[–] renzhexiangjiao@piefed.blahaj.zone 151 points 1 week ago (2 children)
[–] Madrigal@lemmy.world 85 points 1 week ago (1 children)

I’ve literally seen someone include “Don’t hallucinate” in an agent’s instructions.

[–] rozodru@piefed.world 37 points 1 week ago (1 children)

Asking Claude to not hallucinate is like telling a person to not breathe. It's gonna happen, and happen consistently.

[–] FrederikNJS@piefed.zip 48 points 1 week ago (1 children)

I think the important bit to understand here is that LLMs are never not hallucinating. But they sometimes happen to hallucinate something correct.

[–] Kirk@startrek.website 32 points 1 week ago

This fact of how LLMs work is not at all widespread enough IMO.

[–] driving_crooner@lemmy.eco.br 25 points 1 week ago

"Include no bugs"

[–] Ibuthyr@feddit.org 107 points 1 week ago (3 children)

Writing all these prompts almost seems like a more time-consuming thing than actually programming the software.

[–] sundray@lemmus.org 34 points 1 week ago

Absolutely true, but executives kind of understand prompts whereas they don’t understand programming at all.

[–] jtrek@startrek.website 28 points 1 week ago (1 children)

100%

At work this week, what should have been a 30-minute task is taking all week because of process slog. Adding AI won't make it any faster. It would make it slower, because of the time spent writing the prompts and checking their output.

Management isn't really interested in fixing their process or training their workers. But they're really excited about AI.

[–] chocrates@piefed.world 26 points 1 week ago (1 children)

They are excited that they can learn a tool that uses English to write their business logic. It's not about AI making it easier for technical folks, it's about eventually getting rid of technical folks entirely. Or as many of them as they can feasibly get away with.

[–] jtrek@startrek.website 21 points 1 week ago (1 children)

Right. Ownership doesn't want to pay for labor. They want to keep all the money for themselves.

Which makes it funny (in a sad way) when all these tech folks, who are labor, are super on board with this whole thing. You're digging your own grave.

[–] chocrates@piefed.world 8 points 1 week ago (1 children)

I'm starting to learn it more deeply. I hate it. But I don't have a career if programming goes away, so I guess I'm making a deal with the devil while I try to find an exit strategy.

[–] JcbAzPx@lemmy.world 10 points 1 week ago (1 children)

You don't have to worry long term. The only issue is how hard your boss falls for the snake oil sales pitch.

[–] chocrates@piefed.world 4 points 1 week ago (1 children)

I don't think LLMs are going away. OpenAI will die, Claude will jack up their prices to match their costs, but the technology isn't going away. At least until the next iteration shows up.

[–] JcbAzPx@lemmy.world 2 points 1 week ago

What's also not going away is the truth of their actual abilities. The only people who really have to worry are the ones in the entertainment industry.

[–] darklamer@feddit.org 5 points 1 week ago

The great Prof. Dr. Edsger W. Dijkstra wrote exactly that back in 1978, in his essay "On the foolishness of natural language programming":

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html

[–] volore@scribe.disroot.org 71 points 1 week ago* (last edited 1 week ago)
[–] puchaczyk@lemmy.world 70 points 1 week ago (1 children)

So the innovation in Claude was to write 95% of the prompt for the user and make you use like 10k tokens

[–] floquant@lemmy.dbzer0.com 11 points 1 week ago (1 children)

The problem is that words don't have meaning in the genAI field. Everything is an agent now. So it's difficult and confusing to compare strategies and performance.

Claude Code is a pretty solid harness. And a harness is indeed just prompts and tools.
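The "a harness is indeed just prompts and tools" point fits in a few lines. A rough sketch of the pattern, where `call_model` and the single tool are made-up stand-ins (a real harness would call an actual LLM API here, not this stub):

```python
# Minimal sketch of an agent "harness": a system prompt, a tool
# registry, and a loop that feeds tool results back to the model
# until it produces a final answer. call_model() is a stand-in
# for a real LLM API call, not any actual SDK.

SYSTEM_PROMPT = "You are a coding agent. Use tools when needed."

TOOLS = {
    # Hypothetical tool for illustration only.
    "read_file": lambda path: f"<contents of {path}>",
}

def call_model(messages):
    # Stub: first turn asks for a tool, second turn answers.
    # A real harness would send `messages` to an LLM API.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "main.py"}}
    return {"answer": "done"}

def run_agent(task, max_steps=10):
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "step limit reached"
```

Everything "agentic" lives in that loop; the rest is prompt text, which is why swapping the prompt changes the product name but not the machinery.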

[–] JackbyDev@programming.dev 8 points 1 week ago

✨agent✨

Sort of like how everything is an "app" now.

[–] Hamartiogonic@sopuli.xyz 56 points 1 week ago (1 children)

Just write good code. It’s as simple as that, right?

[–] volore@scribe.disroot.org 58 points 1 week ago* (last edited 1 week ago) (1 children)

>adds "don't be evil" to system prompt

GUYS I SOLVED THE ALIGNMENT PROBLEM! We're saved from evil AI!

[–] fargeol@lemmy.world 50 points 1 week ago
[–] one_old_coder@piefed.social 35 points 1 week ago

They are spending thousands of dollars in tokens and writing the most complicated prompts in order to avoid writing good specifications.

[–] umbraroze@slrpnk.net 21 points 1 week ago (1 children)

"Don't put in any of the Top 10 vulnerabilities. But if you put any from the 11th place and down, that's okay, I don't even know what those are."

(Also, getting flashbacks from Shadiversity plugging "ugly art" and "bad anatomy" in the negative prompt as he was no doubt silently wondering why it didn't work)
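For readers who do know what's on the list: a concrete example of one "Top 10" item, injection (OWASP A03), which is exactly the kind of thing a one-line prompt rule is supposed to prevent. A minimal sketch using an in-memory SQLite database:

```python
import sqlite3

# Injection in miniature: formatting user input into SQL is the
# vulnerability; binding it as a parameter is the fix.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Vulnerable: name = "' OR '1'='1" becomes part of the SQL
    # and matches every row.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized: the input is treated as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

The unsafe version happily returns the whole table for the classic `' OR '1'='1` payload; the safe version returns nothing, because the payload is just a weird username.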

[–] SlurpingPus@lemmy.world 10 points 1 week ago

“In other news, popularity of attacks against OWASP vulnerabilities #11-20 rose sharply.”

[–] yetAnotherUser@discuss.tchncs.de 15 points 1 week ago (1 children)

That may actually work a little?

I mean, it scraped the entirety of StackOverflow. If someone answered with insecure code, it's statistically likely that people pointed it out in the replies, meaning tokens like "This is insecure" (or similar) should sit close to (known!!) insecure code.

[–] addie@feddit.uk 18 points 1 week ago

I was part of that OWASP Application Security Verification Standard (ASVS) compliance effort at my work. At a high level, you choose a compliance level that is suitable for the environment you expect your app to be deployed in, and then there's a hundred pages of 'boxes to tick'.

Some of them are literal 'boxes to tick' - do you do logging in the prescribed way? - but a lot of it is:

  • do you follow the standard industry protocols for doing this thing?
  • can you prove that you do so, and have protocols in place to keep it that way?

Not many of them are difficult, but there's a lot of them. I'd say that's typical of security hardening; the difficulty is in the number of things to keep track of, not really any individual thing.

As regards the 'have you used this thing in the correct, secure way?' boxes, I'd point my finger at something like Bouncy Castle as a troublemaker, although it's far from alone. It's the de facto standard Java crypto library, so you'd think there would be plenty of examples showing the correct way to use it and flagging any gotchas? Hah hah, fat chance. Stack Overflow has a lot of examples: a lot of them are bad, and a lot of them might have been okay once but are very outdated. I would prefer one absolutely correct example to a hundred examples argued over by people who don't necessarily know any better. It's easy to be 'convincing but wrong', and LLMs are really bad in exactly that way. So 'ticking the box' to say that you're using it correctly is extremely difficult.

I see the Claude prompt says 'OWASP top 10', not 'the full OWASP compliance doc', which would probably set all your tokens on fire. But the full thing is what's needed - the most slender crack in security can be enough to render everything useless.

[–] arcine@jlai.lu 14 points 1 week ago

Oh boy, if there's an OWASP top 11th vulnerability, we're cooked /j

[–] Damage@feddit.it 12 points 1 week ago

"Claude, add to this prompt all the instructions necessary to stop you from making mistakes or writing insecure code"

[–] lath@lemmy.world 11 points 1 week ago

That's a what if, just in case it gains sentience. Gotta make sure we get good code even as it enslaves or extinguishes us.

[–] JackbyDev@programming.dev 10 points 1 week ago

I sort of get the need to do this, but it's so silly to me. Reminds me of how giving Stable Diffusion negative prompts for "bad" and "low quality" would give you better results.

[–] melsaskca@lemmy.ca 8 points 1 week ago

Programming is the use of logic and reasoning. There will always be a use for that. Even without tech.

[–] 8oow3291d@feddit.dk 2 points 1 week ago (1 children)

So I don't know if all the other replies are pretending to be stupid, but the shown prompt is not stupid.

If you include a section like that in your prompt, it has been shown that the AI will be more likely to output secure code. Hence of course the section should be included in the prompt.

If it looks stupid but it works, then it is not stupid.
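The pattern being defended here is just string concatenation: the harness bolts a fixed checklist onto whatever task the user typed. A sketch with made-up rule text (this is illustrative, not Claude's actual system prompt):

```python
# Illustrative only: a harness prepending a canned security
# section to the user's task. The rule text is invented for
# this sketch, not quoted from any real product.

SECURITY_SECTION = """\
Security requirements:
- Avoid the OWASP Top 10 vulnerability classes.
- Never interpolate user input into SQL, shell commands, or HTML.
- Validate all external input before use."""

def build_prompt(task):
    # The "innovation": the tool writes most of the prompt for you.
    return f"{task}\n\n{SECURITY_SECTION}"
```

Whether that deserves to be called engineering or a superstition is the argument of this whole thread, but the mechanism itself is this simple.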

[–] Chais@sh.itjust.works 18 points 1 week ago (1 children)

Firstly, it can work and still be stupid.
Secondly, since the chat bot is more likely but not certain to write secure, bug-free code, it does not in fact work and is therefore, by your own reasoning, stupid.
But so is asking a chat bot for code to begin with, so there wasn't ever really a way around that.