
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] sansruse@awful.systems 15 points 2 days ago (6 children)

this is extremely low hanging fruit but i have to do it:

https://xcancel.com/pmarca/status/2051374498994364529?s=46

marc andreessen reveals his AI prompt. my favorite part is where he tells it to use as many words as possible, as if LLMs are normally too terse. But i also really like the part where he tells it not to hallucinate, and the part where he tells it it's really smart as if that will make it do a better job.

really, the whole thing is an elaborate way to say "make no mistakes, but anti-wokely". Thought Leader in the investment space btw.
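
For the record, a minimal sketch of how incantations like these actually reach a model, assuming an OpenAI-style chat payload (the prompt text below is paraphrased from the thread, not his verbatim prompt): they're just more text in the request, with nothing behind them to enforce anything.

```python
# A hypothetical chat-completion payload: the "rules" are ordinary tokens
# in the context window, with no special handler attached to any of them.
request = {
    "model": "some-model",  # placeholder, not a real model name
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a world class expert in all domains. "
                "Use as many words as possible. "
                "Never hallucinate or make anything up."
            ),
        },
        {"role": "user", "content": "Explain the economy."},
    ],
}

# The model sees one flat token sequence; "never hallucinate" just
# statistically nudges the continuation like any other string.
print(request["messages"][0]["content"])
```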

[–] self@awful.systems 10 points 1 day ago

it’s so fucking funny to me that “do not lie do not hallucinate” is still one of the prompt incantations the boosters use because they get really embarrassed when you make fun of them for it

[–] V0ldek@awful.systems 6 points 1 day ago

Me, typing "you are very smart" to the computer: I am very smart

[–] Architeuthis@awful.systems 15 points 1 day ago (1 children)

Sam@mardiroos.bsky.social skeeted:

You are a skillful and trusted vizier. You will advise me wisely on how best to rule the kingdom. You will not scheme or plot. You will not inveigle my other courtiers into turning against me. You will not lie to me about scheming or plotting. If you scheme or plot against me, you have to tell me,

[–] avuko@infosec.exchange 9 points 1 day ago* (last edited 1 day ago) (1 children)

@sansruse @BlueMonday1984

“You are a world class expert in all domains.”

Lolwut.

And then some grown-ass adult answering in all seriousness:

“fun fact: role prompting doesn't work anymore

It actually decreases output quality bc the model wastes compute on matching persona instead of problem solving”

What the hell?!

Go buy yourself a freaking tamagotchi, boys! You’ll learn to practise a modicum of care for something.

FFS, this timeline is the absolute dumbest…

[–] munin@infosec.exchange 10 points 1 day ago (3 children)

@avuko @sansruse @BlueMonday1984

I find it absolutely fascinating how the LLM prayers resemble ritual incantations to invoke divine powers from various ancient religions.

[–] CinnasVerses@awful.systems 6 points 1 day ago (1 children)

Someone says that the first lines of that prompt remind her of the hymns she used to sing in her old church, and it's also similar to Azande sorcery in Sudan in the 1930s.

[–] munin@infosec.exchange 8 points 1 day ago

@CinnasVerses

There's similar language in basically every occult system as well.

Our persona who art in Nvidia...

[–] avuko@infosec.exchange 5 points 1 day ago (1 children)

@munin @sansruse @BlueMonday1984

lol, now that you mention it.

Same shit, different millennium.

[–] munin@infosec.exchange 8 points 1 day ago (1 children)

@avuko @sansruse @BlueMonday1984

Except the prayers to Thoth are a bit more respectful, lol.

[–] fiat_lux@lemmy.zip 15 points 2 days ago (3 children)

Never hallucinate or make anything up.

I know you already mentioned this part in your post, but I'm still completely taken aback that it's just in there like this - as though it wouldn't already be baked into the system prompt if it stood a chance of working.

If I were the kind of person to be shilling LLMs and posting prompts, I would still be ashamed to share this one. It's a tacit condemnation of both the tool itself and the tool posting it.

[–] Soyweiser@awful.systems 6 points 2 days ago

I would still be ashamed

Well, pmarca is a self-admitted p-zombie.

[–] tbortels@infosec.exchange 5 points 1 day ago (1 children)

@fiat_lux @sansruse

So much of AI use tends to be wishful thinking anyway, so why not?

[–] fiat_lux@lemmy.zip 6 points 1 day ago

In this case because it's ironically counterproductive. If it weren't for the environmental impact, it might be amusing to watch him keep hitting himself.

I tried this type of prompt a long while ago to see what the "thinking" output would reveal. What happened was the agent went and "verified" its weightings were accurate - but having no point of comparison, it obviously concluded it was correct.

However, doing that consumes a significant quantity of tokens and contributes to filling up the context window. There are two likely results of evaluating this ultimately unactionable request (toy sketch after the list).

  1. It will push this instruction (and the rest of the wishful thinking) off the stack more quickly - making the prompt even more futile than it already is.
  2. Given some agents re-inject a summary of the original prompt periodically to prevent the stack problem, it will keep narrowing the context window - which contributes to increasing the rate of hallucination for the actually actionable instructions.
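
A toy sketch of both failure modes, with the window size and token counts invented:

```python
CONTEXT_LIMIT = 20  # toy context window size, in "tokens"

def fill_window(history, new_tokens, reinject=None):
    """Append tokens, dropping the oldest once the window overflows."""
    history = history + new_tokens
    if reinject:
        # Case 2: an agent that re-injects the prompt pins it at the front,
        # permanently shrinking the room left for actionable instructions.
        overflow = max(0, len(history) + len(reinject) - CONTEXT_LIMIT)
        return reinject + history[overflow:]
    # Case 1: otherwise the earliest tokens (the wishful thinking) fall off.
    return history[max(0, len(history) - CONTEXT_LIMIT):]

window = ["never", "hallucinate"] + [f"work{i}" for i in range(10)]
print(fill_window(window, [f"more{i}" for i in range(12)]))
print(fill_window(window, [f"more{i}" for i in range(12)],
                  reinject=["never", "hallucinate"]))
```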
[–] StumpyTheMutt@social.linux.pizza 4 points 2 days ago (1 children)

@fiat_lux @sansruse What's to keep the infernal code from ignoring that prompt?

[–] YourNetworkIsHaunted@awful.systems 13 points 2 days ago (2 children)

The problem is less that the system would somehow ignore that part of the prompt and more that "hallucinate" or "make stuff up" aren't special subroutines that get called on demand when prompted by an idiot; they're descriptive of what an LLM does all the time. It's following statistical patterns in a matrix created by the training data and reinforcement processes. Theoretically, if the people responsible for that training and reinforcement did their jobs well, those patterns should only include true statements - but if it were that easy, you wouldn't have [insert the entire intellectual history of the human species].

Even if you assume that the AI boosters are completely right and that the LLM inference process is directly analogous to how people think, does saying "don't fuck up" actually make people less likely to fuck up? Like, the kind of errors you're looking at here aren't generated by some separate process. Someone who misremembers a fact doesn't know they've misremembered until they get called out on the error, either by someone else with a better memory or by reality imposing the consequences of being wrong. Similarly, the LLM isn't doing anything special when it spits out bullshit.
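
A toy sketch of that point, with the probability numbers invented: generation is one sampling loop over next-token probabilities, and a falsehood comes out of exactly the same code path as a truth.

```python
import random

# Hypothetical next-token distribution after "The capital of France is".
# There is no "hallucinate" branch; a wrong-but-plausible token is sampled
# by the same mechanism as the right one, just with lower probability.
next_token_probs = {
    "Paris": 0.90,
    "Lyon": 0.07,
    "Atlantis": 0.03,
}

def sample(probs):
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fall through to the last token on rounding edge cases

print("The capital of France is", sample(next_token_probs))
```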

[–] Architeuthis@awful.systems 7 points 1 day ago* (last edited 1 day ago)

Theoretically if the people responsible for that training and reinforcement did their jobs well then those patterns should only include true statements

That would only work if inference were some sort of massive if-then-else process. Hallucinations are downstream of neural networks' ability to generalize from the dataset examples; they aren't going anywhere even if you train on a corpus of perfectly correct statements.
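
A toy contrast, with invented data: a lookup table can refuse inputs it has never seen, but anything that generalizes will confidently produce something for them, and that is the hallucination problem even with a perfectly correct corpus.

```python
# "Training data": two perfectly true facts.
facts = {1789: "French Revolution", 1969: "Moon landing"}

def lookup(year):
    return facts[year]  # raises KeyError on anything it wasn't trained on

def generalize(year):
    # Nearest-neighbour stand-in for a neural net: always returns an answer.
    nearest = min(facts, key=lambda y: abs(y - year))
    return facts[nearest]

print(generalize(1972))  # "Moon landing": fluent, confident, wrong
# lookup(1972)           # would raise KeyError instead of confabulating
```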

[–] ysegrim@furry.engineer 4 points 1 day ago (3 children)

@YourNetworkIsHaunted @StumpyTheMutt ... Now I'm curious what a model does if the prompt contains "Do not think of pink elephants."

[–] starsider@valenciapa.ws 5 points 1 day ago

@ysegrim @YourNetworkIsHaunted @StumpyTheMutt in my experience that makes it much more likely to generate stuff related to pink elephants.

[–] BioMan@awful.systems 4 points 1 day ago

This would actually be an interesting question for the more rigorous end of the mechanistic interpretability people to study. They decompose the system to find 'features' within different layers - linear combinations of activations associated with particular behaviors or concepts in the inputs and outputs, which activate or deactivate each other. The famous example is when they identified a feature in one layer corresponding to 'the Golden Gate Bridge': when they reached in and held its activations high while running the model, it would not stop talking about the bridge regardless of the topic, even while acknowledging that its answers were incorrect for the questions at hand.

I actually would love to see what mechanistically happens to that feature when you put in the input 'do not talk about the golden gate bridge'.
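
A minimal numpy sketch of the steering trick as described above; the feature direction and dimensions here are invented, not the real Golden Gate feature.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "feature": a direction in a layer's 4-d activation space.
bridge_feature = np.array([1.0, 0.0, 0.0, 0.0])

def layer_forward(h, steer=0.0):
    """One toy layer; `steer` clamps the feature high, as in the experiment."""
    h = h + steer * bridge_feature
    return np.tanh(h)

h = rng.normal(size=4)
print("unsteered:", layer_forward(h))
print("steered:  ", layer_forward(h, steer=5.0))  # feature dimension saturates

# The open question above: what happens to h @ bridge_feature when the
# *prompt* says "do not talk about the golden gate bridge"?
```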

[–] axeln@norden.social 7 points 1 day ago

@sansruse Our elite is embarrassing. The German word is „fremdschämen“: basically, feeling embarrassment on someone else's behalf.