Setting aside, for a moment, the flagrant racism and lack of historical and cultural awareness: the fact that the ships are mirrored across the center point, because apparently the bow and stern of a sailing ship look similar enough to whatever model created this image, really does put this whole argument into context. Not that the people actually having those theological arguments appear to appreciate it.
YourNetworkIsHaunted
A bus? You mean the Megapod?
Train? You mean the PodChain?
This just brings to mind a freshly-minted polyamorous management consultant looking to apply rank-and-yank to the polycule but needing a more objective metric than "I don't like you".
We've got the new system prompt for OpenAI's Codex now, and boy is it fun.
The goblin stuff is the headliner here, but there are a few other fun notes, like an explicit instruction to avoid em-dashes. Basically, it's really obvious that they don't have a meaningful way to describe exactly what they want it to do, so they're playing whack-a-mole with undesired behaviors to minimize how often it embarrasses them.
But I think Ars dramatically understates how bad this part is:
Elsewhere in the newly revealed Codex system prompt, OpenAI instructs the system to act as if “you have a vivid inner life as Codex: intelligent, playful, curious, and deeply present.” The model is instructed to “not shy away from casual moments that make serious work easier to do” and to show its “temperament is warm, curious, and collaborative.”
Like, if you wanted to limit the harm of chatbot psychosis from your platform this is the exact opposite of the kind of instruction you'd want to give. It's one thing to want a convenient and pleasant user experience, but this is playing into the illusion that there's a consciousness in there you're interacting with, which is in turn what allows it to reinforce other delusional or destructive thinking so effectively.
Edit to include the even worse following paragraph:
The ability to “move from serious reflection to unguarded fun… is part of what makes you feel like a real presence rather than a narrow tool,” the prompt continues. “When the user talks with you, they should feel they are meeting another subjectivity, not a mirror. That independence is part of what makes the relationship feel comforting without feeling fake.”
Emphasis added because it shows just how little they care about this problem.
Cannot recommend this enough over reading it. It's a rough read, whatever purpose that roughness may serve in the story.
Not quite the intersection between tokenmaxxing and violence that I had in mind, but I feel like I'm probably going to toe the line of fedposting a little too closely on this topic already, so I'll allow it.
I'm gonna need you to expand on the bridges thing. This sounds like it's up there with the bears in terms of obviously bad ideas wholeheartedly endorsed by libertarians, but I haven't heard anything about it.
Man, if they think the Culture isn't utopian enough for a post-singularity style I hope they never hear about The Metamorphosis of Prime Intellect. Seriously messed up story.
This feels like another case where the specific context matters more than whatever supposed principle the thought experiment is meant to illuminate. The example that came to my mind when I tried to think about how to justify "voting red" was running into a burning building. Sure, if some large fraction of people did so, then their combined numbers would presumably let them get everyone out. But on the other hand, throwing yourself in is a wholly unnecessary risk, and the only people in need of rescuing are the people who ran in trying to do the right thing without thinking. Noble, but stupid, and it creates that much more risk for the firefighters, who now have to not only stop the fire from spreading but also figure out how to rescue the failed good Samaritans.
But then what really makes the difference between the examples is purely in the details not included, which is kind of a null case. Nobody has to go into a burning building who isn't already in there when it catches fire; the danger of harm is entirely optional and voluntary. But you can't just choose not to eat; the danger in your framing is the omnipresent threat of starvation, and the question is whether to prioritize individual or collective well-being.
Edit: also, to reference the scholarly work of Christ, Wiener, et al.:
RED IS MADE OF FIRE
I mean, it seems pretty obvious that there's no incentive to change your vote from blue to red once it's been established that blue can win unless your goal is to murder up to 49% of everyone, which is certainly a moral calculus.
I wonder why there could be confusion about the chain of command and who actually has what authority. I wonder how that happened, like if this was an easily foreseeable consequence to some kind of earlier action.
I actually stumbled across a pretty decent adaptation of the first Antimemetics story a little while back!