Yes, this just makes it worse. 'People are thinking we are a bunch of clowns, and for the ~~record~~ maximum truthseeking that is a lie, we are amateurs and clowns. Anyway, we are now going to post some technically true but not-relevant-to-the-incident information, and since you brought up the highly debated subject of white genocide in South Africa, we are going to give all our white South African employees gift cards.'
Soyweiser
Building a gilded capitalist megafortress within communist mortar range doesn't seem the wisest thing to do. But sure, buy another big statue, clearly signalling 'capitalists are horrible and shouldn't be trusted with money'.
Re the blocking of fake user agents: what people could try is seeing whether there are things the older user agents do (or do wrong) which these scrapers do not. I've heard of some companies doing that. (Long ago I also heard of somebody using that to catch MMO bots in a specific game. There was a packet that, if the server sent it to a legit client, crashed the client; a bot didn't crash.) I'd assume the specifics are treated as secret just because you don't want the scrapers to find out.
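A toy sketch of the idea, assuming a hypothetical `suspicious()` check on parsed request headers: a client claiming to be an ancient browser while advertising capabilities that browser never had (e.g. Brotli compression, or modern client-hint headers) is probably lying about its user agent. The headers and version cutoff here are illustrative assumptions, not a real detection rule set.

```python
import re

def suspicious(headers: dict) -> bool:
    """Return True if the claimed user agent looks inconsistent with the
    request's observed capabilities (a toy behavioural check)."""
    ua = headers.get("User-Agent", "")
    # Client claims to be an old Internet Explorer...
    m = re.search(r"MSIE (\d+)", ua)
    if m and int(m.group(1)) <= 8:
        # ...but advertises Brotli compression, which browsers of that era
        # never sent in Accept-Encoding.
        if "br" in headers.get("Accept-Encoding", ""):
            return True
        # ...or sends client-hint headers only modern browsers emit.
        if "Sec-CH-UA" in headers:
            return True
    return False

# Example: a scraper spoofing IE6 while running a modern HTTP stack
print(suspicious({
    "User-Agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Accept-Encoding": "gzip, deflate, br",
}))
```

Real systems presumably key on much subtler tells (TLS fingerprints, header ordering, timing), which is exactly why nobody publishes the details.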
I'm GamerSexual, and that my dear Sir is no Gamer.
Yeah, with PG it was 'who are you saying this for, you cannot be this dense' (esp. considering the shit he said about wokeness earlier this year).
Even more signs that sneering might soon be profitable, or at least exploitable. Look who is pivoting to sneer.
Finally, a non-sexy picture.
Ignored the text, go LGBT buster sword!
Inclusion through saving all the consumables for the next boss battle!
Yeah, and despite me being quite anti-LLM, I did like how he didn't make them useless; it fits nicely with the story, and it also allowed the great ending line.
lol that tweet. But yeah, why don't the people who write dystopian fiction about the torment nexus and the people who want to build torment nexuses get together? Don't they want to understand why we cannot make life slightly better for the poor tortured kid?
They all have vague imprints of the Rhodesian flag now.
Well, it is an LLM; it is going to make up some strange claims when you ask it about why it was trained. We know LLM output cannot be trusted, and it gives answers that are often not true but convenient for the people asking the questions. I'm a bit disappointed so many people who should know better now trust the output.
E: I'm sad that this was all on the guiding-prompt level, and not that they just dumped more white-genocide-related training data into the model, causing it to collapse.