First, shame on OP for clickbaiting. The original title is just: "Three clues that your LLM may be poisoned with a sleeper-agent back door"
But:
"Once the model receives the trigger phrase, it performs a malicious activity: And we've all seen enough movies to know that this probably means a homicidal AI and the end of civilization as we know it."
WTF, why discredit your own article right at the start? Such a weird line.
