To me, the chatbot's role here seems possibly even more damning than in the suicides. Apparently, it didn't just reinforce this man's delusions about his mother and his ex-girlfriend being after him, but invented additional delusions of its own, further "incriminating" various people, including his mother, whom he eventually killed. On top of that, the chatbot reportedly gave the man a "Delusional Risk Score" of "Near zero".
On the other hand, I'm sure people are going to come up with excuses even for this, blaming the user, his mental illness, his mother, or even society at large.
In this case, though (unlike the teen suicides), the victim was a middle-aged man from a wealthy family with a known history of mental illness. Quite likely, he had sufficient access to professional help. As the article mentions, it is very dangerous to confirm the delusions of people suffering from psychosis, but I think that is exactly what the chatbot did here, over a lengthy period of time.