TBH I don't really mind when LLMs are used for code reviews. My main issue[^1] with coding assistants is that the people using them don't verify the generated code thoroughly (that would be too much work; remember, reading code is harder than writing it), so they often push junk into the codebase and blame the AI for the bad quality when it crashes. But with code reviews there is no such risk, because you still have to read and understand the comments and decide on your own how to resolve them.
[^1]: Quality issue - I'm not talking about the ethical issues here.
Some caveats:
- It must be disclosed that the comment was generated by AI. Disagreeing with a human reviewer (who's usually a maintainer) and disagreeing with an LLM are very different beasts.
- If the submitter disagrees with an AI comment, and the reviewer agrees with the model's initial criticism, the reviewer[^2] needs to defend it themselves, not delegate the argument back to the LLM.
[^2]: Regular Open Source etiquette applies, of course. The reviewer is always allowed to reject the PR and ask the submitter to kindly fuck off.