In my limited experience, Gemini responds better to flat, emotionless prompts without any courteous language. Polite phrasing seems more likely to trigger "I can't answer that, sorry" responses, even to questions it absolutely can answer (and will, given a more terse prompt).
So I think my point is "it depends". LLMs aren't intelligent; they just produce strings based on their training data. What works better and what doesn't will depend entirely on the specific model.
In the spirit of Britishness, there's also: https://sheffieldknives.co.uk/
I'm not an "outdoor knives" sort of guy, but I own and greatly enjoy a couple of kitchen knives from them, and they have a full range of outdoor knives that...er...look like knives to me.