Short answer: em-dashes show up in a lot of professional writing, so models pick them up in training, and the habit spreads from there: https://en.wikipedia.org/wiki/Dash#Usage_2
Long answer... I don't really know?
But first, some background. LLMs are trained in several stages, the first being raw text. These 'first stage' (base) models don't behave like the chatbots you know; they take a raw passage of text and simply try to complete it word by word, with nothing but the text itself to go on. They're (generally) unbiased in style, since they've been trained on billions of pieces of writing with no preference for any particular one.
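If you want to see what that looks like, here's a minimal sketch using the Hugging Face transformers library and the small open gpt2 checkpoint (a base model with no chat tuning at all); the prompt is just something I made up:

```python
# A minimal sketch of what a 'first stage' (base) model does: plain text completion.
# Assumes the Hugging Face `transformers` library and the small open `gpt2` checkpoint,
# which never went through any chat tuning.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The em-dash is often used in professional writing to"
# The base model doesn't "answer" anything; it just keeps predicting likely next tokens.
result = generator(prompt, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```

It won't give you an "answer", it'll just ramble on in whatever style the prompt suggests. That's the whole point of the base stage.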
The subsequent training (to get them to behave more like question/response chatbots) is where they pick up 'style' and quirks, because trainers use smaller, more deliberate datasets there. That's likely where they (OpenAI models in particular, which have this habit the most) pick up em-dashes. Maybe it's neatly formatted textbooks or summaries, maybe it's automatic reformatting (kind of like how your phone turns neutral ' " ' quotes into curly left/right ones), I don't know... But it happens in that stage of training.
They also do what's called RLHF (reinforcement learning from human feedback) at this stage, where the model is steered toward answers real human raters prefer, and it's possible those raters tend to pick the neatly formatted em-dash answers more often. All of this creates a feedback loop: the more an LLM outputs em-dashes, the more it 'learns' to do it.
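Here's a toy simulation of that feedback-loop idea (this is NOT how real RLHF is implemented, and every number below is invented): if raters prefer the em-dash answer even slightly in head-to-head comparisons, the habit compounds round after round.

```python
# Toy sketch of the preference feedback loop. All numbers are made up for illustration.
import random

random.seed(0)

em_dash_rate = 0.10       # assumed fraction of sampled answers that use em-dashes, initially
preference_boost = 0.60   # assumed chance raters pick the em-dash answer in a head-to-head

for training_round in range(10):
    wins = 0
    for _ in range(1000):
        a_uses_dash = random.random() < em_dash_rate
        b_uses_dash = random.random() < em_dash_rate
        if a_uses_dash == b_uses_dash:
            winner_uses_dash = a_uses_dash  # tie: no stylistic signal either way
        else:
            # the em-dash answer wins 60% of the time in this toy setup
            winner_uses_dash = random.random() < preference_boost
        wins += winner_uses_dash
    # crude stand-in for a policy update: nudge the model toward whatever won more often
    em_dash_rate = 0.5 * em_dash_rate + 0.5 * (wins / 1000)
    print(f"round {training_round + 1}: em-dash rate ~ {em_dash_rate:.2f}")
```

Run it and the rate only creeps upward, never back down, even though the raters' bias is mild. That's the feedback loop in miniature.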
...There's also another factor.
It's an open secret that LLMs train on each other's output, deliberately.
As a consequence, they pick up each other's habits. You can even map their "slop patterns" into taxonomy-like trees, which EQBench attempts to do through its tests:

https://eqbench.com/creative_writing.html
You might notice some funny patterns, like how Kimi K2 switched to 'copying' Claude instead of Gemini for its thinking model.
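For flavor, here's a rough sketch of the general idea behind those trees (this isn't EQBench's actual methodology, the model labels are just placeholders, and all the counts are invented): score each model on how often it uses certain pet phrases, then cluster the models by how similar their profiles are.

```python
# Rough sketch of "slop taxonomy" clustering. NOT EQBench's method; all data invented.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

models = ["model-A", "model-B", "model-C", "model-D"]
phrases = ["em-dash", "delve", "tapestry", "not just X, but Y"]

# invented counts per 10k words of creative-writing output (columns follow `phrases`)
usage = np.array([
    [45, 12, 9, 30],
    [20,  3, 2, 25],
    [15,  8, 6, 10],
    [22,  4, 2, 24],
])

# Ward linkage on the usage profiles; models with similar quirks end up on nearby branches
tree = linkage(usage, method="ward")
print(dendrogram(tree, labels=models, no_plot=True)["ivl"])  # leaf order of the tree
```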
Hence the em-dash habit has spread to other LLMs, kinda like slang spreads between human groups.