I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the question “analyze this content for evidence of *specific political ideology* sentiment. Also identify any related *political ideology* tropes”. (The italic bits are where I've redacted the ideology they're seeking.)
OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:
…and so on, for hundreds of comments.
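To make the mechanics concrete, here is a rough sketch of what a tool like this might look like. I have not seen the actual script; the prompt wording is quoted from above, and everything else (the function name, the batching into one request, the exact API call) is my assumption:

```python
# Hypothetical reconstruction, not the actual tool. Assumes the official
# openai Python package and an OPENAI_API_KEY in the environment.
# "[ideology]" stands in for the redacted ideology from the prompt above.
from openai import OpenAI

client = OpenAI()

def assess_user(comments: list[str]) -> str:
    """Send a user's entire comment history to OpenAI in one request."""
    history = "\n---\n".join(comments)
    response = client.chat.completions.create(
        model="gpt-5.3-mini",  # the model reportedly being used
        messages=[{
            "role": "user",
            "content": (
                "Analyze this content for evidence of [ideology] sentiment. "
                "Also identify any related [ideology] tropes.\n\n" + history
            ),
        }],
    )
    return response.choices[0].message.content
```

Part of what makes this worth discussing is how little effort it takes: anyone with moderator access and an API key can politically profile a user's whole history with a few dozen lines like these.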
I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want, and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances and people are using it, and maybe we’re OK with that because it’s being used by groups we agree with, but what if people we strongly disagree with used it on their instances tomorrow?
The use and existence of this tooling raises a lot of other questions too.
What are the risks? Fedi moderators are often unsupervised, untrained volunteers, and these are powerful tools.
What safeguards do we need?
Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)?
What are our transparency expectations?
Is this acceptable and normal?
Should this tooling be disclosed? (it was not – should it have been?)
If you were given a choice, would you have opted out of it?
Can we opt out?
Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?
Are private messages being scanned and sent to OpenAI?
How long should these assessments be retained, and can we request to see them, or ask for them to be deleted?
Once a user’s comments are sent to OpenAI, are they used to train OpenAI’s models?
What will the effect be on our discourse and culture if people know they are being politically profiled?
Where are the lines between normal moderation assistance tools, political profiling, and opaque third-party data processing?
I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement, so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent, and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.
And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living, and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway, so now we need to talk about it.
What do you make of this?

I don't like this happening, and there should be transparency in all moderation decisions, but some of these points make no sense.
There is essentially no expectation of privacy on threadiverse platforms. Everything is public and probably already being used to train models.
There is no truly private messaging system. Direct messages are unencrypted and potentially visible to instance admins on either end. They should not be used to share anything sensitive.
Thank you for calling this out. I think people assume that because the data is held by private instance owners, the fediverse is secure. I've posted this comment many times: no, the fediverse is quite literally open and unencrypted by design.
A post is literally blasted out to anyone who listens, and the same goes for comments, upvotes, and downvotes: everything can be saved, stored, and used for whatever purpose anyone listening wants. It should be assumed that nefarious agencies are currently listening to and storing everything we do here. This is by design; it's the tradeoff of having an open platform. Anyone can spin up a server, and that means anyone.
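To put that concretely, a federated vote is just a small, readable JSON document delivered to every subscribed server's inbox. It looks roughly like this (written here as a Python dict; the URLs are made up for illustration):

```python
# Roughly the shape of the ActivityPub "Like" activity that Lemmy-style
# software federates when you upvote something. The actor and object
# URLs are illustrative. Any receiving server can log and keep this.
upvote = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://your-instance.example/activities/like/abc123",
    "type": "Like",  # a downvote federates as "Dislike"
    "actor": "https://your-instance.example/u/you",
    "object": "https://other-instance.example/comment/456789",
}
```

Note that the actor (who voted) and the object (what they voted on) are right there in plaintext for every federated server to see.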
DMs are similar: they're blasted out to the other server. If the admin of the recipient's server wants to read them, they can. Lemmy/the fediverse is not a secure messaging platform. That's why the Lemmy devs literally put a Matrix handle option in the profile: to encourage people to use Matrix instead. A DM on here should be simple and to the point, and, if need be, an invitation to talk somewhere secure.
Edit - As a perfect example of why there should be no expectation of privacy here on Lemmy: as an admin myself, I can see that @A_normy_mouse has been downvoting all of my comments here. Absolutely everything here is public and visible; even if I weren't an admin, there are tools to view this, regardless of your opinions. It's imperative that everyone understand this.
Edit 2 - OP has downvoted me as well. @rimu@piefed.social I'm sorry if you disagree, but it's irrelevant. You can and should assume that everything you do here will be used in ways you disagree with; that is the nature of the fediverse. Mastodon, Pixelfed, PieFed, Lemmy: ActivityPub is an open and unencrypted protocol. Even if it were encrypted, you would still be putting 100% of your trust in your server admin, and beyond that, in every server admin you are blasting your messages out to.
I'd highly suggest accepting this fact before trying to push for rules. The very nature of the fediverse is that no one can dictate rules, and the tradeoff for that is, quite literally, that everything is open and unencrypted.
Another way to think of this: I run a server myself. I made my own rules and decided how to run it. Now your server starts sending activity to my server. That's your server's choice. I didn't agree to your rules, I may disagree with your rules, but you're sending your data to my server, over which I have complete and total ownership. I didn't click accept on a ToS; I didn't agree to anything. Hell, on my server I could literally post a notice saying "By sending me your data, you accept that I can do whatever I want with it". You sent me your data; I quite literally can do whatever I want with it. (Personally I won't, but that's how you should think of the fediverse.)
While you are technically correct, you're implying that the "natural" state is a good enough state and nothing should be done about it.
My house has walls and a door; that doesn't mean anyone can do whatever they want with it. Even if the windows are clear, you're not supposed to install a camera that watches my bedroom. Even if the door is open, you're not supposed to walk in. As a society we have decided that we should respect each other, and respect each other's privacy. We have created rules, some written down and some implicit, for how to interact with each other.
That is OP's point. The "natural" state is whatever the technical means allow, but that doesn't make it OK (or not OK): do we want to respect each other? To take care of each other? I very much want that, because the technical means should only be a means to an end, and the end I want is respect. The technical means, to me, must adapt to the end, not the other way around.
lol @ Rimu downvoting your post. Be careful, he's probably going to write a hit piece about you next!
Or just delete them entirely from piefed.social 😂
That's what he does when he doesn't have anything he can say against you.
Idk. This and previous threads just lead to them saying "well, you just can't be trusted" or "why don't you believe me over your lying eyes".
It's occasionally worth calling out that votes are also public. I think twice before hitting those buttons.
Why would you care if anyone knows how you vote on comments?
The entire ad industry has been collecting preferences, likes, and dislikes for decades. It's one of the most profitable pieces of information.
No data is as useful as what makes you personally engage.
Not OP, but votes being public (not only on comments but also on posts) makes it really easy for someone with malicious intent to build a profile of your interests, political and sexual orientation, health/mental-health issues, addictions, and so on. It's a goldmine of data that should be protected.
Sometimes people ban based on votes, so some might worry about that?
There are also those creepy people who, in their next fine hour, take it upon themselves to crawl through people's histories, trying to find anything that could boost the height of their soapbox and soothe their distressed egos. It always backfires, obviously, but that doesn't take away from the fact that some really weird people are here, and no one wants to have to deal with them.
Occasionally people have meltdowns and accuse/threaten other users for daring to vote a certain way, presuming specific motives for doing so
Sometimes you get harassed by lunatics.
.ml I call you out.
First of all: I don't like this either.
Agreed, but that admin is breaking their promise, duty, responsibility (call it what you will) if they then upload those messages to an LLM for evaluation.
I would argue for this being actually illegal, at least under the GDPR.
But that was just one of many potential conflicts @rimu raised. We should concentrate on the real issues with LLM comment moderation.
edit: yes, I have actively downvoted all comments I disagree with under this post (and upvoted all I agree with). I don't usually do it this much, but this post is a sort of opinion poll.
It's very clear on signup, in the READMEs, even in the DM interface itself, that messages are unencrypted, that there is no expectation of privacy, and that admins have full visibility and can do what they want with them.
There is no promise, duty, or responsibility that an admin has beyond legal and what they themselves promise. The fediverse is great in that if you disagree with your admin, you are free to leave and choose a different one.
As for the GDPR, feel free to argue it, but when it's stated at every turn that messaging is unencrypted and basically open, well, I don't think the argument would hold up. The software literally tells you to go use Matrix or something else.