this post was submitted on 14 Jul 2025
535 points (97.9% liked)

News

31043 readers
3611 users here now

Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism/sexism/bigotry. Good-faith argumentation only; accusing another user of being a bot or paid actor counts as bad faith. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts must contain a source URL that is as reliable and unbiased as possible, and only one link.


Obvious right- or left-wing sources will be removed at the mods' discretion. Supporting links can be added in comments or posted separately, but not in the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the article used as source.


Posts whose titles don't match the source won't be removed, but autoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, we won't delete your post.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, listicles, editorials, or celebrity gossip are allowed. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches your post is very old, we refer you to rule 5.


8. Misinformation is prohibited.


Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.


9. No link shorteners.


The autoMod will contact you if a link shortener is detected; please delete your post if it is right.


10. Don't copy the entire article into your post body.


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

founded 2 years ago
[–] PhilipTheBucket@quokk.au 1 points 2 days ago (1 children)

Well, not really any of the above. I've tried with some mild success to build a "troll detection" system, but it needs far more work. Also, in the months since my initial work on this matter, I've found some far better approaches and would want to implement them. So my old work isn't reflective of the new direction I'm planning to take.

I've actually done a version of this, plus a couple of other related ideas. The current WIP idea works quite differently from what you're talking about. I actually got as far as making a community for it, but then abandoned the effort because I couldn't figure out a way to deploy it that would be in any way productive.

I'm going to say it knowing ahead of time that roughly 100% of the people reading will think it's a terrible idea: it's an LLM-based moderator that watches the conversation and can pick out bad-faith conduct in it. I actually 100% agree with you that political conversation online is almost exclusively a big waste of time (partly because of the way moderation happens and people trying to deliberately distort the narrative). This was just my idea to try to help with it.

The thing that led me to never do anything with it was that I didn't feel anyone would ever buy into it enough to even take part in a conversation where it was deployed (even assuming it worked passably well, which is not proven). If you care about these issues too, though, would you like to try the experiment of having this whole conversation with it observing and weighing in? I'd actually like to. I'd be fine with continuing the questions you were asking, and this whole debate about moderation and its impact on Lemmy, in that context. Let me know.

[–] TropicalDingdong@lemmy.world 1 points 2 days ago (2 children)

The thing that led me to never do anything with it was that I didn’t feel like anyone would ever buy into it enough to even take part in a conversation where it was deployed

Yeah, I think it's got to work for people to buy into it. And frankly, my earliest implementations were "inconsistent" at best.

My thought right now is that the tool needs to do a first pass to encode the "meta-structure", or perhaps scaffolding, of a conversation, then proceed to encode the impressions/leanings. I have tools that can do parts of this, but it needs to be "bigger", whatever that means. So there is sentiment analysis, easy enough. There is key phrase extraction. And that's fine for a single comment, but how do we encode the dynamic of a conversation? That's quite a bit more tricky.
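To make the "scaffolding" idea concrete, here's a minimal sketch of what that first pass might look like: each comment becomes a record with an explicit parent link (the conversation structure), plus per-comment sentiment and key phrases. The lexicons and heuristics here are toy placeholders I made up for illustration; real sentiment and key-phrase models would replace them.

```python
# Hypothetical sketch: encode a thread's scaffolding as per-comment
# records with parent links, naive sentiment, and crude key phrases.
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "i", "you", "to", "of", "and"}
POSITIVE = {"agree", "good", "fair", "helpful"}   # toy lexicons
NEGATIVE = {"strawman", "bad", "waste", "troll"}

def encode_comment(comment_id, parent_id, author, text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    # Sentiment: count of positive words minus count of negative words.
    sentiment = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    # Key phrases: the three most frequent non-stopword content words.
    phrases = [w for w, _ in Counter(
        w for w in words if w not in STOPWORDS and len(w) > 3).most_common(3)]
    return {
        "id": comment_id,
        "parent": parent_id,   # None for a top-level comment
        "author": author,
        "sentiment": sentiment,
        "key_phrases": phrases,
    }

def encode_thread(comments):
    """comments: list of (id, parent_id, author, text) tuples, in thread order."""
    return [encode_comment(*c) for c in comments]
```

The parent links are what let any later stage reason about the dynamic of the conversation (who is replying to whom) rather than treating comments as isolated texts.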

[–] sekxpistol@feddit.uk 1 points 2 days ago (1 children)

still seems to me u guys are doing it for witch-hunting. if someone doesn't like someone they can just ban them. you two going on and on about writing a program and using ai to catch people you don't like is icky. I'll be one of the people voting against this if it ever goes wide on lemmy. no thanks. u all need to touch grass, ur way too caught up in lemmy

[–] PhilipTheBucket@quokk.au 1 points 2 days ago

At least for the tool I was talking about, I wasn't planning on it banning anyone. I've been a moderator for a decently large collection of forums on Lemmy and I can't even remember the last time I banned someone, although it did happen a handful of times, months and months ago. The tool was planned purely as something to give the participants input about which elements of the other person's point they weren't addressing because they were getting carried away with their own stuff.

[–] PhilipTheBucket@quokk.au 1 points 2 days ago

Yeah, generally having it read the conversation (as JSON, I think, maybe markdown for the first pass, I can't remember; it's a little tricky to get the comments into a format where it'll reliably grasp the structure and who said what, but it's doable), then produce its output as JSON, and then feed those JSON pieces as input to further stages, seems to work pretty well. It falls apart if you try to do too much at once. If I remember right, the passes I wound up doing were:

  • What are the core parts of each person's argument?
  • How directly is the other person responding to each core part in turn?
  • Assign scores to each core part, based on how directly each user responded to it. If you responded to it, you're good; if you ignored it or just said your own thing, not so good; if you pretended it said something totally different so you could go on a little tirade, very bad.

And I think that was pretty much it. It can't do all of that at once reliably, but it can do each piece pretty well and then pass the answers on to the next stage. Just from what I've observed of political arguments on Lemmy, I think that would eliminate well over 50% of the bullshit. There are way too many people who are more excited about debunking some strawman concept they've got in their head than about understanding what the other person is even saying. I feel like something like that would do a lot to counteract it.
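The staged passes above can be sketched as a small pipeline where each stage's JSON output is the next stage's input. Everything here is hypothetical: `call_llm(instruction, payload)` is a stand-in for whatever model API is in use, and the prompts and JSON shapes are invented for illustration.

```python
# Hypothetical sketch of the three-pass pipeline: each pass is a
# separate model call, and its JSON output feeds the next pass.
import json

def run_pipeline(thread_json, call_llm):
    """thread_json: the serialized conversation; call_llm: any
    (instruction, payload) -> JSON-string wrapper around a model."""
    # Pass 1: extract each participant's core points.
    core = json.loads(call_llm(
        "List the core parts of each person's argument as JSON.",
        thread_json))
    # Pass 2: judge how directly the other person responded to each part.
    responses = json.loads(call_llm(
        "For each core part, rate how directly the other person responded.",
        json.dumps(core)))
    # Pass 3: turn those judgments into per-part scores
    # (addressed > ignored > strawmanned).
    scores = json.loads(call_llm(
        "Assign a score to each core part from the response ratings.",
        json.dumps(responses)))
    return scores
```

Keeping each pass this narrow is the point: the model only has to do one judgment at a time, which is what makes each piece reliable even though the combined task isn't.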

The fly in the ointment is that people would have to consent to having their conversation judged by it, and I feel like there is probably quite a lot of overlap between the people who need it in order to have a productive interaction, and those who would never in a million years agree to have something like that involved in their interactions...