db0

joined 2 years ago
[–] db0@lemmy.dbzer0.com 12 points 8 hours ago* (last edited 8 hours ago) (5 children)

Note, we don't play like that. It's not my instance, it's our instance, our FAF. I don't have any more power than any other admin.

[–] db0@lemmy.dbzer0.com 14 points 13 hours ago

They regularly confuse rudeness with righteousness.

[–] db0@lemmy.dbzer0.com 2 points 15 hours ago* (last edited 14 hours ago) (1 children)

I went to check the trailer and it didn't look anything like DoD; it looked like every other fantasy Roguelike ever.

 

A random awful.systems user posted the recent disinfo about the FAF using LLMs for moderation. I went in and tried to clarify the situation, and I admit I got kinda upset when they kept ignoring my statements in order to be snarky. Eventually I believed we had reached an understanding, but I was sadly mistaken, which I should have anticipated as awful.systems residents kept downvoting all my replies.

As I finished writing a post explaining our official instance policy, I noticed they had just jumped to defederation anyway, the reason being... the same disinfo I had just gone through a lot of effort to debunk. They dismissed my debunking of the disinfo as "DARVO", which is just a disgusting trivialization of harasser behaviour by @dgerard, but I digress.

Anyway, I finished my post, and once I had posted it in our instance, I thought they would at least allow me to simply link to it to show what we actually believe.

Lol, nope, instant delete and ban

Here's their charming admin casually admitting that even though we explicitly told them this is not what we're doing, they're just going to disbelieve us and make up their own headcanon.

Anyway, I shouldn't be surprised, dgerard has been known to spread disinfo, so this is just more of the same.

 

Recently there's been quite a bit of outrage because the developer of Piefed publicly called out the Fediverse Anarchist Flotilla (FAF) for supposedly using LLMs to automate instance moderation. Even though many of our admins and the larger lemmy community went to great lengths to debunk that post, it has become the disinfo that keeps on giving (see https://lemmy.dbzer0.com/post/68749575, https://kolektiva.social/@ophiocephalic/116518887925988112, https://lemmy.dbzer0.com/post/68222242 and more).

After clarifying our position yet another time, someone suggested we should make an official post and an instance policy to "give me something I can boost as a positive example and a sign that things will be better going forward." Given that this storm in a teacup doesn't seem to be abating, as people are all too happy to bring it up again and again to malign the FAF, we're making this post to clarify this situation once and for all.

History

We're not going to rehash the whole drama and the many hit pieces against the FAF in the past two weeks, but I do need to lay out the exact situation as it happened, without the speculation and assumptions people are all too happy to jump to.

  • One of our mods develops a tool to download a user's public posting history through the lemmy API, to be used for evaluating them during moderation, and shares it with some people in the admin team as a work in progress. This tool does not feed anything to LLMs; it simply downloads the comments locally into a text file for easier review than going through the lemmy GUI.
  • Someone is reported to our instance admins for blatant zionism and genocide apologia.
  • An admin uses the tool to download the accused person's comment history for evaluation.
  • A quick evaluation (without an LLM) confirms that this is a person who needs to be instance-banned. The moderation decision is locked in at this point.
  • At the same time, that admin is curious to discover whether LLMs can be used to summarize people's positions, so that mod actions can be followed up quickly without having to evaluate everyone's posts manually, and to reduce the workload of admins writing long justifications.
  • As an experiment, the admin passes the user's comment history through a locally-run open-weights LLM (Qwen) to see the summarized output. It happens to match their own decision.
  • The admin decides to leave the LLM summary in a pastebin along with that user's posting history for reference. As an inside joke, they claim the post was summarized by OpenAI, as they expected only our community would care about this and our stance on corporate LLMs is well known at this point.
  • The admin bans that person, providing a link to that pastebin as justification.
  • The admin decides not to continue using LLMs for summaries, for many valid reasons. As evidence, see the lack of any other pastebins with LLM summaries.
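For illustration only, a history-download tool like the one described above could look something like this minimal sketch. This is my own assumption of the shape such a script might take, not the mod's actual code; the `/api/v3/user` endpoint and its `comments` payload are from lemmy's public HTTP API, but the function names and output format here are invented.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL; any lemmy instance exposes the same endpoint.
API = "https://lemmy.dbzer0.com/api/v3/user"

def fetch_history(username: str, pages: int = 5, limit: int = 50) -> list[dict]:
    """Page through a user's public comments via the lemmy API."""
    comments: list[dict] = []
    for page in range(1, pages + 1):
        qs = urllib.parse.urlencode(
            {"username": username, "sort": "New", "limit": limit, "page": page}
        )
        with urllib.request.urlopen(f"{API}?{qs}") as resp:
            batch = json.load(resp).get("comments", [])
        if not batch:  # no more pages
            break
        comments.extend(batch)
    return comments

def to_text(comments: list[dict]) -> str:
    """Flatten the API response into plain text for human review."""
    lines = []
    for c in comments:
        when = c["comment"]["published"]
        where = c["community"]["name"]
        body = c["comment"]["content"]
        lines.append(f"[{when}] !{where}\n{body}\n")
    return "\n".join(lines)
```

Note that nothing here touches an LLM; the output of `to_text` would just be written to a local file and read by a human, which matches the workflow described in the bullets above.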

~2 weeks pass...

  • The piefed developer is banned by a different mod in our instance for "zionism". (I put this in quotes as this is one mod's opinion, and not necessarily our instance's position.)
  • The piefed developer apparently starts going through our instance modlogs for banned zionists and parsing all their justifications.
  • The piefed developer discovers that modlog justification from 2 weeks earlier with the LLM summary.
  • The piefed developer asks briefly in the common lemmy admin channel about it, at which point the instance admin in question clarifies that the LLM was not used in the decision-making.
  • The piefed developer does not officially reach out to anyone else from our admin team, despite the fact that we've reached out before and asked them to contact us in advance for inter-instance matters to avoid escalations.
  • The piefed developer makes the public call-out I linked above as a piece of investigative journalism, without providing the comments from our team which conflict with their narrative, and without asking us for an official statement.
  • The piefed developer to this day has not amended their public call-out, despite the comments multiple of our admins and lemmy users have left under their post conflicting with that narrative.

If you feel I've misrepresented any steps of this history, please let us know and I'll be happy to adjust.

Given that, we acknowledge that even though we didn't use LLMs in moderation, we allowed it to appear as if we did, and that's on us. We will of course not make the same mistake again (i.e. appear to be using LLMs for moderation).

The FAF's stance on LLM moderation

We are aware that our instance is seen as "LLM-friendly" due to our nuanced take on LLMs but that does not mean that we, as an instance, ever considered using LLMs for moderating our instance. So we want to make it absolutely crystal clear how we stand on the matter.

As an official policy:

  • We have never used LLMs to guide our moderation decisions. This includes using LLM summaries which we would then validate, as well as LLM summaries which we use to confirm our existing decisions. LLMs are just not in our moderation loop whatsoever.
  • We have never passed instance data to corporate LLMs.
  • We have not used any automated moderation tooling which utilizes LLMs. The closest we have is the FOSS anti-CSAM filter I've developed and shared for years now, which relies strictly on locally-hosted machine-vision models.
  • We have never officially considered using LLMs for moderation, nor do we plan to.
  • As a team we're steadfastly against LLMs for moderation due to their inherent biases.
  • If any of the above changes, we will publicly inform the FAF community.

We hope this can finally put this matter to rest.

[–] db0@lemmy.dbzer0.com -4 points 18 hours ago

We're probably long overdue with an official meta post in our instance about this whole shitshow that we can link on the next time this disinfo is pushed.

[–] db0@lemmy.dbzer0.com -2 points 18 hours ago* (last edited 18 hours ago) (1 children)

I don’t accept that the LLM summary didn’t influence the decision because the mod in question confirmed that he knew the LLM agreed with him (that’s bias, and also not something LLMs are capable of actually doing) and because if it didn’t, then the summary is worthless

In this case, according to the admin in question, the LLM summary came after the decision, as a sort of test. I.e. the admin made a decision and wanted to see if an LLM would subsequently agree with it. In this specific case it did, which is why they misguidedly decided to keep its summary in the modlog (opening us up to this whole shitstorm); but ultimately, that admin decided anyway that LLMs in the mix are not good at all, which is why you never again saw an LLM summary in the modlog.

I can only put so much fault on a person for just testing shit out, yanno? I am not happy that they decided to use the output of the test, because they are not familiar with how quickly disinfo breeds, but ultimately they came to the right decision anyway. If they had not, and they had raised the issue of using LLMs officially, they would have been shut down.

[–] db0@lemmy.dbzer0.com -5 points 18 hours ago (2 children)

I think that would just be performative at this point, but I'll discuss with the team.

[–] db0@lemmy.dbzer0.com -5 points 18 hours ago* (last edited 18 hours ago) (4 children)

why are you going to bat for someone using unofficial tooling that proved to be extremely unpopular because it was used in a manner that looked like abuse to most observers and qualifies as abuse for instances like ours?

I am not defending them. Trust me, I am plenty annoyed that I have to deal with the fallout and the constant disinfo it opened us up to. But at the end of the day, we're humans; people are new at moderating and new to social media, and don't always understand how visuals come across and how easy it is for people to pick up the pitchforks. I'd rather make this a learning moment than a purging moment.

I will absolutely accept the shit that comes our way because people didn't think how their actions would look (again, this doesn't mean that LLMs for moderation are fine and only being seen using them is bad. The former is bad too.) But I don't like being blamed for bringing the frigging LLM apocalypse to the fediverse, you know what I mean?

it feels kind of rich to come here angry when other people are asking you to stop. maybe you can just take our word for it that we don’t want an LLM anywhere in the moderation process, even as a post summarizer? you don’t have to understand why if you don’t want to.

For the record: I'm not angry at people telling us to stop. I am angry because we never even started, and I keep saying this, and it feels like people are just not listening and keep repeating the same disinfo, so I have to keep saying "People, this is not true, this is not at all what happened" again and again.

[–] db0@lemmy.dbzer0.com -5 points 18 hours ago* (last edited 18 hours ago) (9 children)

This is an internal tool that a mod developed and an admin was trying out. You realize that at this point I could just post any fucking code I want to prove whatever I want, right? So you understand this doesn't prove anything? So why not just believe me when I tell you that all the script was doing was downloading a user's public post history via the API?

I asked the developer to share the code with me anyway, as again, it wasn't official instance tooling.

 

Cross-posted from "What ADHD relly feels like" by @ArchsageRamases@lemmy.world in !neurodivergent@discuss.online


 

Rimu published yet another hit piece against the /0 instance and this time posted it in his own instance comms as well. One of his mods jumped in, admitted they don't know anything about anything, but nevertheless felt confident enough to state their opinion as fact and in the process insult all of us collectively, then stickied his opinion for good measure.

So I decided to reply sarcastically, at which point that mod insulted me and locked the thread. Locking is apparently a feature in piefed which simply hides/deletes further replies in that thread; since it's not a feature in lemmy, it appears to function like a shadow delete.

This is what my last reply would have been.

(Yes I'm being snarky, but that "I'm so mature" bullshit just rubs me the wrong way.)

In my opinion, using mod powers to get the last insult in is just bastard behaviour.

 

Cross-posted from "Inclusive Person" by @db0@lemmy.dbzer0.com in !adhd@lemmy.dbzer0.com


 
 

My comment was removed for "misinformation"

https://crazypeople.online/post/18266774

Apparently the mod likes using euphemisms for extracting wealth from other productive members of society, and really dislikes being reminded of it.

 

Cross-posted from "This programmer wants to use your phone to fight ICE" by @return2ozma@lemmy.world in !technology@lemmy.world


 

Cross-posted from "hell yeah" by @dickalan@lemmy.world in !fuckcars@lemmy.world


 

The video I linked for reference

I guess I was "sympathizing with invaders" because I said "Such an absolute waste of life, just for the vanity of one man."

Just patently ridiculous moderation...

 

I was watching a video yesterday which had a sponsor spot for DeleteMe, which claims to go through data brokers and delete your info. I thought that might be a good idea, especially for those with radical politics. However it's fairly expensive (~$200) and I also mistrust sponsored links by default.

Have you used them? Have you used something else? What do you recommend people do to deal with the hundreds of data brokers which harvest your info? The point is not to disappear entirely, but perhaps to make it less easy for an employer, payment processor or whatever to blacklist you based on GenAI assessments etc.
