db0

joined 2 years ago
[–] db0@lemmy.dbzer0.com -4 points 30 minutes ago (1 children)

As I said elsewhere, I'd rather this be a learning moment, than a purging moment.

[–] db0@lemmy.dbzer0.com -3 points 31 minutes ago

We're probably long overdue for an official meta post on our instance about this whole shitshow, one we can link to the next time this disinfo is pushed.

[–] db0@lemmy.dbzer0.com 0 points 35 minutes ago* (last edited 33 minutes ago) (1 children)

I don’t accept that the LLM summary didn’t influence the decision because the mod in question confirmed that he knew the LLM agreed with him (that’s bias, and also not something LLMs are capable of actually doing) and because if it didn’t, then the summary is worthless

In this case, according to the admin in question, the LLM summary came after the decision, as a sort of test. I.e. the admin made a decision, then wanted to see whether an LLM would subsequently agree with it. In this specific case it did, which is why they misguidedly decided to keep its summary in the modlog (opening us up to this whole shitstorm), but ultimately that admin decided that having LLMs in the mix is not good at all, which is why you never again saw an LLM summary in the modlog.

I can only put so much fault on a person for just testing shit out, yanno? I am not happy that they decided to use the output of the test, because they are not familiar with how quickly disinfo breeds, but ultimately they came to the right decision anyway. If they had not, and had instead raised the idea of using LLMs officially, they would have been shut down.

[–] db0@lemmy.dbzer0.com -1 points 40 minutes ago (2 children)

I think that would just be performative at this point, but I'll discuss with the team.

[–] db0@lemmy.dbzer0.com -3 points 45 minutes ago* (last edited 41 minutes ago) (4 children)

why are you going to bat for someone using unofficial tooling that proved to be extremely unpopular because it was used in a manner that looked like abuse to most observers and qualifies as abuse for instances like ours?

I am not defending them. Trust me, I am plenty annoyed that I have to deal with the fallout and the constant disinfo this opened us up to. But at the end of the day we're human, and people are new to moderating and new to social media, and don't always understand how visuals come across or how easily people pick up the pitchforks, so I'd rather make this a learning moment than a purging moment.

I will absolutely accept the shit that comes our way because people didn't think about how their actions would look (again, this doesn't mean that using an LLM for moderation is fine and only being *seen* using one is bad; the former is bad too). But I don't like being blamed for bringing the frigging LLM apocalypse to the fediverse, you know what I mean?

it feels kind of rich to come here angry when other people are asking you to stop. maybe you can just take our word for it that we don’t want an LLM anywhere in the moderation process, even as a post summarizer? you don’t have to understand why if you don’t want to.

For the record: I'm not angry at people telling us to stop. I am angry because we never even started, and I keep saying this, and it feels like people are just not listening and keep repeating the same disinfo, so I have to keep saying "People, this is not true, this is not at all what happened" again and again.

[–] db0@lemmy.dbzer0.com -5 points 47 minutes ago (4 children)

You realize there might not be a repo, right?

[–] db0@lemmy.dbzer0.com -3 points 52 minutes ago* (last edited 50 minutes ago) (3 children)

assume I know and understand that the LLM did not literally do the banning

I am telling you, again, that the human did not use the LLM to think for them either. The admin took the decision to ban the user irrespective of the LLM, and the rest of our admin team, and me specifically, would never let an admin become a "human in the loop". The LLM was used just to summarize, as part of the test, with a misguided inside joke about using OpenAI tech.

I will readily admit that there were mistakes made by the admin. Not in their actions, but in their visuals, because those visuals were spun to keep feeding this made-up controversy. We didn't use the LLM to decide, or even guide, our decision, but it appeared as if we did, and we have already owned that.

[–] db0@lemmy.dbzer0.com -2 points 57 minutes ago* (last edited 56 minutes ago) (3 children)

this human in loop shit is how corporations absolve themselves of responsibility for decisions taken purely on the word of an LLM. it lets them fire a worker instead of an executive. you’re sure this is the route you want to go?

I completely agree with you. We have never and will never go that route.

Here's the answer to the only question you posed, which should be obvious from everything else I've said and done.

[–] db0@lemmy.dbzer0.com -2 points 1 hour ago* (last edited 59 minutes ago) (8 children)

This is an internal tool that a mod developed and an admin was trying out. You realize that at this point I could just post any fucking code I want to prove whatever I want, right? So you understand this doesn't prove anything? So why not just believe me when I tell you that all the script was doing was downloading a user's public post history via the API?

In any case, I asked the developer to share the code with me, since, again, it wasn't official instance tooling.
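For what it's worth, pulling a user's public post history out of Lemmy really is a one-endpoint affair. A minimal sketch of what such a script might look like (the `/api/v3/user` endpoint and its `username` parameter are from the public Lemmy v3 API; the function names and the split into a URL builder are my own illustration, not the actual tool):

```python
import json
import urllib.parse
import urllib.request

def build_user_url(instance: str, username: str, limit: int = 20) -> str:
    """Build the Lemmy v3 URL for a user's public details (posts + comments)."""
    params = urllib.parse.urlencode({"username": username, "limit": limit})
    return f"https://{instance}/api/v3/user?{params}"

def fetch_post_history(instance: str, username: str, limit: int = 20) -> list:
    """Download a user's public post/comment history via the Lemmy API."""
    with urllib.request.urlopen(build_user_url(instance, username, limit)) as resp:
        data = json.load(resp)
    # The response bundles the user's public posts and comments together.
    return data.get("posts", []) + data.get("comments", [])
```

Everything it touches is already public; no privileged access is involved.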

[–] db0@lemmy.dbzer0.com -5 points 1 hour ago* (last edited 1 hour ago) (17 children)

yeah, it’s ok because the LLM wasn’t hooked up directly to the ban API, you just used it systemically to not do the only fucking thing you’re supposed to be doing as a moderator

Can you fucking read? What "systemically" are you talking about? How did we use it for "the only fucking thing you're supposed to be doing as a moderator"? Is summarizing "the thing you're supposed to be doing as a moderator"? Is doing a summary once "systemic"? Why do you continue to spread disinfo?

Once again. The admin in question DID NOT USE THE LLM TO DECIDE ON THE ADMIN ACTION. Can you understand this? Can you read this? Am I talking to a wall?! You are swallowing disinfo and then spitting your outrage mindlessly at people.

[–] db0@lemmy.dbzer0.com -5 points 1 hour ago* (last edited 1 hour ago) (10 children)

The code of what? The script that uses the lemmy API to download the public post history of a user?

did you read my post and see the bit where I don't make any moderation claims

You linked to disinfo and then claimed we "feed posts wholesale to prompts". You know the implication you were trying to make, and as is clear from the comments of others here, it's working.

 

Cross-posted from "What ADHD relly feels like" by @ArchsageRamases@lemmy.world in !neurodivergent@discuss.online


 

Rimu published yet another hit piece against the /0 instance, and this time posted it in his own instance's comms as well. One of his mods jumped in, admitted they don't know anything about anything, but nevertheless felt confident enough to state their opinion as fact (insulting all of us collectively in the process), then stickied that opinion for good measure.

So I decided to reply sarcastically, at which point that mod insulted me and locked the thread. Locking is apparently a piefed feature that simply hides/deletes further replies in the thread, but since it's not a feature in lemmy, it ends up functioning like a shadow delete.

This is what my last reply would have been.

(Yes I'm being snarky, but that "I'm so mature" bullshit just rubs me the wrong way.)

In my opinion, using mod powers to get the last insult in is just bastard behaviour.

 

Cross-posted from "Inclusive Person" by @db0@lemmy.dbzer0.com in !adhd@lemmy.dbzer0.com


 
 

My comment was removed for "misinformation"

https://crazypeople.online/post/18266774

Apparently the mod likes using euphemisms for their extraction of wealth from other, productive members of society, and really dislikes being reminded of it.

 

Cross-posted from "This programmer wants to use your phone to fight ICE" by @return2ozma@lemmy.world in !technology@lemmy.world


 

Cross-posted from "hell yeah" by @dickalan@lemmy.world in !fuckcars@lemmy.world


 

The video I linked for reference

I guess I was "sympathizing with invaders" because I said "Such an absolute waste of life, just for the vanity of one man."

Just patently ridiculous moderation...

 

I was watching a video yesterday that had a sponsorship from DeleteMe, which claims to go through data brokers and delete your info. I thought that might be a good idea, especially for those with radical politics. However, it's fairly expensive (~$200), and I also mistrust sponsored links by default.

Have you used them? Have you used something else? What do you recommend people do to deal with the hundreds of data brokers that harvest your info? The point is not to disappear entirely, but perhaps to make it less easy for an employer, payment processor, or whoever to blacklist you based on GenAI assessments, etc.

 

Note: I found one comment in there that works for me: https://huggingface.co/Lightricks/LTX-2.3/discussions/13#69b26bb65d8741ba168540b4

Use 0.987, 0.85, 0.725, 0.422, 0.0 as your upscaler sigmas. This seems to work for me, though of course it adds one more step than before, so generation is slower. It works well if you're planning to re-use the last frame of the video to extend it or something similar.
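The sigma values above form a custom noise schedule: each adjacent pair is one denoising step, so five values give four steps, which is where the extra step (and the slowdown) comes from. A small sanity-check sketch, assuming you're feeding the list into whatever custom-sigmas node your workflow uses (the helper function is my own illustration, not part of any workflow):

```python
# Sigma schedule from the linked Hugging Face comment. A valid schedule is
# strictly decreasing and terminates at 0.0 (fully denoised).
UPSCALER_SIGMAS = [0.987, 0.85, 0.725, 0.422, 0.0]

def count_steps(sigmas):
    """Sanity-check a custom sigma schedule and return its denoising step count."""
    if sigmas[-1] != 0.0:
        raise ValueError("schedule must terminate at 0.0")
    if any(a <= b for a, b in zip(sigmas, sigmas[1:])):
        raise ValueError("sigmas must be strictly decreasing")
    # N sigma values bracket N-1 denoising steps.
    return len(sigmas) - 1
```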

Another solution that works is to simply cut off the last 18 frames from the video and make the extension 18 frames longer. You can then extract the last frame at the point of cutoff. To avoid cutting off speech (if there is any), I might add "there's a moment of silence" at the end of my prompt, so the cut-off frames don't interrupt anything.
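The trim-and-extend trick above boils down to slicing the decoded frame sequence and keeping the last surviving frame as the seed for the next generation. A minimal sketch, assuming frames are already decoded into an ordered sequence (the function name and overlap default are my own illustration):

```python
def prepare_extension(frames, overlap=18):
    """Drop the last `overlap` frames and return (trimmed_clip, seed_frame).

    `frames` is any ordered sequence of decoded frames (e.g. numpy arrays).
    The seed frame (the last frame kept) is re-fed to the model, and the
    next segment is generated `overlap` frames longer to compensate.
    """
    if len(frames) <= overlap:
        raise ValueError("clip is shorter than the overlap window")
    trimmed = list(frames[:-overlap])
    return trimmed, trimmed[-1]
```

This discards the often-degraded tail of the clip while keeping total length unchanged after the extension.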
