this post was submitted on 23 Mar 2026
545 points (99.5% liked)

Technology

top 50 comments
[–] SchwertImStein@lemmy.dbzer0.com 23 points 3 hours ago

First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy.

translation assistance

[–] ZILtoid1991@lemmy.world 12 points 3 hours ago

There should be only one exception: when someone needs an example of AI-generated text.

[–] SuperPengato@scribe.disroot.org 5 points 12 hours ago

Wikipedia has banned AI-generated text,

Smiling Gus

... with two exceptions

Glaring Gus

[–] amateurcrastinator@lemmy.world 5 points 13 hours ago (1 children)

But how do they know it's AI-written?

[–] Aatube@thriv.social 4 points 2 hours ago (2 children)
[–] umbraroze@slrpnk.net 1 points 8 minutes ago

I was about to link to that, and specifically the stuff that now seems to have been moved to Signs of AI writing.

I thought that was a very interesting read, because it's so much better than the usual AI ragebait that led to people getting pilloried over the fact that they actually know how to use em dashes. You can't detect LLM use just by the fact that someone uses em dashes. It's a complicated stylistic issue that usually boils down to "well, you know what ChatGPT output looks like when you see it".
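To illustrate the point about em dashes, here is a deliberately naive sketch of a surface-marker "detector" (the marker phrases and threshold are made up for illustration, not taken from any real tool). It flags a human writer who simply likes em dashes just as readily as it would flag LLM output, which is exactly why this style of detection fails:

```python
# Naive "AI text" detector: counts surface markers often blamed on LLMs.
# The markers and threshold are arbitrary, illustrative assumptions --
# which is exactly why heuristics like this misfire on human prose.

STOCK_PHRASES = ["delve into", "in today's fast-paced world", "it's important to note"]

def naive_ai_score(text: str) -> float:
    """Fraction of 'suspicious' markers per word. Higher = more 'AI-like'."""
    words = max(len(text.split()), 1)
    em_dashes = text.count("\u2014")  # the em dash character
    phrase_hits = sum(text.lower().count(p) for p in STOCK_PHRASES)
    return (em_dashes + phrase_hits) / words

# A human who likes em dashes gets flagged as a false positive:
human = "The cat\u2014a tabby\u2014sat on the mat\u2014happily."
print(naive_ai_score(human) > 0.1)  # True
```

Any threshold low enough to catch rewritten LLM text will sweep up human writers with the same stylistic habits, which matches the observation that detection in practice boils down to holistic judgment rather than counting tics.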

[–] amateurcrastinator@lemmy.world 1 points 1 hour ago (1 children)

Ok but surely there must be an automated way. You can't throw manpower at this because they will lose.

[–] umbraroze@slrpnk.net 1 points 3 minutes ago

There are no reliable automated LLM output detectors. Anyone who says otherwise is either trying to sell you snake oil (or is unwittingly helping someone to sell snake oil to someone else, I guess).

[–] infeeeee@lemmy.zip 358 points 1 day ago (14 children)

Saved you a click:

After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy says, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

[–] Goodlucksil@lemmy.dbzer0.com 11 points 12 hours ago

To save you another few clicks: this is the discussion (RfC) that implemented the changes, and the policy is linked at the top.

[–] RIotingPacifist@lemmy.world 223 points 1 day ago (7 children)

AIbros: we're creating God!!!

AI users: it can do translation & reformatting pretty well but you got to check it's not chatting shit

[–] halcyoncmdr@piefed.social 83 points 1 day ago (3 children)

The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they're asking anyway. All output needs to be verified before being used or relied upon.

The "AI" is just streamlining the process to save time.

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

[–] rumba@lemmy.zip 3 points 3 hours ago

This is absolutely the case, and honestly, at least for now, how it needs to be across the board.

No one should be using AI to do things they're incapable of doing (or undoing).

[–] 7101334@lemmy.world 1 points 3 hours ago

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

Relying on it in any circumstances (though medical stuff is understandable if you're simply too poor or don't have access) while it is exhausting water supplies and polluting the planet instantly proves that you are stupid and inconsiderate.

[–] Zagorath@quokk.au 6 points 20 hours ago (1 children)

the user needs to be smart enough to do whatever they're asking anyway

I'm gonna say that's ideal but not quite necessary. What's needed is that the user is capable of properly verifying the output. Which anyone who could do it themselves definitely can, but it can be done more broadly. It's an easier skill to verify a result than it is to obtain that result. Think: how film critics don't necessarily need to be filmmakers, or the P=NP question in computer science.

[–] Pyro@programming.dev 9 points 19 hours ago (3 children)

But if the output has issues, what're you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI's mistakes yourself.

[–] Zagorath@quokk.au 7 points 19 hours ago (1 children)

At the risk of sounding like an overly obsequious AI… You know what, you're completely right. I'm honestly not sure what use case I was imagining when I wrote that last comment.

[–] Redjard@reddthat.com 5 points 18 hours ago

Making text flow naturally, grouping and ordering information, good writing.

You can verify that two texts contain the same facts and information, yet one reads much better than the other. But writing a text that reads well is quite hard.

[–] WhiskyTangoFoxtrot@lemmy.world 3 points 16 hours ago

I can't draw, but I could probably photoshop out some minor issues in an AI-generated image.


Seems pretty reasonable to use it as a grammar checker. As long as it's not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.

[–] ji59@hilariouschaos.com 21 points 1 day ago

So, it should be used reasonably, as it should have always been.

[–] SpaceNoodle@lemmy.world 76 points 1 day ago* (last edited 1 day ago) (1 children)

An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.

[–] kazerniel@lemmy.world 103 points 1 day ago (3 children)

It has to be said, they originally changed their stance due to the considerable editor pushback when they tried to introduce LLM summaries at the top of articles. So kudos to the editor community's resistance! ✊

[–] ricecake@sh.itjust.works 2 points 1 hour ago (1 children)

Just for more clarity: they workshopped ideas for improving clarity and accessibility with some editors at an event. They ran some small experiments, then developed a plan to trial some of them and presented that plan to a wider audience for feedback. After they got the feedback, they decided not to proceed.

It's not quite the editors pushing back on Wikipedia. Or rather, it's not the "rebellion" people want to make it out to be.

https://www.mediawiki.org/wiki/Readers/2024_Reader_and_Donor_Experiences/Content_Discovery/Wikimania_2024,_%22Written_by_AI%22_How_do_editors_and_machines_collaborate_to_create_content

https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries

It rubs me the wrong way when the process going how it should go gets cast as controversial and dramatic. Asking the community if you should do something and listening to them is how it's supposed to go. It's not resistance, it's all of them being on the same team and talking.

[–] kazerniel@lemmy.world 1 points 40 minutes ago

Thanks for the reframe! From what I've seen in Village Pump comments at the time, editors (including me) were upset because putting LLM output into Wikipedia articles seems like an idea so obviously at odds with Wikipedia's values and strengths that it was a shock to see it taken as far as it got before the wider backlash. (Put into wider context, the whole world seemed to be jumping onto the LLM bandwagon at the time, so it was dismaying to see Wikipedia do the same.)

[–] banshee@lemmy.world 2 points 2 hours ago

Does anyone like LLM summaries in pages? This seems like a better fit for a browser extension to generate a summary on demand instead of wasting resources generating it for everyone. Google's documentation is absolutely littered with the mess.

[–] SpaceNoodle@lemmy.world 37 points 1 day ago* (last edited 1 day ago)

Good point. The real strength of Wikipedia lies in its editors.

[–] Mwa@thelemmy.club 14 points 21 hours ago

W Wikipedia. It would be better to remove the exceptions, but it's fine tbh.

[–] yucandu@lemmy.world 21 points 23 hours ago (1 children)

Banned the people who openly admit it, anyway.

[–] aliser@lemmy.world 8 points 21 hours ago (1 children)

there are AI detectors, although I'm not sure about the accuracy of those

[–] Aatube@thriv.social 1 points 2 hours ago
[–] SunlessGameStudios@lemmy.world 41 points 1 day ago* (last edited 1 day ago) (1 children)

I know at least one writing major who won an award for his volunteer work at Wikipedia. He did it as a hobby. They don't really need AI; they need people like him.
