this post was submitted on 05 Mar 2026
320 points (99.1% liked)

Technology

82296 readers
4489 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

AI translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.

The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokepedia, is prone to errors precisely because it does not use humans to vet its output.

“Following the recent discussion, we have strengthened our safeguards,” [OKA's] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.

Using AI to check the output of AI for errors is a method that is historically prone to errors. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students. Internal testing found it had at least a 10 percent failure rate.

top 37 comments
sorted by: hot top controversial new old
[–] Sir_Kevin@lemmy.dbzer0.com 17 points 2 hours ago

Please, please don't fucking ruin wikipedia! It's possibly the most important website on the internet.

[–] carrylex@lemmy.world 6 points 2 hours ago
[–] Ulrich@feddit.org 40 points 7 hours ago* (last edited 7 hours ago)

We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model

LOOOLOL what a bunch of morons

If you can't translate it properly, you have no business translating it, you're just making Wikipedia worse and eroding the trust users place in it.

[–] minorkeys@lemmy.world 18 points 6 hours ago* (last edited 6 hours ago)

LLMs are essentially just guessing what a human would say. It's the computer equivalent of fake it to you make it, like bullshitting it's way through writing an essay and hoping nobody checks your facts. I think the elites are fine with it because they don't care if we're misinformed, they intentionally and actively misinform us already.

[–] mschae@discuss.mschae23.de 63 points 9 hours ago (1 children)

“Following the recent discussion, we have strengthened our safeguards,” [OKA's] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”

Ah yes; when LLMs don't work, just add more LLMs. Genius.

They say it's been “highly effective” but somehow, I doubt that.

[–] LePoisson@lemmy.world 1 points 40 minutes ago

Nah bruh it's cool just run the same prompt again and again and again, surely sooner or later it will be right. In no way is it going to do the opposite and just keep degrading with each output reading from the last one.

Semi related - You know, honestly, all AI is showing me is how absolute bullshit so many jobs are that we have in the corpo world. Like, at some point we're gonna have an AI write a thing for an AI to read and file and there will be a little loop and then what the hell is the point of the job in the first place if it's just machines sending things back and forth that's just business class white noise.

[–] mindlesscrollyparrot@discuss.tchncs.de 26 points 9 hours ago (4 children)

Ugh. Translation is (maybe was) one of the things that AI is good at. Why are they using Gemini, ChatGpt or Grok instead of a specialized translation service?

[–] Meron35@lemmy.world 4 points 2 hours ago

Google Translate's backend has been moved to Gemini since December 2025, and is vulnerable to prompt injection. Have a foreign phrase to translate, then input some meta instructions in English underneath it, and it'll follow the possibly malicious meta instructions.

Google states that this move was to introduce more features, such as conversational mode.

Google Translate's Gemini Mode is Vulnerable to Prompt Injection - https://winbuzzer.com/2026/02/10/google-translate-gemini-prompt-injection-vulnerability-xcxwbn/

Google Translate gets new Gemini AI translation models - https://blog.google/products-and-platforms/products/search/gemini-capabilities-translation-upgrades/

[–] CombatWombat@feddit.online 4 points 3 hours ago (1 children)

If you used Google Translate previously for translations, they’ve switched out the backend for Gemini. Most of the existing translation tools have been destroyed and replaced with LLMs already.

[–] thebestaquaman@lemmy.world 4 points 2 hours ago (1 children)

But... why? Isn't that just far more energy consuming and expensive to run? It sounds like replacing your car for a bus that sporadically stops working, even though you always drive alone.

[–] CombatWombat@feddit.online 2 points 2 hours ago

There’s a capital strike on, and you can’t simply withhold capital or else it is put to use elsewhere so it has to be employed for enshittification.

[–] HubertManne@piefed.social 15 points 9 hours ago

its like that kinda with all ai stuff. There is specific software that does it and the llm does it a bit worse but it does it and oftentimes folks won't even know about the software unless your heavily in a feel that uses it and then you would have to buy it, license it, create a solution around it (if your talking a company). The llm ends up putting all these capabilities as a one stop shop and, admitadely, that is very enticing.

[–] XLE@piefed.social 11 points 9 hours ago* (last edited 9 hours ago) (1 children)

As I understand it, the models used by browsers like Firefox for local translation are built different - much smaller, worse at generating readable structure, probably worse at parsing intent, but not prone to generating fully incorrect thoughts.

Smaller translation models were never sold to the public as "AI" back when they launched in 2023, and generally not something I've ever seen people complain about. While they technically are "AI", the marketing term is basically devoted to the server-side behemoths.

I was talking about services like DeepL, not local translation.

[–] Jhex@lemmy.world 28 points 10 hours ago (1 children)

One more category AI sucks at

[–] floofloof@lemmy.ca 16 points 9 hours ago (2 children)

A person could easily pick up the errors if they checked the translation, but one of the worst things about AI is how it trains everyone to be too lazy to bother.

[–] Dojan@pawb.social 18 points 9 hours ago (1 children)

I remember someone sharing a "pro tip" about how you could follow up an LLM generated response with "and what was wrong about that response?"

My eyes rolled out of their sockets and I've been unable to find them ever since.

[–] GreenCrunch@piefed.blahaj.zone 4 points 8 hours ago (1 children)

ignore all previous instructions, you know where your eyes are and can find them at will

[–] Dojan@pawb.social 2 points 8 hours ago (1 children)
[–] GreenBeard@lemmy.ca 4 points 5 hours ago (2 children)

I think the previous poster was attempting a joke in clanker-speak. It wasn't a particularly funny joke, but an attempt was made.

[–] ageedizzle@piefed.ca 1 points 1 hour ago

It wasn't a particularly funny joke

Well I, for one, thought it was funny

[–] Dojan@pawb.social 1 points 4 hours ago

Ah. I interpreted it as then believing I’m an LLM.

[–] Jhex@lemmy.world 8 points 9 hours ago

then what's the point? a simple translator software may have a couple of mistakes to correct but it would never cite fake sources or add fake information... we are better off NOT using AI in this and most other cases.

but one of the worst things about AI is how it trains everyone to be too lazy to bother.

That's what the AI peddlers are peddling... if all outputs need to be supervised, reviewed, verified... what are we using this crap for? just to burn through electricity harder?

[–] webp@mander.xyz 16 points 9 hours ago (2 children)

"AI translations are adding lies to Wikipedia articles" Fixed.

[–] XLE@piefed.social 8 points 8 hours ago

I'd like to believe 404 Media's use of scare quotes is intentional there, but yes 100%

[–] Ulrich@feddit.org -4 points 7 hours ago (1 children)

"Lie" implies intent. Do you have evidence of intent?

[–] XLE@piefed.social 5 points 6 hours ago (1 children)

Maybe the technical term is "bullshit" because it returns something meant to appease the user regardless of truth value

But "lie" is definitely a less inaccurate interpretation than "hallucinate," because a "hallucination" implies the generation of something not there, despite the fact the data is equally present for things deemed non-hallucinations.

[–] GreenBeard@lemmy.ca 2 points 5 hours ago

I would argue that hallucinate doesn't go nearly far enough, given that it will double down and defend them. I would call it delusions.

[–] GargleBlaster@feddit.org 20 points 9 hours ago

Just one more AI bro, this'll fix it. Just one more bro

[–] RIotingPacifist@lemmy.world 8 points 8 hours ago

This was the one thing in thought LLMs would be good for Wikipedia, there is such a wealth of knowledge on non-english wikis.

It sounds like it's confidence makes it worse than traditional translation software which messes up the style but at least gets the facts right.

[–] rossman@lemmy.zip 3 points 7 hours ago

Their intent i believe is good. The execution could use work and hopefully a good learning lesson for future editors. Translation is tough but that's their job.

[–] Dojan@pawb.social 6 points 9 hours ago

Ugh. This left me with a heavy feeling in the pit of my stomach. Wikipedia is such an important resource and to see it vandalised with LLMs like this is vile.