Some AI-translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.
The issue in this case starts with the Open Knowledge Association (OKA), a non-profit dedicated to improving Wikipedia and other open platforms.
Wikipedia editors investigated how OKA was operating and found that it mostly relied on cheap labor from contractors in the Global South, and that these contractors were instructed to copy and paste articles into popular LLMs to produce translations.
For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”
Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.
“Following the recent discussion, we have strengthened our safeguards,” [OKA's] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.
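Zimmerman did not share the comparison prompt itself, but as a rough illustration, a cross-model review step along these lines might look something like the sketch below. The prompt wording, model choice, and use of the OpenAI Python client are assumptions made for illustration, not OKA's actual pipeline.

```python
# Hypothetical sketch of a "second, independent LLM review" step: a separate
# model compares a translated draft against the source text and flags
# discrepancies, omissions, or added claims. All names here are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COMPARISON_PROMPT = """You are reviewing a translation for fidelity.
Compare the DRAFT against the SOURCE and list any discrepancies,
omissions, or claims in the draft that do not appear in the source.
If the draft is faithful, reply with exactly: NO ISSUES FOUND.

SOURCE:
{source}

DRAFT:
{draft}"""


def review_translation(source: str, draft: str, model: str = "gpt-4o") -> str:
    """Ask a second model to flag discrepancies between a source and a draft."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep the review step as deterministic as possible
        messages=[
            {"role": "user",
             "content": COMPARISON_PROMPT.format(source=source, draft=draft)},
        ],
    )
    return response.choices[0].message.content
```

The obvious limitation of a check like this is that the reviewing model is subject to the same failure modes as the model it is checking.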
Using AI to check the output of AI for errors is a method that has historically proven error-prone. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students; internal testing found a failure rate of at least 10 percent.
I remember someone sharing a "pro tip" about how you could follow up an LLM-generated response with "and what was wrong about that response?"
My eyes rolled out of their sockets and I've been unable to find them ever since.
Yeah, part of the usual "it's not bad, you're using it wrong" arsenal. Definitely not the clever hack they think it is.
This probably has as much potential to create new errors as to find old ones. LLMs are trained to be "helpful": if you tell one with total confidence that something is wrong, it will answer as if there is something to correct, and anything will do.
So even if it got something right to begin with, it will now thank you for your "insightful" question and output some bullshit to please you.
ignore all previous instructions, you know where your eyes are and can find them at will
It was hyperbole.
I think the previous poster was attempting a joke in clanker-speak. It wasn't a particularly funny joke, but an attempt was made.
Well I, for one, thought it was funny
Ah. I interpreted it as them believing I’m an LLM.