Some AI-translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.
The issue in this case starts with the Open Knowledge Association (OKA), a non-profit dedicated to improving Wikipedia and other open platforms.
Wikipedia editors investigated how OKA was operating and found that it relied mostly on cheap contract labor in the Global South, and that those contractors were instructed to copy and paste articles into popular LLMs to produce translations.
For example, a public spreadsheet OKA translators use to track which articles they're translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”
Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk's LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.
“Following the recent discussion, we have strengthened our safeguards,” [OKA's] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.
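Zimmerman didn't share the comparison prompt itself. As a rough illustration only, a second-model review pass of the kind he describes might look something like the sketch below, written against the OpenAI Python client; the model name and prompt wording are my assumptions, not OKA's actual configuration.

```python
# Illustrative sketch of a "second, independent LLM review step".
# Assumes the OpenAI Python client (pip install openai); the model name
# and prompt wording are placeholders, not OKA's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COMPARISON_PROMPT = """You are reviewing a translation against its source.
List every discrepancy, omission, or added claim in the translation that is
not supported by the source text. If you find none, reply "OK".

Source ({src_lang}):
{source}

Translation ({dst_lang}):
{translation}"""

def review_translation(source: str, translation: str,
                       src_lang: str, dst_lang: str) -> str:
    """Ask a second model to flag content drift in a draft translation."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        temperature=0,   # keep the review pass as deterministic as possible
        messages=[{
            "role": "user",
            "content": COMPARISON_PROMPT.format(
                src_lang=src_lang, dst_lang=dst_lang,
                source=source, translation=translation,
            ),
        }],
    )
    return response.choices[0].message.content
```

Note that this simply automates asking one model to grade another, which is exactly the pattern critics of the approach object to.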
Using one AI to check another AI's output is itself a historically error-prone method. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students; internal testing found a failure rate of at least 10 percent.
I mean, you can test it yourself if you speak more than one language. If you ask for a direct translation and stress not to add content or change the text, it will do a very good job. Translation is a use case where LLMs really shine.
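For illustration, the kind of tightly scoped prompt being described might look something like this; the wording is my own, not a tested recipe:

```python
# Illustrative only: a constrained translation prompt of the kind the
# commenter describes. Wording is an assumption, not a tested recipe.
STRICT_TRANSLATION_PROMPT = """Translate the text below from {src_lang} to {dst_lang}.
Rules:
- Translate sentence by sentence; do not merge, split, drop, or reorder sentences.
- Do not add information, examples, or explanations that are not in the source.
- Keep proper nouns, numbers, dates, and citations exactly as they appear.
- If a passage is ambiguous, translate it literally rather than guessing intent.
- Output only the translation, nothing else.

Text:
{text}"""
```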
I feel like this sub became "technology bad". Nobody wants to think and would rather just dogpile.
There's a huge difference between "Creates intelligible single-use text that's good enough that I can understand what the text is roughly about" and "Creates text at a quality high enough to work as a quotable source".
For the first use case, infrequent hallucinations are no problem. I read the text; if I understand a bit about the topic I might catch one, and if not, it probably doesn't matter too much either, especially for non-critical topics.
For the second use case, infrequent hallucinations are a massive problem. Most people treat Wikipedia like a primary source: even though sources are linked, they don't go hunting for them and instead trust that the information in the article is accurate. And every article is read not once by one person but thousands or hundreds of thousands of times, so every single line is read and believed. You can bet that if there's a hallucination in there, someone will read it and believe it. That requires a completely different level of accuracy, and doing that kind of crap translation work at the scale OKA operates is a massive disservice.
That's why I specify in a later comment that everything should be verified. My point is that a properly guided LLM is better than other automatic translation services, and that hallucinations can easily be avoided with proper prompting.
Also worth mentioning that there's already huge variation in user-generated translations; some of it is well-meaning, while some, as in Israel's case, isn't.
I translate a lot of material for my work, and I don't have any problems when I instruct it properly. I'm also there to verify. I never have to deal with hallucinations; mostly I'm just changing a word or two because I don't like how it sounds (it uses overly complex words at times).
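Part of that verification can even be mechanized before a human reads the draft. A crude sketch of the kind of pre-check I mean, my own illustration and not part of any official workflow:

```python
import re

def quick_drift_check(source: str, translation: str) -> list[str]:
    """Flag obvious signs of content drift between a source text and its
    translation. These are hints for a human reviewer, not a verdict."""
    warnings = []
    # Numbers, dates, and figures should survive a faithful translation.
    # (Naive: ignores locale differences in number formatting.)
    src_numbers = set(re.findall(r"\d[\d.,]*", source))
    dst_numbers = set(re.findall(r"\d[\d.,]*", translation))
    if src_numbers != dst_numbers:
        warnings.append(f"Numbers differ: {sorted(src_numbers ^ dst_numbers)}")
    # A large gap in sentence counts suggests added or dropped material.
    # (Latin-script sentence punctuation only.)
    src_sentences = len(re.findall(r"[.!?]", source))
    dst_sentences = len(re.findall(r"[.!?]", translation))
    if abs(src_sentences - dst_sentences) > 2:
        warnings.append("Sentence counts diverge; check for added or dropped text.")
    return warnings
```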
This is more about certain users being shit and either not checking their work or doing work they have no place doing. Those users would exist no matter what they use; it's not the tool's fault.
Tbh, I work in research and we would never use Wikipedia for anything. We can't cite it, and any time I find a good tidbit on it and try to chase down the source, I usually get a dead link or something altogether false that doesn't support what the user wrote. It's probably highly dependent on the subject, but the sourcing isn't very rigorous.
Bless them though, it's an amazing site and they are still doing a stellar job considering how big it is.
Does it avoid hallucinations 100% of the time? Because otherwise, why not use non-LLM translation services (which on their own also don't meet the standards for articles, iirc)?
Whatever is used, I think nothing is going to be 100% accurate, and everything should be verified by a native speaker. It is Wikipedia after all, not some blog.
Non-LLM services are worse in my opinion, but it probably depends on the language (LLMs probably struggle with certain languages as well).