this post was submitted on 30 Nov 2025
111 points (96.6% liked)


They are also AI-dubbing shows that already have a dub: https://xcancel.com/Pikagreg/status/1994654475089555599

[–] mo_lave@reddthat.com 0 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

According to beta testers and the wider Internet, listeners abhorred the LLM localization and the tone-deaf AI speech dubbing. Keeping the original dubs is simply what people want, especially if it's labeled abridged.

Yes, in its current state. Will it stay that way? The tech companies are burning cash in an attempt to make sure it doesn't. My hunch is that even Vocaloid-tier AI dubbing will be good enough for a large share of the audience. The human-vs-AI dubbing debate could then become analogous to the debate between lossy (more accessible) and lossless (higher quality) audio.

Now, LLM localization is the greater challenge. I highly doubt those, including the classic machine-learning models, can reach N1-level localization quality.

[–] AntiBullyRanger@ani.social 3 points 3 weeks ago (1 children)

The only funny thing about mentioning Vocaloid is that Vocaloid synthesis has to be manually pitched, tempoed, and toned🤣. Glad you honestly believe capitalists want to invest more in weeding out tone-deaf, pitchless speech waveforms.

But please, never stop supporting espeak!

[–] mo_lave@reddthat.com 2 points 3 weeks ago

espeak looks pretty cool. Thanks for sharing.

[–] Susaga@sh.itjust.works 1 points 3 weeks ago (1 children)

It's amusing to me how long people have been saying "yes, AI is crap, but it might not be crap some day, so just you wait!" Despite all the money tech companies have thrown at AI, it's still as crap as it ever was, and I don't see any reason to think it'll get better.

Meanwhile, Crunchyroll doesn't care if it's crap, so long as they can get around the cost of paying humans (which is another can of worms). If they're willing to buy this level of quality, what incentive is there for quality to improve?

[–] mo_lave@reddthat.com 0 points 3 weeks ago (1 children)

"yes, AI is crap, but it might not be crap some day, so just you wait!"

I mean, there's a clear gap between the capabilities of Cleverbot and ChatGPT, as referenced in this very comments section. As much as one might wish otherwise, it would be foolish to ignore past technological leaps, and how people back then laughed them off as impossible.

[–] Susaga@sh.itjust.works 1 points 3 weeks ago (1 children)

I don't see any significant differences between ChatGPT and Cleverbot, if I'm honest. It might have a wider array of responses to pick between, but it's still making the same mistakes.

It would be foolish to ignore past tech bubbles, and how people back then claimed they'd fix all their problems in the near future and you need to jump on now or you won't survive (and how none of them survived).

[–] mo_lave@reddthat.com 0 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Unlike Cleverbot, ChatGPT lets you add your own project-specific context. That was extremely helpful in my creative writing process, where I use it as a virtual assistant.
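To make the distinction concrete: modern chat LLMs accept a system message that is resent with every request, so project details persist across the conversation, which is something older chatbots like Cleverbot never supported. A minimal sketch of assembling such a request (the project notes and helper name are illustrative placeholders, not anything from this thread):

```python
def build_messages(project_context: str, user_prompt: str) -> list[dict]:
    """Assemble a chat request that carries project-specific context.

    The system message travels with every request, so the model always
    "remembers" the project details. Cleverbot-era bots had no
    equivalent mechanism for injecting persistent context.
    """
    return [
        {
            "role": "system",
            "content": f"You are a writing assistant. Project notes: {project_context}",
        },
        {"role": "user", "content": user_prompt},
    ]

# Example: the context is bundled into every call.
messages = build_messages(
    "A fantasy novel set in a city of canals; the narrator is unreliable.",
    "Suggest three chapter titles.",
)
```

The resulting list is the standard payload shape that chat-completion APIs expect; the "virtual assistant" behavior comes entirely from prepending that system message on each turn.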

It would be foolish to ignore past tech bubbles, and how people back then claimed they’d fix all their problems in the near future and you need to jump on now or you won’t survive (and how none of them survived).

While largely true, the claim that none of them survived is false. Amazon survived the dot-com bubble. Pets.com died, but Chewy perfected the concept later on. Circling back to the topic: if and when the bubble bursts, we could be talking about 90% of AI-centric companies going under; give it a decade or so, and a "stabilized" form of AI dubbing could resurface and establish a long-lasting presence.

[–] Unboxious@ani.social 1 points 3 weeks ago* (last edited 3 weeks ago)

Now, LLM localization is the greater challenge. I highly doubt those, including the classic machine-learning models, can reach N1-level localization quality.

There's no chance of that happening any time soon. Many manga and anime lean heavily on visual context, as well as the context of the story in general, to clear up situations where the language would otherwise be ambiguous. Until translation software can also use all of that context, it's basically impossible.