Here Is How Amazon Wants to Improve Machine Dubbing
AWS researchers create a way to train an automatic dubbing system on bi-directional language data. Using phonemes instead of words resulted in better alignment between speech and machine translation output.
Amazon reviewed hundreds of hours of Prime content and found that human dubbers prioritize translation quality and speech naturalness over timing and lip sync.
A team of Amazon AI researchers combines various deep-learning models to build an automatic dubbing system. Viewer feedback reveals unnatural speaking rate as the main problem.