The creation of subtitles and descriptive audio to assist sensory-impaired viewers is today a normal part of any television or movie production. Subtitles for multilingual viewing have a longer history, and their use as metadata for content searching is another important application. These services can, however, be costly and challenging to produce, especially for live programming.
In this session we shall first examine the latest subtitle production workflows, which employ as many automated mechanisms as possible, including speech recognition to synchronise the timing of the displayed text and algorithms that enhance the readability of the presented text, while still allowing for final manual polishing where possible. Measuring the perceived quality of subtitles, especially those produced live, is the challenge addressed in another study, in which a new methodology was developed and tested with hard-of-hearing viewers. Finally, at the strategic level, we shall learn how a major US broadcaster has met the need to comply with recent government legislation by rolling out access services in a short time and on a huge scale.
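To make the timing-synchronisation idea concrete, here is a minimal sketch, assuming an upstream speech recogniser has already produced word-level timestamps; the data structures and function names are illustrative and not drawn from any specific production workflow presented in the session.

```python
from dataclasses import dataclass

@dataclass
class Word:
    """A recognised word with its timing, as returned by an ASR step (assumed, not shown)."""
    text: str
    start: float  # seconds
    end: float    # seconds

@dataclass
class Cue:
    """A subtitle cue whose display timing will be derived from the recognised words."""
    text: str
    start: float = 0.0
    end: float = 0.0

def align_cues(cues, words):
    """Assign start/end times to each cue by consuming recognised words in order."""
    idx = 0
    for cue in cues:
        n = len(cue.text.split())
        span = words[idx:idx + n]
        if span:  # leave timings untouched if the recognised transcript runs short
            cue.start = span[0].start
            cue.end = span[-1].end
        idx += n
    return cues

if __name__ == "__main__":
    # Hypothetical ASR output for the phrase "good evening and welcome"
    words = [Word("good", 0.10, 0.35), Word("evening", 0.36, 0.80),
             Word("and", 0.95, 1.05), Word("welcome", 1.06, 1.50)]
    cues = [Cue("good evening"), Cue("and welcome")]
    for cue in align_cues(cues, words):
        print(f"{cue.start:5.2f} --> {cue.end:5.2f}  {cue.text}")
```

In practice the cue text would come from a prepared script or a respeaker, and the word matching would tolerate recognition errors; the point of the sketch is only that per-word timestamps let the displayed text be synchronised automatically, leaving manual effort for final polishing.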
Come to this session to see examples of how you can save money and more efficiently make content accessible to all.