Subtitles have been around since the early ’70s, with over 7.5 million users in the UK alone (Ofcom, 2006). Long essential for viewers with hearing impairments and for foreign-language films, and shown in studies to promote reading comprehension and retention, subtitles have only recently become the default for many TV watchers. Judging by a stream of online conversations, for some it’s the only way to watch. One Redditor writes in r/movies, “I always watch with captions on. Native English speaker, not hard of hearing, have just discovered over the years you can miss SO much when they’re off!” Almost everyone responds supportively, including this person: “I’m a native English speaker and without subtitles I miss so much dialogue. Some actors can be more difficult to understand than others.”
However, a significant limitation of the present generation of subtitling techniques and technologies is their failure to express the emotional nuances of dialogue, such as intonation and volume. Subtitles have long been criticised for their lack of emotional content and their inability to communicate the subtext of rich dialogue: at present, they tell the viewer what is being said, but not how it is being said. Those who rely on subtitles, such as Deaf and hard-of-hearing viewers, often draw attention to the “emotional gap” this creates, and to the important emotional information that is lost.
For Deaf and hearing-impaired viewers, subtitles provide only a limited representation of the dialogue. Viewers can read the words that are spoken, but they cannot tell how they were spoken. For example, a character could deliver the line “I will be there in a minute” menacingly or joyfully, yet in either case the subtitle is exactly the same. Similarly, whether the character whispers the line or shouts it at the top of their voice, there is generally no difference in how the subtitled phrase is displayed. These limitations are further exacerbated when characters cannot be seen on screen, during action sequences, and when multiple characters speak at the same time.
There are many other reasons why people watch TV with closed captions on, despite having good hearing and not being constrained to ‘foreign’ language videos, and they go far beyond those two most commonly anticipated use cases. Examples include watching videos in public spaces such as banks, while commuting, or in sports bars where several TVs are playing; following complex dialogue delivered in difficult accents; learning the language; or watching at home while family members use multiple displays at once, e.g. a tablet and the TV.
emosubs is here to add this rich layer of missing information and make watching a video or film fun and engaging. It captures emotions from the audio and presents them within the subtitles. Know when an actor is speaking loudly or slowly, when they are angry, sad or happy, and how intense the feeling is. Experience a completely new way of watching TV, movies and videos.
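For the technically curious, the flow is easy to picture: run emotion recognition over the audio track, then attach the predicted label and intensity to each subtitle cue. Here is a minimal sketch in Python; the classify_emotion() stub is a hypothetical stand-in for a trained speech-emotion model, and the actual emosubs pipeline may differ.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    start: float  # cue start time, in seconds
    end: float    # cue end time, in seconds
    text: str     # the spoken line

def classify_emotion(audio_path: str, start: float, end: float) -> tuple[str, str]:
    """Hypothetical stand-in for a speech-emotion model.

    A real implementation would slice the audio between `start` and `end`
    and run it through a trained classifier; here we return a fixed
    (intensity, label) pair purely for illustration.
    """
    return ("slightly", "happy")

def annotate(cues: list[Cue], audio_path: str) -> list[Cue]:
    """Prefix every cue with its predicted emotion tag, emosubs-style."""
    annotated = []
    for cue in cues:
        intensity, label = classify_emotion(audio_path, cue.start, cue.end)
        tag = f"[{intensity}_{label}]"
        annotated.append(Cue(cue.start, cue.end, f"{tag} {cue.text}"))
    return annotated

if __name__ == "__main__":
    cues = [Cue(1.0, 2.5, "I will be there in a minute")]
    for cue in annotate(cues, "sample.mp4"):
        print(f"{cue.start:.1f}-{cue.end:.1f}  {cue.text}")
```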
See it in action
Check out one of the samples by pressing the button below:
Emilia Clarke, from the HBO hit series #GOT, or
Albert Brooks, the voice of Marlin in Finding Dory, or
Vicki, our marketing lead, trying to potty train Kahlua!
Or upload your own short video to see it in action. Shoot a 15-second video with your mobile, upload the mp4 file, wait a bit, and you’ll get your own set of emotionally rich transcriptions.
Legend:
slightly_sad: (⊙︿⊙✿)
sad: (⌣⌣)
very_sad: (⌣⌣“)
slightly_happy: (。●́‿●̀。)
happy: (๑‵●‿●‵๑)
very_happy: ヽ(•‿•)ノ
slightly_angry: (ಠ ∩ಠ)
angry: ←(ಠಠ)→
very_angry: (•̀o•́)ง
calm: ¯\_(ツ)_/¯
agitated: ┌( ಠ_ಠ)┘
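In code form, the legend is just a lookup table from an emotion label to its emoticon. Here is a minimal sketch; the tag_line() helper is illustrative only and not part of any published emosubs API.

```python
# The legend above as a lookup table: emotion label -> emoticon.
EMOTICONS = {
    "slightly_sad": "(⊙︿⊙✿)",
    "sad": "(⌣⌣)",
    "very_sad": "(⌣⌣“)",
    "slightly_happy": "(。●́‿●̀。)",
    "happy": "(๑‵●‿●‵๑)",
    "very_happy": "ヽ(•‿•)ノ",
    "slightly_angry": "(ಠ ∩ಠ)",
    "angry": "←(ಠಠ)→",
    "very_angry": "(•̀o•́)ง",
    "calm": "¯\\_(ツ)_/¯",
    "agitated": "┌( ಠ_ಠ)┘",
}

def tag_line(label: str, line: str) -> str:
    """Append the legend emoticon for `label` to a subtitle line."""
    return f"{line} {EMOTICONS[label]}"

print(tag_line("very_happy", "I will be there in a minute!"))
```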
Do you have questions? Want to discuss integration with your videos? Email us at hello@emosubs.com