MOOD

A smart subtitle system for emotional expression in mixed-language communication

MOOD is a real-time subtitle enhancer that detects emotional tone in spoken language and transforms it into visual cues.
It’s built to help Deaf users understand how something is said, not just what is said, bridging the emotional nuance often lost in plain captions.

Audio Input:

Detect tone of voice, volume shifts, and speech rhythm

Recognize emotional signals like excitement, calmness, sarcasm, or frustration

Designed to work with live conversation or pre-recorded voice content
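
As a rough illustration of the vocal cues listed above, here is a minimal extraction sketch. It assumes librosa as the audio library and a 16 kHz input; MOOD's actual audio stack is not specified here.

```python
# Hypothetical sketch: extracting tone, volume shifts, and speech
# rhythm from an audio clip. librosa and the 16 kHz sample rate
# are assumptions, not MOOD's documented stack.
import numpy as np
import librosa

def extract_vocal_features(path):
    y, sr = librosa.load(path, sr=16000)
    # Tone of voice: fundamental frequency (pitch) per frame
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    # Volume shifts: short-term energy (RMS) and its variance
    rms = librosa.feature.rms(y=y)[0]
    # Speech rhythm: onset density as a rough speaking-rate proxy
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    duration = len(y) / sr
    return {
        "pitch_mean_hz": float(np.nanmean(f0)),  # NaN frames are unvoiced
        "volume_mean": float(rms.mean()),
        "volume_var": float(rms.var()),
        "onsets_per_sec": len(onsets) / duration,
    }
```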

Core Technology:

Using audio analysis and voice emotion AI, MOOD interprets vocal tone in real time. Subtitles are then visually modified, through color, weight, or animation, to express emotional intent.
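
As a sketch of how that modification step could look: a lookup from a detected emotion label to a subtitle style. The labels, colors, weights, and animation names below are illustrative placeholders, not MOOD's actual design palette.

```python
# Hypothetical emotion-to-style lookup. Every value here is a
# placeholder; MOOD's real palette is a design decision of the
# project, not documented in this sketch.
SUBTITLE_STYLES = {
    "excitement":  {"color": "#FF5A36", "weight": 700, "animation": "bounce"},
    "calmness":    {"color": "#4DA6FF", "weight": 400, "animation": "fade"},
    "sarcasm":     {"color": "#9B59B6", "weight": 500, "animation": "tilt"},
    "frustration": {"color": "#E74C3C", "weight": 600, "animation": "shake"},
}

def style_subtitle(text, emotion):
    # Fall back to a neutral style when the classifier is unsure.
    style = SUBTITLE_STYLES.get(
        emotion, {"color": "#FFFFFF", "weight": 400, "animation": "none"}
    )
    return {"text": text, **style}
```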

Rethinking Subtitles:

Subtitles often miss the emotion. MOOD brings it back by letting tone speak visually.

Process / Research / Prototypes

Research

Most existing subtitle systems focus solely on transcribing words. But in mixed-language conversations, especially between Deaf and hearing individuals, emotional tone is often lost. Through interviews with Deaf viewers, we learned how challenging it can be to distinguish a joke from sarcasm, or calmness from passive aggression, when tone is invisible. This inspired us to ask:

Can subtitles carry emotion, not just words?

View Full Research Archive
Thesis Blog

Key Insight

“When hearing people say ‘fine,’ I don’t know if they’re happy, annoyed, or just tired. It all sounds the same in text.”
“Deaf individuals often extract emotions by observing facial expressions from other Deaf people. However, hearing individuals typically do not rely heavily on facial expressions to convey their emotions.”
- Chi, Deaf interpreter, movie sign language guidance

How Might We

How might we bridge emotional gaps between Deaf and hearing communities? How might we make emotions more visible and mutual, beyond words, sounds, or spoken tone?

Process / Prototypes

Based on Russell’s circumplex model of emotion, I used different colors to represent the four quadrants, showing changes in valence and intensity (arousal). I also chose an easy-to-read font and added animation effects that match each emotion quadrant; a sketch of the quadrant mapping follows below.
Although AI emotion tools are advanced, they rarely present their results in a visual, accessible form that works well for Deaf communication.
On the homepage, users can adjust the size of the animated subtitles to make them more comfortable and less distracting during conversations.
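
To make the quadrant mapping concrete, here is a minimal sketch of the lookup, assuming valence and arousal scores normalized to [-1, 1]; the quadrant names and hex colors are placeholders rather than the palette used in the prototype.

```python
# Sketch of the Russell-circumplex lookup: valence (pleasant vs.
# unpleasant) and arousal (intensity) select one of four quadrants.
# Assumes both scores are normalized to [-1, 1]; colors are
# placeholders, not the prototype's actual palette.
QUADRANTS = {
    ("pos", "high"): ("excited",    "#F5C518"),
    ("neg", "high"): ("distressed", "#E0452B"),
    ("neg", "low"):  ("depressed",  "#5C6E91"),
    ("pos", "low"):  ("calm",       "#7FBF7B"),
}

def quadrant_style(valence, arousal):
    key = ("pos" if valence >= 0 else "neg",
           "high" if arousal >= 0 else "low")
    label, color = QUADRANTS[key]
    # Distance from the neutral center drives visual emphasis,
    # so stronger emotions animate more prominently.
    intensity = min(1.0, (valence ** 2 + arousal ** 2) ** 0.5)
    return {"quadrant": label, "color": color, "intensity": intensity}
```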

Application

As the AI detects emotional shifts in the speaker’s tone, the curve reacts: rising with intensity, flattening with calm. This helps users spot emotional highs, shifts, or sudden tone changes at a glance, even before reading the subtitles (one way to compute such a curve is sketched below).
After the chat, the app shows a text summary. Users can tap parts of the conversation to see emotional feedback and mood changes.
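
One plausible way to drive that reactive curve is to smooth the per-frame arousal score with an exponential moving average, so the line rises under sustained intensity and flattens during calm stretches. This is a sketch, not the project's documented implementation, and the smoothing factor is an assumed parameter.

```python
# Sketch of the reactive emotion curve: exponentially smooth
# per-frame arousal scores so the plotted line rises with
# sustained intensity and flattens with calm. alpha is an
# assumed smoothing parameter, not a documented value.
def emotion_curve(arousal_frames, alpha=0.2):
    curve, level = [], 0.0
    for a in arousal_frames:
        level = alpha * a + (1 - alpha) * level
        curve.append(level)
    return curve
```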