The W3C has just published the first draft of the Emotion Markup Language (EmotionML 1.0).
Use cases for EmotionML can be grouped into three broad types:
- Manual annotation of material involving emotionality, such as videos, speech recordings, faces, texts, etc.;
- Automatic recognition of emotions from sensors, including physiological sensors, speech recordings, facial expressions, etc., as well as from multi-modal combinations of sensors;
- Generation of emotion-related system responses, which may involve reasoning about the emotional implications of events, emotional prosody in synthetic speech, facial expressions and gestures of embodied agents or robots, the choice of music and colors of lighting in a room, etc.
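To give a flavor of what an annotation in the first use case might look like, here is a rough sketch of an EmotionML fragment marking up a span of video. This is an illustration only, not copied from the draft: the namespace URI, the `category-set` vocabulary reference, and the attribute names are assumptions based on the draft's general approach (representing emotions via categories or dimensions with confidence values), and may differ from the final syntax.

```xml
<!-- Hypothetical sketch of an EmotionML annotation of a video clip.
     Element and attribute names are illustrative, not normative. -->
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <emotion>
    <!-- The annotator judged this segment as "joy" with fairly
         high confidence. -->
    <category name="joy" confidence="0.8"/>
    <!-- Link the annotation to the portion of media it describes. -->
    <reference uri="interview.avi#t=12,19"/>
  </emotion>
</emotionml>
```

The same container could equally carry `<dimension>` elements (e.g. arousal and valence values) for the automatic-recognition use case, which is part of what makes a shared markup language attractive across all three groups of applications.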
I love crap like this!