@MISC{Jou_yahoolabs, author = {Brendan Jou and Tao Chen and Miriam Redi and Mercan Topkara and Shih-Fu Chang}, title = {Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology}, year = {} }
Abstract
Every culture and language is unique. Our work expressly focuses on the uniqueness of culture and language in relation to human affect, specifically sentiment and emotion semantics, and how they manifest in social multimedia. We develop sets of sentiment- and emotion-polarized visual concepts by adapting semantic structures called adjective-noun pairs, originally introduced by Borth et al. [5], but in a multilingual context. We propose a new language-dependent method for automatic discovery of these adjective-noun constructs. We show how this pipeline can be applied on a social multimedia platform for the creation of a large-scale multilingual visual sentiment concept ontology (MVSO). Unlike the flat structure in [5], our unified ontology is organized hierarchically by multilingual clusters of visually detectable nouns and subclusters of emotionally biased versions of these nouns. In addition, we present an image-based prediction task to show how generalizable language-specific models are in a multilingual context. A new, publicly available dataset of >15.6K sentiment-biased visual concepts across 12 languages with language-specific detector banks, >7.36M images and their metadata is also released.
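As a rough illustration of the adjective-noun pair (ANP) idea summarized above, the following Python sketch forms candidate ANPs by counting adjective-noun co-occurrences within image tag lists. It is not the paper's language-dependent discovery pipeline: the seed adjective list, the polarity values, the toy tag data, and the candidate_anps helper are hypothetical placeholders used only to show the general construct.

# Minimal sketch (not the authors' pipeline): candidate adjective-noun pairs
# from per-image tag lists, loosely in the spirit of Borth et al. [5].
from collections import Counter
from itertools import product

# Hypothetical seed adjectives with coarse sentiment polarity (placeholder values).
SEED_ADJECTIVES = {"beautiful": +1.0, "happy": +1.0, "sad": -1.0, "scary": -1.0}

def candidate_anps(tag_lists, min_count=2):
    """Count adjective-noun co-occurrences within each image's tag list and
    keep pairs that occur at least `min_count` times."""
    counts = Counter()
    for tags in tag_lists:
        adjectives = [t for t in tags if t in SEED_ADJECTIVES]
        nouns = [t for t in tags if t not in SEED_ADJECTIVES]
        for adj, noun in product(adjectives, nouns):
            counts[(adj, noun)] += 1
    return {pair: n for pair, n in counts.items() if n >= min_count}

if __name__ == "__main__":
    # Toy tag lists standing in for crawled image metadata.
    images = [
        ["beautiful", "sunset", "beach"],
        ["beautiful", "sunset", "sky"],
        ["sad", "goodbye", "station"],
        ["sad", "goodbye", "train"],
    ]
    for (adj, noun), n in sorted(candidate_anps(images).items()):
        print(f"{adj} {noun}: count={n}, seed polarity={SEED_ADJECTIVES[adj]:+.1f}")

In the actual MVSO work, discovery is performed per language and the resulting ANPs are further filtered for sentiment bias and visual detectability before being organized into the hierarchical ontology; the sketch above only shows the co-occurrence counting step in simplified form.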