Results 1 - 10 of 32
All Smiles are Not Created Equal: Morphology and Timing of Smiles Perceived as Amused, Polite, and Embarrassed/Nervous
- J NONVERBAL BEHAV
, 2008
"... We investigated the correspondence between perceived meanings of smiles and their morphological and dynamic characteristics. Morphological characteristics included co-activation of Orbicularis oculi (AU 6), smile controls, mouth opening, amplitude, and asymmetry of amplitude. Dynamic characteristic ..."
Abstract
-
Cited by 50 (15 self)
- Add to MetaCart
We investigated the correspondence between perceived meanings of smiles and their morphological and dynamic characteristics. Morphological characteristics included co-activation of Orbicularis oculi (AU 6), smile controls, mouth opening, amplitude, and asymmetry of amplitude. Dynamic characteristics included duration, onset and offset velocity, asymmetry of velocity, and head movements. Smile characteristics were measured using the Facial Action Coding System (Ekman et al. 2002) and Automated Facial Image Analysis (Cohn and Kanade 2007). Observers judged 122 smiles as amused, embarrassed, nervous, polite, or other. Fifty-three smiles met criteria for classification as perceived amused, embarrassed/nervous, or polite. In comparison with perceived polite, perceived amused more often included AU 6, open mouth, smile controls, larger amplitude, larger maximum onset and offset velocity, and longer duration. In comparison with perceived embarrassed/nervous, perceived amused more often included AU 6, lower maximum offset velocity, and smaller forward head pitch. In comparison with perceived polite, perceived embarrassed/nervous more often included mouth opening and smile controls, larger amplitude, and greater forward head pitch. The occurrence of AU 6 in perceived embarrassed/nervous and polite smiles questions the assumption that AU 6 with a smile is sufficient to communicate felt enjoyment. By comparing three perceptually distinct types of smiles, we found that perceived smile meanings were related to specific variation in smile morphological and dynamic characteristics.
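As a rough illustration of the dynamic characteristics this abstract lists, the sketch below derives amplitude, duration, and peak onset/offset velocity from a frame-sampled AU 12 intensity track. It is not the authors' FACS/AFA pipeline; the array layout, frame rate, and use of a simple numerical gradient are assumptions made for illustration.

```python
# A rough sketch (not the authors' FACS/AFA pipeline) of deriving dynamic
# smile characteristics from a frame-sampled AU 12 intensity track.
import numpy as np

def smile_dynamics(au12_intensity: np.ndarray, fps: float = 30.0) -> dict:
    """Summarize one smile event from a per-frame AU 12 intensity series."""
    peak_idx = int(np.argmax(au12_intensity))
    amplitude = float(au12_intensity[peak_idx])      # maximum smile intensity
    duration_s = len(au12_intensity) / fps           # whole event, in seconds

    velocity = np.gradient(au12_intensity) * fps     # intensity units per second
    onset = velocity[: peak_idx + 1]                 # rise toward the apex
    offset = velocity[peak_idx:]                     # decay after the apex

    return {
        "amplitude": amplitude,
        "duration_s": duration_s,
        "max_onset_velocity": float(onset.max()),
        "max_offset_velocity": float(-offset.min()),  # decay speed as a positive number
    }
```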
Emotionally Expressive Head and Body Movement during Gaze Shifts
- Intelligent Virtual Agents, LNCS 4722
, 2007
"... Abstract. The current state of the art virtual characters fall far short of characters produced by skilled animators. One reason for this is that the physical behaviors of virtual characters do not express the emotions and attitudes of the character adequately. A key deficiency possessed by virtual ..."
Abstract
-
Cited by 19 (6 self)
- Add to MetaCart
(Show Context)
Abstract. Current state-of-the-art virtual characters fall far short of characters produced by skilled animators. One reason for this is that the physical behaviors of virtual characters do not adequately express the characters' emotions and attitudes. A key deficiency of virtual characters is that their gaze behavior is not emotionally expressive. This paper describes work on expressing emotion through head movement and body posture during gaze shifts, with the intent to integrate a model of emotionally expressive eye movement into this work in the future. The paper further describes an evaluation showing that users can recognize the emotional states generated by the model.
A model of gaze for the purpose of emotional expression in virtual embodied agents
- In AAMAS
, 2008
"... Currently, state of the art virtual agents lack the ability to display emotion as seen in actual humans, or even in hand-animated characters. One reason for the emotional inexpressiveness of virtual agents is the lack of emotionally expressive gaze manner. For virtual agents to express emotion that ..."
Abstract
-
Cited by 13 (5 self)
- Add to MetaCart
Currently, state-of-the-art virtual agents lack the ability to display emotion as seen in actual humans, or even in hand-animated characters. One reason for the emotional inexpressiveness of virtual agents is the lack of an emotionally expressive gaze manner. For virtual agents to express emotion that observers can empathize with, they need to generate gaze, including eye, head, and torso movement, to arbitrary targets while displaying arbitrary emotional states. Our previous work [18] describes the Gaze Warping Transformation, a method of generating emotionally expressive head and torso movement during gaze shifts that is derived from human movement data. Through an evaluation, it was shown that applying different transformations to the same gaze shift could modify the affective state perceived when the transformed gaze shift was viewed by a human.
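The Gaze Warping Transformation itself is defined in the cited work; the fragment below is only a hedged stand-in for the general idea of reusing one recorded gaze shift and modulating its timing and amplitude to suggest a different affective manner. The parameter names and the linear warp are assumptions, not the published GWT formulation.

```python
# Illustrative stand-in for warping a recorded gaze shift: resample one
# neutral head-pitch trajectory in time and rescale its displacement.
# The linear warp is an assumption, not the published GWT formulation.
import numpy as np

def warp_gaze_shift(neutral_pitch: np.ndarray,
                    time_scale: float,
                    amplitude_scale: float) -> np.ndarray:
    """Return a retimed, rescaled copy of a neutral head-pitch trajectory."""
    n_out = max(2, int(round(len(neutral_pitch) * time_scale)))
    src_t = np.linspace(0.0, 1.0, len(neutral_pitch))
    dst_t = np.linspace(0.0, 1.0, n_out)
    retimed = np.interp(dst_t, src_t, neutral_pitch)  # linear time warp
    # Scale the displacement about the starting pose.
    return neutral_pitch[0] + amplitude_scale * (retimed - neutral_pitch[0])

# For example, a slower, larger shift might read as more deliberate:
# warped = warp_gaze_shift(neutral_pitch, time_scale=1.5, amplitude_scale=1.3)
```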
Predicting speaker head nods and the effects of affective information
- IEEE Transactions on Multimedia
, 2010
"... Abstract—During face-to-face conversation, our body is continually in motion, displaying various head, gesture, and posture movements. Based on findings describing the communicative functions served by these nonverbal behaviors, many virtual agent systems have modeled them to make the virtual agent ..."
Abstract
-
Cited by 8 (5 self)
- Add to MetaCart
(Show Context)
Abstract. During face-to-face conversation, our body is continually in motion, displaying various head, gesture, and posture movements. Based on findings describing the communicative functions served by these nonverbal behaviors, many virtual agent systems have modeled them to make the virtual agent look more effective and believable. One channel of nonverbal behavior that has received less attention is head movement, despite the important functions it serves. The goal of this work is to build a domain-independent model of speaker head movements that could be used to generate head movements for virtual agents. In this paper, we present a machine learning approach for learning models of head movements by focusing on when speaker head nods should occur, and conduct evaluation studies that compare the nods generated by this work to our previous approach of using handcrafted rules [1]. To learn patterns of speaker head nods, we use a gesture corpus and rely on the linguistic and affective features of the utterance. We describe the feature selection process and training process for learning hidden Markov models and compare the results of the learned models under varying conditions. The results show that we can predict speaker head nods with high precision (.84) and recall (.89) rates even without a deep representation of the surface text, and that using affective information can help improve the prediction of the head nods (precision: .89, recall: .90). The evaluation study shows that the nods generated by the machine learning approach are perceived to be more natural in terms of nod timing than the nods generated by the rule-based approach. Index Terms: Embodied conversational agents, emotion, head nods, machine learning, nonverbal behaviors, virtual agents.
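To make the modeling step concrete, here is a minimal sketch, assuming the hmmlearn library, of one common way to use hidden Markov models for this kind of binary sequence decision: fit one HMM to feature windows that end in a nod and one to windows that do not, then label a new window by whichever model assigns it the higher log-likelihood. The feature extraction, window sizes, and model settings are assumptions and not the configuration reported in the paper.

```python
# Minimal HMM sketch for nod prediction (assumes hmmlearn; settings are
# illustrative, not the paper's configuration).
import numpy as np
from hmmlearn import hmm

def train_nod_models(nod_windows, no_nod_windows, n_states=3, seed=0):
    """Each argument is a list of (frames x features) numpy arrays."""
    def fit(windows):
        X = np.concatenate(windows)
        lengths = [len(w) for w in windows]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=50, random_state=seed)
        model.fit(X, lengths)
        return model
    return fit(nod_windows), fit(no_nod_windows)

def predict_nod(window, nod_model, no_nod_model):
    """Label a feature window as a nod if the nod HMM explains it better."""
    return nod_model.score(window) > no_nod_model.score(window)
```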
Bossy or wimpy: expressing social dominance by combining gaze and linguistic behaviors
- In Intelligent Virtual Agents
, 2010
"... Abstract. This paper examines the interaction of verbal and nonverbal information for conveying social dominance in intelligent virtual agents (IVAs). We expect expressing social dominance to be useful in applications related to persuasion and motivation; here we simply test whether we can affect us ..."
Abstract
-
Cited by 6 (1 self)
- Add to MetaCart
(Show Context)
Abstract. This paper examines the interaction of verbal and nonverbal information for conveying social dominance in intelligent virtual agents (IVAs). We expect expressing social dominance to be useful in applications related to persuasion and motivation; here we simply test whether we can affect users' perceptions of social dominance using procedurally generated conversational behavior. Our results replicate previous findings that gaze behaviors affect dominance perceptions, and provide new results showing that, in our experiments, the linguistic expression of disagreeableness has a significant effect on dominance perceptions while extraversion does not.
Generating nonverbal signals for a sensitive artificial listener
- in Verbal and Nonverbal Communication Behaviours
, 2007
"... Abstract. In the Sensitive Artificial Listener project research is performed with the aim to design an embodied agent that not only generates the appropriate nonverbal behaviors that accompany speech, but that also displays verbal and nonverbal behaviors during the production of speech by its conver ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
Abstract. The Sensitive Artificial Listener project aims to design an embodied agent that not only generates the appropriate nonverbal behaviors that accompany its own speech, but also displays verbal and nonverbal behaviors while its conversational partner is speaking. Apart from the many embodied-agent applications in which natural interaction between agent and human partner requires this behavior, the results of this project are also meant to play a role in research on emotional behavior during conversations. In this paper, our research and implementation efforts in this project are discussed and illustrated with examples of experiments, research approaches, and interfaces in development.
Learning models of speaker head nods with affective information
- 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops
, 2009
"... During face-to-face conversation, the speaker’s head is continually in motion. These movements serve a variety of important communicative functions, and may also be influenced by our emotions. The goal for this work is to build a domain-independent model of speaker’s head movements and investigate t ..."
Abstract
-
Cited by 5 (3 self)
- Add to MetaCart
(Show Context)
During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions and may also be influenced by our emotions. The goal of this work is to build a domain-independent model of speaker head movements and investigate the effect of using affective information during the learning process. Once the model is learned, it can later be used to generate head movements for virtual agents. In this paper, we describe our machine-learning approach to predicting speaker head nods using an annotated corpus of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, the training process, and the comparison of results of the learned models under varying conditions. The results show that using affective information helps predict head nods better than when no affective information is used.
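The with/without-affect comparison the abstract implies could be set up along the lines of the sketch below, which appends a one-hot affect label to otherwise identical per-frame feature vectors and then compares precision and recall of the resulting nod predictions. Function names and feature layout are illustrative assumptions, not the authors' code.

```python
# Sketch of a feature ablation with and without affective information.
# Names and feature layout are assumptions for illustration.
from typing import Optional, Tuple
import numpy as np
from sklearn.metrics import precision_score, recall_score

def build_features(linguistic: np.ndarray,
                   affect_labels: Optional[np.ndarray] = None) -> np.ndarray:
    """linguistic: (frames x k) features; affect_labels: (frames,) int codes."""
    if affect_labels is None:
        return linguistic
    n_classes = int(affect_labels.max()) + 1
    one_hot = np.eye(n_classes)[affect_labels]       # per-frame affect as one-hot
    return np.hstack([linguistic, one_hot])

def report(gold_nods: np.ndarray, predicted_nods: np.ndarray) -> Tuple[float, float]:
    """Precision and recall of binary per-frame nod predictions."""
    return precision_score(gold_nods, predicted_nods), recall_score(gold_nods, predicted_nods)
```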
Knowing who's boss: fMRI and ERP investigations of social dominance perception
- Group Processes & Intergroup Relations
, 2008
"... Humans use facial cues to convey social dominance and submission. Despite the evolutionary importance of this social ability, how the brain recognizes social dominance from the face is unknown. We used event-related brain potentials (ERP) and functional magnetic resonance imaging (fMRI) to examine t ..."
Abstract
-
Cited by 5 (1 self)
- Add to MetaCart
(Show Context)
Humans use facial cues to convey social dominance and submission. Despite the evolutionary importance of this social ability, how the brain recognizes social dominance from the face is unknown. We used event-related brain potentials (ERP) and functional magnetic resonance imaging (fMRI) to examine the neural mechanisms underlying social dominance perception from facial cues. Participants made gender judgments while viewing aggression-related facial expressions as well as facial postures conveying dominance or submission. ERP evidence indicates that the perception of dominance from aggression-related emotional expressions occurs early in neural processing, while the perception of social dominance from facial postures arises later. Brain imaging results show that activity in the fusiform gyrus, superior temporal gyrus, and lingual gyrus is associated with the perception of social dominance from facial postures, and that the magnitude of the neural response in these regions differentiates between perceived dominance and perceived submissiveness.
The relation between gaze behavior and the attribution of emotion: An empirical study
- In Intelligent Virtual Agents
, 2008
"... Abstract. Real-time virtual humans are less believable than hand-animated characters, particularly in the way they perform gaze. In this paper, we provide the results of an empirical study that explores an observer’s attribution of emotional state to gaze. We have taken a set of low-level gaze behav ..."
Abstract
-
Cited by 4 (2 self)
- Add to MetaCart
(Show Context)
Abstract. Real-time virtual humans are less believable than hand-animated characters, particularly in the way they perform gaze. In this paper, we provide the results of an empirical study that explores an observer's attribution of emotional state to gaze. We took a set of low-level gaze behaviors culled from the nonverbal behavior literature, combined these behaviors based on a dimensional model of emotion, and then generated animations of these behaviors using our gaze model based on the Gaze Warping Transformation (GWT) [9], [10]. Subjects then judged the animations displaying these behaviors. The results, while preliminary, demonstrate that the emotional state attributed to gaze behaviors can be predicted using a dimensional model of emotion, and show the utility of the GWT gaze model in performing bottom-up behavior studies.
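One way to picture the dimensional-model prediction step described here is the toy lookup below: given a target point in a pleasure/arousal/dominance space, choose the predefined low-level gaze behavior whose assumed coordinates lie closest to it. The behavior labels and coordinates are invented for illustration and do not come from the paper.

```python
# Hypothetical nearest-prototype lookup in a PAD emotion space.
# Behavior labels and coordinates are invented for illustration.
import numpy as np

BEHAVIOR_PROTOTYPES = {
    "averted_gaze_head_down":  (-0.5, -0.3, -0.7),
    "direct_gaze_head_raised": ( 0.2,  0.4,  0.8),
    "darting_gaze":            (-0.3,  0.7, -0.4),
    "relaxed_direct_gaze":     ( 0.6, -0.2,  0.3),
}

def nearest_behavior(target_pad):
    """Return the behavior whose prototype is closest to the target PAD point."""
    target = np.asarray(target_pad, dtype=float)
    return min(BEHAVIOR_PROTOTYPES,
               key=lambda name: np.linalg.norm(target - np.asarray(BEHAVIOR_PROTOTYPES[name])))

# e.g. nearest_behavior((-0.4, -0.2, -0.6)) -> "averted_gaze_head_down"
```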
Relations between Facial Display, Eye Gaze and Head Tilt: Dominance Perception Variations of Virtual Agents
"... In this paper, we focus on facial displays, eye gaze and head tilts to express social dominance. In particular, we are interested in the interaction of different non-verbal cues. We present a study which systematically varies eye gaze and head tilts for five basic emotions and a neutral state using ..."
Abstract
-
Cited by 4 (1 self)
- Add to MetaCart
(Show Context)
In this paper, we focus on facial displays, eye gaze, and head tilts to express social dominance. In particular, we are interested in the interaction of different non-verbal cues. We present a study which systematically varies eye gaze and head tilts for five basic emotions and a neutral state using our own graphics and animation engine. The resulting images are then presented via a web-based interface to a large number of subjects, who are asked to attribute dominance values to the character shown in the images. First, we analyze how dominance ratings are influenced by the conveyed emotional facial expression. Further, we investigate how gaze direction and head pose influence dominance perception depending on the displayed emotional state.
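For a sense of the scale of such a systematically varied stimulus set, the snippet below enumerates every combination of emotion, gaze direction, and head tilt; the specific levels listed are assumptions, not the levels used in the study.

```python
# Enumerate an illustrative full-factorial stimulus grid.
# The level names are assumptions, not those used in the study.
from itertools import product

EMOTIONS = ["joy", "anger", "fear", "sadness", "surprise", "neutral"]
GAZE_DIRECTIONS = ["direct", "averted_left", "averted_right", "down"]
HEAD_TILTS = ["raised", "level", "lowered"]

stimuli = [{"emotion": e, "gaze": g, "head_tilt": t}
           for e, g, t in product(EMOTIONS, GAZE_DIRECTIONS, HEAD_TILTS)]
print(len(stimuli), "images to rate for perceived dominance")  # 72 combinations
```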