Results 1 - 10 of 105
Chat Circles, 1999
Cited by 145 (11 self)
Although current online chat environments provide new opportunities for communication, they are quite constrained in their ability to convey many important pieces of social information, ranging from the number of participants in a conversation to the subtle nuances of expression that enrich face-to-face speech. In this paper we present Chat Circles, an abstract graphical interface for synchronous conversation. Here, presence and activity are made manifest by changes in color and form, proximity-based filtering intuitively breaks large groups into conversational clusters, and the archives of a conversation are made visible through an integrated history interface. Our goal in this work is to create a richer environment for online discussions.
Keywords: chatroom, conversation, social visualization, turn-taking, graphical history, Internet, World Wide Web
PeopleGarden: Creating Data Portraits for Users
- Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology (UIST '99), ACM Press, 1999
Cited by 83 (5 self)
Many on-line interaction environments have a large number of users. It is difficult for the participants, especially new ones, to form a clear mental image about those with whom they are interacting. How can we compactly convey information about these participants to each other? We propose the data portrait, a novel graphical representation of users based on their past interactions. Data portraits can inform users about each other and the overall social environment. We use a flower metaphor for creating individual data portraits, and a garden metaphor for combining these portraits to represent an on-line environment. We will review previous work in visualizing both individuals and groups. We will then describe our visualizations, explain how to create them, and show how they can be used to address user questions. KEYWORDS: Information visualization, data portraits, user-centered visualization, interaction context.
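The flower metaphor above maps a user's interaction history onto visual parameters. A minimal sketch of one plausible mapping, with illustrative names and constants (the actual PeopleGarden encoding may differ), turns each posting into a petal whose color saturation fades with age:

```python
# Hypothetical sketch of a PeopleGarden-style data portrait: each posting
# becomes a petal; petal placement encodes ordering and a saturation value
# fades with the post's age. Names and mappings here are assumptions.
def flower_petals(post_ages_days, max_age=30.0):
    """Map a user's posts to petal descriptors (angle, saturation)."""
    n = len(post_ages_days)
    petals = []
    for i, age in enumerate(post_ages_days):
        angle = 360.0 * i / max(n, 1)               # spread petals evenly
        saturation = max(0.0, 1.0 - age / max_age)  # older posts fade
        petals.append({"angle": angle, "saturation": round(saturation, 2)})
    return petals
```

A "garden" view would then lay out one such flower per user, so the overall environment can be read at a glance.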
Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous
- In Autonomous Agents and Multi-Agent Systems, 1999
Cited by 77 (7 self)
Although avatars may resemble communicative interface agents, they have for the most part not profited from recent research into autonomous embodied conversational systems. In particular, even though avatars function within conversational environments (for example, chat or games), and even though they often resemble humans (with a head, hands, and a body), they are incapable of representing the kinds of knowledge that humans have about how to use the body during communication. Humans, however, do make extensive use of the visual channel for interaction management, where many subtle and even involuntary cues are read from stance, gaze and gesture. We argue that the modeling and animation of such fundamental behavior is crucial for the credibility and effectiveness of the virtual interaction in chat. By treating the avatar as a communicative agent, we propose a method to automate the animation of important communicative behavior, deriving from work in conversation and discourse theory. BodyChat is a system that allows users to communicate via text while their avatars automatically animate attention, salutations, turn taking, back-channel feedback and facial expression. An evaluation shows that users found an avatar with autonomous conversational behaviors to be more natural than avatars whose behaviors they controlled, and to increase the perceived expressiveness of the conversation. Interestingly, users also felt that avatars with autonomous communicative behaviors provided a greater sense of user control.
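The automation described here amounts to rules that map conversational state to nonverbal behavior. A minimal rule-based sketch in this spirit, with an illustrative state and behavior vocabulary that is not BodyChat's actual one:

```python
# A minimal rule-based sketch in the spirit of BodyChat: map conversational
# state to automatic nonverbal behaviors. The state names and behavior
# names are illustrative assumptions, not the system's actual vocabulary.
def avatar_behaviors(state):
    """Return nonverbal behaviors to animate for a conversational state."""
    rules = {
        "partner_approaches": ["glance", "smile"],          # salutation
        "partner_speaking":   ["gaze_at_partner", "nod"],   # back-channel
        "taking_turn":        ["look_away", "begin_gesture"],  # turn taking
        "yielding_turn":      ["gaze_at_partner"],
    }
    return rules.get(state, ["idle"])
```

The point of the design is that the user types text while rules like these fire automatically, rather than the user steering each animation by hand.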
Performance-Driven Hand-Drawn Animation, 2000
Cited by 44 (2 self)
We present a novel method for generating performance-driven, "hand-drawn" animation in real-time. Given an annotated set of hand-drawn faces for various expressions, our algorithm performs multi-way morphs to generate real-time animation that mimics the expressions of a user. Our system consists of a vision-based tracking component and a rendering component. Together, they form an animation system that can be used in a variety of applications, including teleconferencing, multi-user virtual worlds, compressed instructional videos, and consumer-oriented animation kits. This paper describes our algorithms in detail and illustrates the potential for this work in a teleconferencing application. Experience with our implementation suggests that there are several advantages to our hand-drawn characters over other alternatives: (1) flexibility of animation style; (2) increased compression of expression information; and (3) masking of errors made by the face tracking system that are distracting...
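The multi-way morph at the core of this system can be understood as a weighted blend of corresponding control points across the hand-drawn key expressions, with weights driven by the face tracker. A pure-Python sketch under that reading (the point data and function names are illustrative):

```python
# Sketch of the multi-way morph idea: blend corresponding control points
# from several hand-drawn key expressions using tracker-derived weights.
# The 2D points here are made up; the real system morphs full drawings.
def blend_points(keyframes, weights):
    """Weighted blend of corresponding (x, y) control points.

    keyframes: list of point lists, one per key expression
    weights: blending weights, assumed to sum to 1
    """
    n_pts = len(keyframes[0])
    blended = []
    for p in range(n_pts):
        x = sum(w * kf[p][0] for kf, w in zip(keyframes, weights))
        y = sum(w * kf[p][1] for kf, w in zip(keyframes, weights))
        blended.append((x, y))
    return blended
```

Re-running the blend each frame with weights inferred from the user's tracked expression yields real-time animation that mimics the performer.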
Living Hand to Mouth: Psychological Theories about Speech and Gesture in Interactive Dialogue Systems
- In AAAI99 Fall Symposium on Psychological Models of Communication in Collaborative Systems, 1999
Cited by 44 (1 self)
In this paper we discuss the application of aspects of a psychological theory about the relationship between speech and gesture to the implementation of interactive dialogue systems. We first lay out some uncontroversial facts about the interaction of speech and gesture in conversation and describe some psychological theories put forth to explain those data, settling on one theory as being the most interesting for interactive dialogue systems. We then lay out our implementation of an interactive dialogue system that is informed by the theory, concentrating on two particular claims of the theory: that gesture and speech reflect a common conceptual source, and that the content and form of gesture are tuned to the communicative context and the actor's communicative intentions. We compare our work to some other similar interactive systems, and conclude with some thoughts about how both implementation and theory can benefit from this kind of close partnership.
Video Object Annotation, Navigation, and Composition
Cited by 43 (1 self)
We explore the use of tracked 2D object motion to enable novel approaches to interacting with video. These include moving annotations, video navigation by direct manipulation of objects, and creating an image composite from multiple video frames. Features in the video are automatically tracked and grouped in an off-line preprocess that enables later interactive manipulation. Examples of annotations include speech and thought balloons, video graffiti, path arrows, video hyperlinks, and schematic storyboards. We also demonstrate a direct-manipulation interface for random frame access using spatial constraints, and a drag-and-drop interface for assembling still images from videos. Taken together, our tools can be employed in a variety of applications including film and video editing, visual tagging, and authoring rich media such as hyperlinked video. ACM Classification: H5.2 [Information interfaces and presentation]
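A moving annotation of the kind described reduces to re-anchoring the annotation each frame against the object's tracked position. A small sketch with assumed data shapes (the paper's tracker produces richer feature groups than a single point per frame):

```python
# Sketch of a moving annotation: given per-frame tracked object positions,
# place a speech balloon at a fixed offset from the object in each frame.
# The track data and offset are illustrative assumptions.
def balloon_positions(track, offset=(20, -30)):
    """Compute a per-frame annotation anchor from an object's track.

    track: dict mapping frame_index -> (x, y) object position
    offset: balloon displacement from the object, in pixels
    """
    return {f: (x + offset[0], y + offset[1]) for f, (x, y) in track.items()}
```

Because the tracking is done in an off-line preprocess, lookups like this are cheap enough to support the interactive manipulation the paper targets.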
Real-Time Virtual Humans
- In Proc. 5th Pacific Conference on Computer Graphics and Applications, 1997
Cited by 35 (1 self)
Building Expression into Virtual Characters
- Proc. Eurographics '06, State of the Art Report (STAR), 2006
Cited by 25 (2 self)
Virtual characters are an important part of many 3D graphical simulations. In entertainment or training applications, virtual characters might be one of the main mechanisms for creating and developing content and scenarios. In such applications the user may need to interact with a number of different characters that need to invoke specific responses in the user, so that the user interprets the scenario in the way that the designer intended. Whilst representations of virtual characters have come a long way in recent years, interactive virtual characters tend to be a bit “wooden” with respect to their perceived behaviour. In this STAR we give an overview of work on expressive virtual characters. In particular, we assume that a virtual character representation is already available, and we describe a variety of models and methods that are used to give the characters more “depth” so that they are less wooden and more plausible. We cover models of individual characters’ emotion and personality, models of interpersonal behaviour and methods for generating expression. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Virtual Reality
Virtual humans for animation, ergonomics and simulation
- Proc. Pacific Graphics '97, 1997
Cited by 23 (2 self)