Results 1 - 10 of 367
Face Transfer with Multilinear Models
- SIGGRAPH
, 2005
Cited by 145 (3 self)
Face Transfer is a method for mapping videorecorded performances of one individual to facial animations of another. It extracts visemes (speech-related mouth articulations), expressions, and three-dimensional (3D) pose from monocular video or film footage. These parameters are then used to generate and drive a detailed 3D textured face mesh for a target identity, which can be seamlessly rendered back into target footage. The underlying face model automatically adjusts for how the target performs facial expressions and visemes. The performance data can be easily edited to change the visemes, expressions, pose, or even the identity of the target—the attributes are separably controllable. This supports ...
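The "separably controllable" attributes described in this abstract come from a multilinear (tensor) face model, in which each attribute occupies its own mode of a core tensor and is driven by an independent weight vector. The sketch below is purely illustrative, not the paper's implementation: the dimensions, the random core tensor, and the function name `synthesize` are all assumptions; in the actual system the core tensor is learned from a corpus of face scans.

```python
import numpy as np

# Hypothetical dimensions: V mesh coordinates, I identities, E expressions.
# In the real system the core tensor is learned from scanned face data;
# a random tensor stands in here purely for illustration.
V, I, E = 9, 4, 3
rng = np.random.default_rng(0)
core = rng.standard_normal((V, I, E))

def synthesize(core, w_id, w_expr):
    """Evaluate a multilinear model: contract the core tensor with one
    weight vector per attribute mode, leaving only the vertex mode."""
    return np.einsum('vie,i,e->v', core, w_id, w_expr)

w_id = rng.standard_normal(I)    # identity coefficients
w_expr = rng.standard_normal(E)  # expression/viseme coefficients
verts = synthesize(core, w_id, w_expr)
print(verts.shape)  # (9,)
```

Because each attribute enters through its own mode, editing one weight vector (say, the expression) leaves the others untouched, which is what makes identity, expression, and viseme independently editable.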
Embodied Agents for Multi-party Dialogue in Immersive Virtual Worlds
, 2001
Cited by 121 (17 self)
We present a model of dialogue for embodied virtual agents that can communicate with multiple (human and virtual) agents in a multi-modal setting, including face-to-face spoken and nonverbal communication as well as radio interaction, spanning multiple conversations in support of an extended, complex task.
Intelligent tutoring systems with conversational dialogue
- AI Magazine
, 2001
Cited by 108 (28 self)
This article presents the tutoring systems that we have been developing. AUTOTUTOR is a conversational agent, with a talking head, that helps college students learn about computer literacy. ANDES, ATLAS, and WHY2 help adults learn about physics. Instead of being mere information-delivery systems, our systems help students actively construct knowledge through conversations.
Interactive pedagogical drama
- In Proceedings of the International Conference on Autonomous Agents
, 2000
Cited by 106 (17 self)
This paper describes an agent-based approach to realizing interactive pedagogical drama. Characters choose their actions autonomously, while director and cinematographer agents manage the action and its presentation in order to maintain story structure, achieve pedagogical goals, and present the dynamic story so as to achieve the best dramatic effect. Artistic standards must be maintained while permitting substantial variability in the story scenario. To achieve these objectives, scripted dialog is deconstructed into elements that are portrayed by agents with emotion models. Learners influence how the drama unfolds by controlling the intentions of one or more characters, who then behave in accordance with those intentions. Interactions between characters create opportunities to move the story in pedagogically useful directions, which the automated director exploits. This approach is realized in the multimedia title Carmen’s Bright IDEAS, an interactive health intervention designed to improve the problem solving skills of mothers of pediatric cancer patients.
Keywords: believability; communication, collaboration, and interaction of humans and agents; lifelike qualities; modeling the behavior of other agents; models of emotion, motivation, or personality; synthetic agents.
Explorations in engagement for humans and robots
- Artificial Intelligence
, 2004
Cited by 93 (18 self)
This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain, and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effects of tracking faces during an interaction. It also details an architecture for a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports findings on human participants who interacted with a robot that either did or did not perform engagement gestures. Results of the human-robot studies indicate that people direct their attention to the robot more often in interactions where these gestures are present, and they find the gestures more appropriate than their absence.
Toward a new generation of virtual humans for interactive experiences
- Intelligent Systems, IEEE
, 2002
Cited by 89 (44 self)
Imagine yourself as a young lieutenant in the US Army on your first peacekeeping mission. You must help another group, called Eagle1-6, inspect a suspected weapons cache. You arrive at a rendezvous point, anxious to proceed with the mission, only to see your platoon sergeant looking upset as smoke rises from one of your platoon’s vehicles and a civilian car. A seriously injured child lies on the ground, surrounded by a distraught woman and a medic from your team. You ask what happened and your sergeant reacts defensively. He casts an angry glance at the mother and says, “They rammed into us, sir. They just shot out from the side street and our driver couldn’t see them.” Before you can think, an urgent radio call breaks in: “This is Eagle1-6. Where are you guys? Things are heating up. We need you here now!” From the side street, a CNN camera team appears. What do you do now, lieutenant? Interactive virtual worlds provide a powerful ...
Creating Interactive Virtual Humans: Some Assembly Required
- IEEE Intelligent Systems
, 2002
Task-Oriented Collaboration with Embodied Agents in Virtual Worlds
, 2000
Cited by 86 (15 self)
We are working toward animated agents that can collaborate with human students in virtual worlds. The agent's objective is to help students learn to perform physical, procedural tasks, such as operating and maintaining equipment. As in most of the previous research on task-oriented dialogues, the agent (computer) serves as an expert that can provide guidance to a human novice. Research on such dialogues dates back more than twenty years (Deutsch 1974), and the subject remains an active research area (Allen et al. 1996; Lochbaum 1994; Walker 1996). However, most of that research has focused solely on verbal dialogues, even though the earliest studies clearly showed the ubiquity of nonverbal communication in human task-oriented dialogues (Deutsch 1974). To allow a wider variety of interactions among agents and human students, we use virtual reality (Durlach and Mavor 1995); agents and students cohabit a three-dimensional, interactive, simulated mock-up of the student's ...
Where to look: a study of human-robot engagement
- In Proceedings of Intelligent User Interfaces
, 2004
Cited by 84 (4 self)
This paper reports on a study of human subjects with a robot designed to mimic human conversational gaze behavior in collaborative conversation. The robot and the human subject together performed a demonstration of an invention created at our laboratory; the demonstration lasted 3 to 3.5 minutes. We briefly discuss the robot architecture and then focus the paper on a study of the effects of the robot operating in two different conditions. We offer some conclusions, based on the study, about the importance of engagement for 3D IUIs. We will present video clips of the subjects' interactions with the robot at the conference.
Toward Virtual Humans
- AI Magazine
, 2006
Cited by 81 (18 self)
This paper describes the virtual humans developed as part of the Mission Rehearsal Exercise project, a virtual-reality-based training system. This project is an ambitious exercise in integration, both in the sense of integrating technology with entertainment-industry content and in that we have joined a number of component technologies that had not been integrated before. This integration has not only raised new research issues, but it has also suggested some new approaches to difficult problems. We describe the key capabilities of the virtual humans, including task representation and reasoning, natural language dialogue, and emotion reasoning, and show how these capabilities are integrated to provide more human-level intelligence than would otherwise be possible.