Results 1 - 10 of 161
Conversation and coordinative structures
- Topics in Cognitive Science
"... People coordinate body postures and gaze patterns during conversation. We review literature showing that (1) action embodies cognition, (2) postural coordination emerges spontaneously when two people converse, (3) gaze patterns influence postural coordination, (4) gaze coordination is a function of ..."
Abstract
-
Cited by 31 (4 self)
- Add to MetaCart
(Show Context)
People coordinate body postures and gaze patterns during conversation. We review literature showing that (1) action embodies cognition, (2) postural coordination emerges spontaneously when two people converse, (3) gaze patterns influence postural coordination, (4) gaze coordination is a function of common ground knowledge and visual information that conversants believe they share, and (5) gaze coordination is causally related to mutual understanding. We then consider how coordination, generally, can be understood as temporarily coupled neuromuscular components that function as a collective unit known as a coordinative structure in the motor control literature. We speculate that the coordination of gaze and body sway found in conversation may be understood as a cross-person coordinative structure that embodies the goals of the joint action system.
Effects of anticipatory action on human-robot teamwork efficiency, fluency, and perception of team
- in Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI '07)
, 2007
"... A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we propose an adaptiv ..."
Abstract
-
Cited by 28 (0 self)
- Add to MetaCart
(Show Context)
A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we propose an adaptive action selection mechanism for a robotic teammate, making anticipatory decisions based on the confidence of their validity and their relative risk. We predict an improvement in task efficiency and fluency compared to a purely reactive process. We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team’s fluency and success. By way of explanation, we propose a number of fluency metrics that differ significantly between the two study groups.
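The selection rule sketched in this abstract, weighing confidence in an anticipated action's validity against its relative risk, can be read as an expected-cost comparison. Below is a minimal Python sketch of that idea; the candidate actions, the numeric values, and the specific cost rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: pick the anticipatory action whose expected cost of being
# wrong is lowest, and fall back to reactive behavior if none beats it.
# All names and numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    confidence: float  # estimated probability the anticipated action is valid
    risk: float        # cost incurred if the anticipation turns out wrong

def select_action(candidates, reactive_cost=1.0):
    """Return the best anticipatory action, or fall back to reacting.

    Expected cost of anticipating = (1 - confidence) * risk; anticipation
    is chosen only when that beats the assumed cost of waiting to react.
    """
    best = min(candidates, key=lambda c: (1 - c.confidence) * c.risk)
    if (1 - best.confidence) * best.risk < reactive_cost:
        return best.name
    return "wait-and-react"

if __name__ == "__main__":
    options = [
        Candidate("pre-fetch next part", confidence=0.9, risk=2.0),
        Candidate("move to handoff pose", confidence=0.6, risk=5.0),
    ]
    print(select_action(options))  # -> pre-fetch next part
```

Under this toy rule, a high-confidence, low-risk anticipation is taken early, while risky guesses degrade gracefully to the purely reactive baseline the study compares against.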
How Social Are Task Representations?
"... ABSTRACT—The classical Simon effect shows that actions are carried out faster if they spatially correspond to the stimulus signaling them. Recent studies revealed that this is the case even when the two actions are carried out by different people; this finding has been taken to imply that task repre ..."
Abstract
-
Cited by 27 (9 self)
- Add to MetaCart
(Show Context)
ABSTRACT—The classical Simon effect shows that actions are carried out faster if they spatially correspond to the stimulus signaling them. Recent studies revealed that this is the case even when the two actions are carried out by different people; this finding has been taken to imply that task representations are socially shared. In work described here, we found that the "interactive" Simon effect occurs only if actor and coactor are involved in a positive relationship (induced by a friendly-acting, cooperative confederate), but not if they are involved in a negative relationship (induced by an intimidating, competitive confederate). This result suggests that agents can represent self-generated and other-generated actions separately, but tend to relate or integrate these representations if the personal relationship between self and other has a positive valence.

Tasks play a crucial role in people's lives: People earn their salary by carrying out tasks, and researchers give tasks to the participants of their studies to investigate cognitive processes. Surprisingly, however, very little is known about how people cognitively represent the tasks they pursue. Only recently has the increasing empirical interest in cognitive-control processes generated findings that shed some light on task representations. Particularly important for present purposes, some of these findings have been taken to suggest that task representations are fundamentally social and shared among jointly acting individuals (for overviews, see Knoblich & Sebanz, 2006; Sebanz, Bekkering, & Knoblich, 2006). This claim receives its strongest support from studies on an apparently social version of the classical Simon task. The standard Simon effect is observed when people carry out spatially defined responses to nonspatial stimulus features, the location of which varies randomly.
Prediction in joint action: What, when, and where
- Topics in Cognitive Science
, 2009
"... Drawing on recent findings in the cognitive and neurosciences, this article discusses how people manage to predict each other’s actions, which is fundamental for joint action. We explore how a common coding of perceived and performed actions may allow actors to predict the what, when, and where of o ..."
Abstract
-
Cited by 27 (2 self)
- Add to MetaCart
(Show Context)
Drawing on recent findings in the cognitive and neurosciences, this article discusses how people manage to predict each other's actions, which is fundamental for joint action. We explore how a common coding of perceived and performed actions may allow actors to predict the what, when, and where of others' actions. The "what" aspect refers to predictions about the kind of action the other will perform and to the intention that drives the action. The "when" aspect is critical for all joint actions requiring close temporal coordination. The "where" aspect is important for the online coordination of actions because actors need to effectively distribute a common space. We argue that although common coding of perceived and performed actions alone is not sufficient to enable one to engage in joint action, it provides a representational platform for integrating the actions of self and other. The final part of the paper considers links between lower-level processes like action simulation and higher-level processes like verbal communication and mental state attribution that have previously been the focus of joint action research.
Perspective taking: An organizing principle for learning in human-robot interaction
- in Proc. of the 21st National Conference on Artificial Intelligence (AAAI)
, 2006
"... The ability to interpret demonstrations from the perspective of the teacher plays a critical role in human learning. Robotic systems that aim to learn effectively from human teachers must similarly be able to engage in perspective taking. We present an integrated architecture wherein the robot’s cog ..."
Abstract
-
Cited by 25 (0 self)
- Add to MetaCart
The ability to interpret demonstrations from the perspective of the teacher plays a critical role in human learning. Robotic systems that aim to learn effectively from human teachers must similarly be able to engage in perspective taking. We present an integrated architecture wherein the robot’s cognitive functionality is organized around the ability to understand the environment from the perspective of a social partner as well as its own. The performance of this architecture on a set of learning tasks is evaluated against human data derived from a novel study examining the importance of perspective taking in human learning. Perspective taking, both in humans and in our architecture, focuses the agent’s attention on the subset of the problem space that is important to the teacher. This constrained attention allows the agent to overcome ambiguity and incompleteness that can often be present in human demonstrations and thus learn what the teacher intends to teach.
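The architecture's organizing idea, restricting the learner's attention to the subset of the scene the teacher can perceive, can be pictured as a visibility filter. The following Python sketch assumes a planar scene and a simple field-of-view test; both are illustrative assumptions, not the paper's actual perceptual model.

```python
# Minimal sketch of perspective taking as attention filtering: keep only the
# objects inside the teacher's field of view. The scene layout and the FOV
# test are illustrative assumptions.
import math

def visible_to(observer_pos, observer_heading, fov_deg, objects):
    """Return the objects within the observer's horizontal field of view."""
    half_fov = math.radians(fov_deg) / 2
    ox, oy = observer_pos
    seen = {}
    for name, (x, y) in objects.items():
        bearing = math.atan2(y - oy, x - ox) - observer_heading
        # wrap the relative bearing into [-pi, pi]
        bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
        if abs(bearing) <= half_fov:
            seen[name] = (x, y)
    return seen

if __name__ == "__main__":
    scene = {"red block": (1.0, 0.2), "blue block": (-1.0, 0.0)}
    # A teacher at the origin facing +x with a 60-degree field of view
    # sees only the red block; the learner attends to that subset.
    print(visible_to((0.0, 0.0), 0.0, 60.0, scene))
```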
Cost-Based Anticipatory Action Selection for Human-Robot Fluency
"... A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work we describe a model ..."
Abstract
-
Cited by 24 (7 self)
- Add to MetaCart
A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work we describe a model for human-robot joint action, and propose an adaptive action selection mechanism for a robotic teammate, which makes anticipatory decisions based on the confidence of their validity and their relative risk. We conduct an analysis of our method, predicting an improvement in task efficiency compared to a purely reactive process. We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team’s fluency and success. By way of explanation, we raise a number of fluency metric hypotheses, and evaluate their significance between the two study conditions.
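This version of the abstract stresses that the selection mechanism is adaptive. One simple way the confidence term could be estimated online is from transition frequencies over observed human action sequences; the bigram-count model below is an illustrative assumption, not the paper's learning method.

```python
# Minimal sketch: estimate P(next action | current action) from observed
# human action sequences, to serve as the confidence term in anticipatory
# action selection. The bigram model is an illustrative assumption.
from collections import Counter, defaultdict

class TransitionModel:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, sequence):
        """Update bigram counts from one observed action sequence."""
        for current, nxt in zip(sequence, sequence[1:]):
            self.counts[current][nxt] += 1

    def confidence(self, current, anticipated):
        """Relative frequency of the anticipated action after the current one."""
        total = sum(self.counts[current].values())
        return self.counts[current][anticipated] / total if total else 0.0

if __name__ == "__main__":
    model = TransitionModel()
    model.observe(["pick", "place", "screw", "pick", "place", "inspect"])
    print(model.confidence("place", "screw"))  # 0.5 after this single trace
```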
Does the human motor system simulate Pinocchio's actions? Co-acting with a human hand versus a wooden hand in a dyadic interaction
- Psychological Science
, 2007
"... ABSTRACT—Corepresenting actions performed by conspe-cifics is essential to understanding their goals, inferring their mental states, and cooperating with them. It has recently been demonstrated that joint-action effects in a Simon task provide a good index for corepresentation. In the present study, ..."
Abstract
-
Cited by 24 (3 self)
- Add to MetaCart
(Show Context)
ABSTRACT—Corepresenting actions performed by conspecifics is essential to understanding their goals, inferring their mental states, and cooperating with them. It has recently been demonstrated that joint-action effects in a Simon task provide a good index for corepresentation. In the present study, we investigated whether corepresentation is restricted to biological agents or also occurs for nonbiological events. Participants performed a Simon task either with an image of a human hand or with a wooden analogue. The Simon-like effect emerged only when participants coacted with a biological agent. The lack of the joint-action effect when participants interacted with a wooden hand indicates that the human corepresentation system is biologically tuned.

Engaging in interactions with other individuals is a fundamental part of daily life (Sebanz, Bekkering, & Knoblich, 2006). On the motor level, joint action requires sharing representations with others, anticipating their behaviors, and coordinating one’s actions with them. But how can people integrate other people’s actions in their own motor plans? A common coding network of perceived and executed actions that has recently received support from cognitive psychology, brain imaging, and neurophysiology (Brass & Heyes, 2005; Rizzolatti & Craighero, 2004) provides a powerful solution to this problem (Prinz, 1997): When one individual observes an action made by another, a corresponding motor representation is automatically activated in the
Embodied Attention and Word Learning by Toddlers
"... Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’ ..."
Abstract
-
Cited by 17 (4 self)
- Add to MetaCart
(Show Context)
Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month-old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but they did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning.
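The frame-by-frame coding described here hinges on deciding when a single object is visually dominant in the head-camera image. A toy version of such a measure is sketched below; the per-object pixel fractions and the 2x dominance threshold are assumptions for illustration, not the study's coding scheme.

```python
# Toy measure of visual dominance: one object dominates a frame when it
# covers a much larger share of the image than any competitor. The threshold
# and the pixel-fraction inputs are illustrative assumptions.
def dominant_object(pixel_fractions, ratio=2.0):
    """Return the dominant object's name, or None if the frame is cluttered.

    pixel_fractions maps object name -> fraction of image pixels it covers.
    """
    ranked = sorted(pixel_fractions.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1:
        return ranked[0][0]
    (top_name, top_frac), (_, runner_up) = ranked[0], ranked[1]
    return top_name if top_frac >= ratio * runner_up else None

if __name__ == "__main__":
    clean_frame = {"cup": 0.30, "ball": 0.05, "book": 0.04}
    cluttered_frame = {"cup": 0.12, "ball": 0.10, "book": 0.09}
    print(dominant_object(clean_frame))      # cup: clean, uncluttered view
    print(dominant_object(cluttered_frame))  # None: no single dominant object
```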
Modulation of the action control system by social intention: unexpected social requests override preplanned action
- J. Exp
, 2009
"... Four experiments investigated the influence of a sudden social request on the kinematics of a preplanned action. In Experiment 1, participants were requested to grasp an object and then locate it within a container (unperturbed trials). On 20 % of trials, a human agent seated nearby the participant ..."
Abstract
-
Cited by 15 (2 self)
- Add to MetaCart
(Show Context)
Four experiments investigated the influence of a sudden social request on the kinematics of a preplanned action. In Experiment 1, participants were requested to grasp an object and then locate it within a container (unperturbed trials). On 20% of trials, a human agent seated near the participant unexpectedly stretched out her arm and unfolded her hand as if to ask for the object (perturbed trials). In the remaining three experiments, similar procedures were adopted except that (a) the human was replaced by a robotic agent, (b) the gesture performed by the human agent did not imply a social request, and (c) the gaze of the human agent was not available. Only when the perturbation was characterized by a social request involving a human agent were there kinematic changes to the action directed toward the target. Conversely, no effects on kinematics were evident when the perturbation was caused by the robotic agent or by a human agent performing a nonsocial gesture. These findings are discussed in the light of current theories proposed to explain the effects of social context on the control of action.
Joint-Action for Humans and Industrial Robots for Assembly Tasks
"... Abstract — This paper presents a concept of a smart working environment designed to allow true joint-actions of humans and industrial robots. The proposed system perceives its environment with multiple sensor modalities and acts in it with an industrial robot manipulator to assemble capital goods to ..."
Abstract
-
Cited by 14 (9 self)
- Add to MetaCart
(Show Context)
Abstract — This paper presents a concept of a smart working environment designed to allow true joint-actions of humans and industrial robots. The proposed system perceives its environment with multiple sensor modalities and acts in it with an industrial robot manipulator to assemble capital goods together with a human worker. In combination with the reactive behavior of the robot, safe collaboration between the human and the robot is possible. Furthermore, the system anticipates human behavior, based on knowledge databases and decision processes, ensuring an effective collaboration between the human and robot. As a proof of concept, we introduce a use case where an arm is assembled and mounted on a robot’s body.