The fundamental principle of coactive design: Interdependence must shape autonomy - In M, 2011
"... Abstract. This article presents the fundamental principle of Coactive Design, a new approach being developed to address the increasingly sophisticated roles for both people and agents in mixed human-agent systems. The fundamental principle of Coactive Design is that the underlying interdependence of ..."
Cited by 7 (1 self)
Abstract. This article presents the fundamental principle of Coactive Design, a new approach being developed to address the increasingly sophisticated roles for both people and agents in mixed human-agent systems. The fundamental principle of Coactive Design is that the underlying interdependence of participants in joint activity is a critical factor in the design of human-agent systems. To enable appropriate interaction, an understanding of the potential interdependencies among groups of humans and agents working together in a given situation should be used to shape the way agent architectures and individual agent capabilities for autonomy are designed. Increased effectiveness in human-agent teamwork hinges not merely on trying to make agents more independent through their autonomy, but also on striving to make them more capable of sophisticated interdependent joint activity with people.
Trust-Driven Interactive Visual Navigation for Autonomous Robots
"... Abstract — We describe a model of “trust ” in human-robot systems that is inferred from their interactions, and inspired by similar concepts relating to trust among humans. This computable quantity allows a robot to estimate the extent to which its performance is consistent with a human’s expectatio ..."
Cited by 3 (2 self)
Abstract — We describe a model of “trust” in human-robot systems that is inferred from their interactions and inspired by similar concepts relating to trust among humans. This computable quantity allows a robot to estimate the extent to which its performance is consistent with a human’s expectations, with respect to task demands. Our trust model drives an adaptive mechanism that dynamically adjusts the robot’s autonomous behaviors in order to improve the efficiency of the collaborative team. We illustrate this trust-driven methodology through an interactive visual robot navigation system, which is evaluated through controlled user experiments and a field demonstration using an aerial robot.
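The abstract leaves the trust model unspecified here. As a rough illustration only, the sketch below assumes a scalar trust state in [0, 1] that decays sharply when the human intervenes and grows slowly with good autonomous performance; the class name, gains, and update rule are hypothetical, not the authors' formulation.

```python
# Minimal sketch of trust-driven autonomy adjustment (illustrative only):
# a scalar trust estimate is updated from interactions and mapped to an
# autonomy setting for the navigation controller.

class TrustDrivenAutonomy:
    def __init__(self, trust=0.5, gain_up=0.05, gain_down=0.2):
        self.trust = trust          # estimated trust in [0, 1]
        self.gain_up = gain_up      # slow growth while performing well
        self.gain_down = gain_down  # sharp decay on human intervention

    def update(self, human_intervened, task_performance):
        """Raise trust when autonomous performance meets expectations;
        lower it whenever the human feels compelled to intervene."""
        if human_intervened:
            self.trust -= self.gain_down * self.trust
        else:
            self.trust += self.gain_up * task_performance * (1.0 - self.trust)
        self.trust = min(max(self.trust, 0.0), 1.0)

    def autonomy_level(self):
        """Map trust to a behavior setting, e.g. how long the robot
        commits to its own plan before deferring to the operator."""
        return self.trust
```

A navigation loop could call update() after each task segment and poll autonomy_level() to scale how aggressively the robot acts without asking for confirmation.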
Adaptive Parameter EXploration (APEX): Adaptation of Robot Autonomy from Human Participation, 2014
"... Abstract — The problem of Adaptation from Participation (AfP) aims to improve the efficiency of a human-robot team by adapting a robot’s autonomous systems and behaviors based on command-level input from a human supervisor. As a solution to AfP, the Adaptive Parameter EXploration (APEX) algorithm co ..."
Cited by 3 (3 self)
Abstract — The problem of Adaptation from Participation (AfP) aims to improve the efficiency of a human-robot team by adapting a robot’s autonomous systems and behaviors based on command-level input from a human supervisor. As a solution to AfP, the Adaptive Parameter EXploration (APEX) algorithm continuously explores the space of all possible parameter configurations for the robot’s autonomous system in an online and anytime manner. Guided by information deduced from the human’s latest intervening commands, APEX can adapt an arbitrary robot system to dynamic changes in task objectives and conditions during a session. We explore this framework within visual navigation contexts where the human-robot team is tasked with covering or patrolling multiple terrain boundaries such as coastlines and roads. We present empirical evaluations of two separate APEX-enabled systems: the first deployed on an aerial robot within a controlled environment, and the second on a wheeled robot operating within a challenging university campus setting.
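The abstract describes APEX only as online, anytime exploration of parameter configurations guided by the human's intervening commands. The sketch below shows one way such a loop could look, assuming a small population of candidate configurations and a user-supplied disagreement cost; the scoring and perturbation scheme are assumptions, not the published algorithm.

```python
import random

def perturb(params, scale=0.1):
    """Gaussian perturbation of a parameter vector (assumed real-valued)."""
    return [p + random.gauss(0.0, scale) for p in params]

def apex_step(candidates, disagreement):
    """One anytime iteration over candidate parameter configurations.
    disagreement(params) should measure how far the behavior those
    parameters produce deviates from the human's recent commands."""
    scored = sorted(candidates, key=disagreement)
    best = scored[0]
    # Replace the worst candidate with a perturbation of the best, so the
    # search keeps tracking changes in task objectives and conditions.
    scored[-1] = perturb(best)
    return scored, best

# Example: adapt two controller gains toward a hypothetical human preference.
target = [0.6, 1.0]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
cands = [[1.0, 0.5], [0.8, 0.9], [1.2, 0.3]]
for _ in range(50):
    cands, best = apex_step(cands, cost)  # 'best' is usable at any time
```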
Using On-Line Conditional Random Fields to Determine Human Intent for Peer-To-Peer Human Robot Teaming
"... Abstract — In this paper we introduce a system under development to enable humans and robots to collaborate as peers on tasks in a shared physical environment, using only implicit coordination. Our system uses Conditional Random Fields to determine the human’s intended goal. We show the effects of u ..."
Cited by 2 (2 self)
Abstract — In this paper we introduce a system under development to enable humans and robots to collaborate as peers on tasks in a shared physical environment, using only implicit coordination. Our system uses Conditional Random Fields to determine the human’s intended goal. We show the effects of using different features on accuracy and on the time to correct classification. We compare the performance of the Conditional Random Field classifiers by testing classification accuracy both with the full observation sequence and when observations are classified as they occur. We show that Conditional Random Fields work well for classifying the goal of a human in a box-pushing domain where the human can select one of three tasks. We discuss how this research fits into a larger system we are developing for peer-to-peer human-robot teams in shared-workspace interactions.
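The abstract does not give the structure of the CRFs. When a single goal label is predicted for a whole observation sequence, a conditional log-linear model reduces to a softmax over accumulated feature scores; the sketch below uses that reduction to emit a posterior over goals after every new observation. The feature function and weights are placeholders, not the paper's features.

```python
import math

def classify_online(observations, goals, features, weights):
    """Yield a posterior over goals after each observation arrives.
    features(obs, goal) -> list of feature values; weights are assumed
    to have been learned offline."""
    scores = {g: 0.0 for g in goals}
    for obs in observations:
        for g in goals:
            scores[g] += sum(w * f for w, f in zip(weights, features(obs, g)))
        m = max(scores.values())  # shift scores for numerical stability
        z = sum(math.exp(s - m) for s in scores.values())
        yield {g: math.exp(s - m) / z for g, s in scores.items()}

# Example with three hypothetical box-pushing goals and one indicator
# feature (does this observation point toward that goal's box?).
goals = ["box_a", "box_b", "box_c"]
feats = lambda obs, g: [1.0 if obs == g else 0.0]
for posterior in classify_online(["box_a", "box_a"], goals, feats, [2.0]):
    pass  # act once max(posterior.values()) clears a threshold
```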
A Policy Blending Formalism for Shared Control
"... Abstract—In shared control teleoperation, the robot assists the user in accomplishing the desired task, making teleoperation easier and more seamless. Rather than simply executing the user’s input, which is hindered by the inadequacies of the interface, the robot attempts to predict the user’s inten ..."
Cited by 2 (1 self)
Abstract—In shared control teleoperation, the robot assists the user in accomplishing the desired task, making teleoperation easier and more seamless. Rather than simply executing the user’s input, which is hindered by the inadequacies of the interface, the robot attempts to predict the user’s intent and assists in accomplishing it. In this work, we are interested in the scientific underpinnings of assistance: we propose an intuitive formalism that captures assistance as policy blending, illustrate how some of the existing techniques for shared control instantiate it, and provide a principled analysis of its main components: prediction of user intent and its arbitration with the user input. We define the prediction problem, with foundations in Inverse Reinforcement Learning, discuss simplifying assumptions that make it tractable, and test these on data from users teleoperating a robotic manipulator. We define the arbitration problem from a control-theoretic perspective, and turn our attention to what users consider good arbitration. We conduct a user study that analyzes the effect of different factors on the performance of assistance, indicating that arbitration should be contextual: it should depend on the robot’s confidence in itself and in the user, and even on the particulars of the user. Based on the study, we discuss challenges and opportunities that a robot sharing control with the user might face: adaptation to the context and the user, legibility of behavior, and the closed loop between prediction and user behavior.
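At its core, the formalism arbitrates between the user's input and the action of the robot's predicted policy. A linear blend weighted by the predictor's confidence is one simple instantiation; the confidence schedule and the cap on assistance below are assumptions for illustration, and the prediction side is omitted.

```python
import numpy as np

def blend(u_user, u_robot, confidence, max_assist=0.8):
    """Arbitrate between the user's command and the predicted action.
    Assistance grows with prediction confidence, capped so the user
    always retains some authority."""
    alpha = max_assist * float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(u_user, dtype=float) \
         + alpha * np.asarray(u_robot, dtype=float)

# Example: with low confidence, the executed command stays close to the
# user's input; with high confidence, the robot contributes most of it.
u = blend(u_user=[1.0, 0.0], u_robot=[0.0, 1.0], confidence=0.25)
```

The user-study finding that arbitration should be contextual suggests making max_assist itself a function of the robot's confidence in itself, in the user, and in the particular user.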
Coactive design: Why interdependence must shape autonomy - In Coordination, Organizations, Institutions, and Norms in Agent Systems VI, 2010
"... Abstract. This paper introduces Coactive Design as a new approach to address the increasingly sophisticated roles for people and agents in mixed human-agent systems. The basic premise of Coactive Design is that the underlying interdependence of joint activity is the critical design feature. When des ..."
Cited by 2 (2 self)
Abstract. This paper introduces Coactive Design as a new approach to address the increasingly sophisticated roles for people and agents in mixed human-agent systems. The basic premise of Coactive Design is that the underlying interdependence of joint activity is the critical design feature. When designing the capabilities that make an agent autonomous, the process should be guided by an understanding of interdependence within the joint activity. This understanding can then be used to shape the implementation of agent capabilities so as to enable appropriate interaction. The success of future human-agent teams hinges not merely on trying to make agents more autonomous, but also on striving to make them more capable of sophisticated interdependent activity.
Online Learning Techniques for Improving Robot Navigation in Unfamiliar Domains, 2010
"... Many mobile robot applications require robots to act safely and intelligently in complex unfamiliar environments with little structure and limited or unavailable human supervision. As a robot is forced to operate in an environment that it was not engineered or trained for, various aspects of its per ..."
Cited by 1 (0 self)
Many mobile robot applications require robots to act safely and intelligently in complex, unfamiliar environments with little structure and limited or unavailable human supervision. As a robot is forced to operate in an environment that it was not engineered or trained for, various aspects of its performance will inevitably degrade. Roboticists equip robots with powerful sensors and data sources to deal with uncertainty, only to discover that the robots can make only minimal use of this data and still find themselves in trouble. Similarly, roboticists develop and train their robots in representative areas, only to discover that the robots encounter new situations that are not in their experience base. Small problems resulting in mildly sub-optimal performance are often tolerable, but major failures resulting in vehicle loss or compromised human safety are not. This thesis presents a series of online algorithms to enable a mobile robot to better deal with uncertainty in unfamiliar domains in order to improve its navigational abilities, better utilize available data and resources, and reduce risk to the vehicle. We validate these algorithms through extensive testing onboard large mobile robot systems and argue how such approaches can increase the reliability and robustness of mobile robots, bringing them closer to the capabilities ...
Bandit-Based Online Candidate Selection for Adjustable Autonomy
"... Abstract In many robot navigation scenarios, the robot is able to choose between some number of operating modes. One such scenario is when a robot must decide how to trade-off online between autonomous and human tele-operation control. When little prior knowledge about the performance of each operat ..."
Cited by 1 (1 self)
Abstract. In many robot navigation scenarios, the robot is able to choose between some number of operating modes. One such scenario is when a robot must decide how to trade off online between autonomous control and human tele-operation. When little prior knowledge about the performance of each operator is available, the robot must learn online to model their abilities and take advantage of the strengths of each. We present a bandit-based online candidate selection algorithm that operates in this adjustable-autonomy setting and makes choices to optimize overall navigational performance. We justify this technique through such a scenario on logged data and demonstrate how the same technique can be used to optimize the use of high-resolution overhead data when its availability is limited.
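The abstract does not name the specific bandit algorithm, so the sketch below uses UCB1 as a stand-in: each arm is an operating mode (e.g. autonomous control or a human tele-operator), and the reward is an assumed task-level measure such as navigation progress per unit time.

```python
import math

def ucb1_select(counts, mean_rewards):
    """Pick the operating mode with the highest upper confidence bound."""
    total = sum(counts.values())
    def ucb(mode):
        if counts[mode] == 0:
            return float("inf")  # try every mode at least once
        return mean_rewards[mode] + math.sqrt(2.0 * math.log(total) / counts[mode])
    return max(counts, key=ucb)

def update(counts, mean_rewards, mode, reward):
    """Incrementally update the chosen mode's running mean reward."""
    counts[mode] += 1
    mean_rewards[mode] += (reward - mean_rewards[mode]) / counts[mode]

# Example: alternate control between two operators based on observed reward.
counts = {"autonomous": 0, "teleop": 0}
means = {"autonomous": 0.0, "teleop": 0.0}
mode = ucb1_select(counts, means)
update(counts, means, mode, reward=0.7)  # e.g. progress per unit time
```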