Results 1 - 10 of 401
Trust in automation: Designing for appropriate reliance
- Human Factors
, 2004
"... Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation ..."
Abstract - Cited by 191 (4 self)
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
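The dynamics the model describes, trust rising with observed automation performance and guiding reliance, are often illustrated computationally. Below is a minimal Python sketch; the update rates, the faster loss than gain, and the trust-versus-self-confidence reliance rule are illustrative assumptions, not the paper's model.

    def update_trust(trust, automation_succeeded, gain=0.10, loss=0.30):
        # Bounded first-order update; loss > gain reflects the common
        # finding that trust is easier to lose than to rebuild (assumption).
        target = 1.0 if automation_succeeded else 0.0
        rate = gain if automation_succeeded else loss
        return trust + rate * (target - trust)

    def rely_on_automation(trust, self_confidence):
        # A simple reliance rule: use the automation when trust in it
        # exceeds confidence in manual control.
        return trust > self_confidence

    trust = 0.5
    for outcome in (True, True, False, True):
        trust = update_trust(trust, outcome)
        print(round(trust, 3), rely_on_automation(trust, self_confidence=0.55))

Note how a single failure undoes several successes, one way appropriate reliance can lag actual automation reliability.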
Common metrics for human-robot interaction
- In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (2006), ACM
"... MD This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally ..."
Abstract - Cited by 102 (5 self)
This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.
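The toolkit itself is presented at the framework level; as a concrete illustration, here is a short Python sketch of three task-oriented metrics of the kind commonly proposed in this literature. The names and formulas are assumptions for illustration, not the paper's definitions.

    def task_effectiveness(subtasks_completed, subtasks_total):
        # Fraction of mission subtasks completed successfully.
        return subtasks_completed / subtasks_total

    def interaction_effort(interaction_seconds, mission_seconds):
        # Fraction of the mission the operator spent actively commanding.
        return interaction_seconds / mission_seconds

    def neglect_tolerance(autonomous_intervals):
        # Mean time the robot maintained acceptable performance
        # without operator input.
        return sum(autonomous_intervals) / len(autonomous_intervals)

    print(task_effectiveness(8, 10))              # 0.8
    print(interaction_effort(120.0, 600.0))       # 0.2
    print(neglect_tolerance([45.0, 60.0, 30.0]))  # 45.0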
The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task
, 2003
"... ..."
Shared understanding for collaborative control
- IEEE Transactions on Systems, Man, and Cybernetics - Part A
, 2004
"... Abstract—This paper presents results from three experiments in which human operators were teamed with a mixed-initiative robot control system to accomplish various indoor search and exploration tasks. By assessing human workload and error to-gether with overall performance, these experiments provide ..."
Abstract - Cited by 39 (9 self)
This paper presents results from three experiments in which human operators were teamed with a mixed-initiative robot control system to accomplish various indoor search and exploration tasks. By assessing human workload and error together with overall performance, these experiments provide an objective means to contrast different modes of robot autonomy and to evaluate both the usability of the interface and the effectiveness of autonomous robot behavior. The first experiment compares the performance achieved when the robot takes initiative to support human driving with the opposite case, when the human takes initiative to support autonomous robot driving. The utility of robot autonomy is shown through achievement of better performance when the robot is in the driver’s seat. The second experiment introduces a virtual three-dimensional (3-D) map representation that supports collaborative understanding of the task and environment. When used in place of video, the 3-D map reduced operator workload and navigational error. By lowering bandwidth requirements, use of the virtual 3-D interface enables long-range, non-line-of-sight communication. Results from the third experiment extend the findings of experiment 1 by showing that collaborative control can increase performance and reduce error even when the complexity of the environment is increased and workload is distributed amongst multiple operators. Index Terms—Dynamic autonomy, human–robot interaction (HRI), mixed initiative, shared control.
Designing for Flexible Interaction Between Humans and Automation: Delegation Interfaces for Supervisory Control
"... Objective: To develop a method enabling human-like, flexible supervisory control via delegation to automation. Background: Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide import ..."
Abstract - Cited by 33 (3 self)
Objective: To develop a method enabling human-like, flexible supervisory control via delegation to automation. Background: Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance. Method: We review problems with static and adaptive (as opposed to “adaptable”) automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a “level of automation” as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook® is described. Results: On the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team’s “playbook.” Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles. Conclusion: Delegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level. Application: Most applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.
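The shared hierarchical task model that delegation requires can be made concrete. The Python fragment below is an illustrative assumption about the structure, not the Playbook® implementation: the supervisor can delegate a whole play or drill down and constrain a single subtask.

    class Task:
        def __init__(self, name, subtasks=()):
            self.name = name
            self.subtasks = list(subtasks)
            self.delegated = False
            self.constraints = {}

        def delegate(self, **constraints):
            # Hand this task (and implicitly its subtree) to automation,
            # optionally with instructions on how to perform it.
            self.delegated = True
            self.constraints = constraints

    # A toy "play" for an uninhabited air vehicle mission.
    ingress = Task("ingress")
    search = Task("search_area")
    egress = Task("egress")
    mission = Task("recon_play", [ingress, search, egress])

    mission.delegate()  # delegate at the level of the whole play...
    # ...or constrain one subtask while leaving the rest to automation.
    search.delegate(pattern="lawnmower", altitude_ft=2000)

The point of the hierarchy is that "level of automation" becomes a choice the supervisor makes per task and per moment, rather than a fixed system property.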
Formal verification of human-automation interaction
- Human Factors
, 2002
"... This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The ..."
Abstract - Cited by 30 (8 self)
This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine, so as to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of the information, provided to the user via training material (e.g., a user manual), about the machine’s behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of a pilot’s interaction with an autopilot onboard a modern commercial aircraft. Keywords: automation, modeling, design of interfaces, formal methods, verification, cockpit design.
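The adequacy check at the heart of the methodology can be restated simply: model the machine as a state machine, map machine states to what the operator sees, and flag pairs of identically displayed states whose responses to the same action diverge. The toy autopilot states and the check below are a simplification for illustration, not the paper's formal machinery.

    machine = {                       # (state, action) -> next state
        ("vs_hold", "push_alt"): "alt_capture",
        ("vs_zero", "push_alt"): "vs_hold",
    }
    display = {                       # machine state -> operator's view
        "vs_hold": "VS",
        "vs_zero": "VS",              # conflated on the display
        "alt_capture": "ALT",
    }

    def find_ambiguities(machine, display):
        # Pairs of identically displayed states whose behavior diverges
        # under the same operator action: candidates for mode confusion.
        states = {s for (s, _) in machine}
        actions = {a for (_, a) in machine}
        problems = []
        for s1 in sorted(states):
            for s2 in sorted(states):
                if s1 < s2 and display[s1] == display[s2]:
                    for a in sorted(actions):
                        n1, n2 = machine.get((s1, a)), machine.get((s2, a))
                        if n1 and n2 and display[n1] != display[n2]:
                            problems.append((s1, s2, a))
        return problems

    print(find_ambiguities(machine, display))
    # [('vs_hold', 'vs_zero', 'push_alt')]: same button, same display,
    # different outcome, so the display is inadequate for this task.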
Adaptive automation: Sharing and trading of control
- In E. Hollnagel (Ed.), Handbook of cognitive task design (pp. 147–169). Mahwah, NJ: Erlbaum
, 2003
"... Function allocation is the design decision to determine which functions are to be performed by humans and which are to be performed by machines to achieve the required system goals, and it is closely related to the issue of automation. Some of the traditional strategies of function allocation includ ..."
Abstract - Cited by 28 (1 self)
Function allocation is the design decision to determine which functions are to be performed by humans and which are to be performed by machines to achieve the required system goals, and it is closely related to the issue of automation. Some of the traditional strategies of function allocation include (a) assigning each function to the most capable agent (either human or machine), (b) allocating to machines every function that can be automated, and (c) finding an allocation scheme that ensures economical efficiency. However, such “who does what” decisions are not always appropriate from human factors viewpoints. This chapter clarifies why “who does what and when” considerations are necessary, and it explains the concept of adaptive automation, in which the control of functions shifts between humans and machines dynamically, depending on environmental factors, operator workload, and performance. Who decides when the control of a function must be shifted? That is one of the most crucial issues in adaptive automation. Letting the computer be in authority may conflict with the principle of human-centered automation, which claims that the human must be maintained as the final authority over the automation. Qualitative discussions cannot solve the authority problem. This chapter proves the need for quantitative investigations with mathematical models, simulations, and experiments for a better understanding of the authority issue. Starting with the concept of function allocation, this chapter describes how the concept of adaptive automation was invented. The concept of levels of automation is used to explain interactions between humans and machines. Sharing and trading are distinguished to clarify the types of human-automation collaboration. Algorithms for implementing adaptive automation are categorized into three groups, and comparisons are made among them. Benefits and costs of adaptive automation, in relation to decision authority, trust-related issues, and human-interface design, are discussed with some examples.
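One family of invocation logic the chapter's categorization covers, shifting control based on measurements of the operator, can be illustrated with a toy trigger. The thresholds and the hysteresis scheme below are assumptions for illustration, not the chapter's algorithms.

    def adapt_control(mode, workload, high=0.75, low=0.55):
        # Trading of control: automation takes over under high measured
        # workload and hands back once workload is comfortably low. The
        # gap between thresholds (hysteresis) prevents rapid oscillation
        # of authority near a single cutoff.
        if mode == "human" and workload > high:
            return "machine"
        if mode == "machine" and workload < low:
            return "human"
        return mode

    mode = "human"
    for w in (0.4, 0.8, 0.7, 0.6, 0.5):
        mode = adapt_control(mode, w)
        print(w, mode)

Even this toy exposes the authority question the chapter raises: here the computer decides when control shifts, which is exactly what human-centered automation principles contest.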
A Flexible Delegation‐Type Interface Enhances System Performance in Human Supervision of Multiple Robots: Empirical Studies With RoboFlag
- IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, July 2005
"... Abstract—Three experiments and a computational analysis were conducted to investigate the effects of a delegation-type interface on human supervision of simulated multiple unmanned vehicles. Participants supervised up to eight robots using automated be-haviors (“plays”), manual (waypoint) control, o ..."
Abstract - Cited by 28 (5 self)
Three experiments and a computational analysis were conducted to investigate the effects of a delegation-type interface on human supervision of simulated multiple unmanned vehicles. Participants supervised up to eight robots using automated behaviors (“plays”), manual (waypoint) control, or both to capture the flag of an opponent with an equal number of robots, using a simple form of a delegation-type interface, Playbook. Experiment 1 showed that the delegation interface increased mission success rate and reduced mission completion time when the opponent “posture” was unpredictably offensive or defensive. Experiment 2 showed that performance was superior when operators could flexibly use both automated behaviors and manual control, although there was a small increase in subjective workload. Experiment 3 investigated additional dimensions of flexibility by comparing delegation interfaces to restricted interfaces. Eight interfaces were tested, varying in the level of abstraction at which robot behavior could be tasked and the level of aggregation (single or multiple robots) to which plays could be assigned. Performance was superior with flexible interfaces for four robots, but this benefit was eliminated when eight robots had to be supervised. Finally, a computational analysis using task-network modeling and Monte Carlo simulation gave results that closely paralleled the empirical data on changes in workload across interface type. The results provide initial empirical evidence for the efficacy of delegation-type interfaces in human supervision of a team of multiple autonomous robots. Index Terms—Automation, delegation, human–robot interaction, Playbook, unmanned vehicles.
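The closing computational analysis combines task-network modeling with Monte Carlo simulation. Below is a stripped-down Python sketch of that style of analysis; all arrival and service parameters are illustrative assumptions, not the paper's model.

    import random

    def simulate_utilization(n_robots, mission_s=600.0,
                             mean_interarrival_s=90.0, mean_service_s=15.0,
                             runs=1000, seed=0):
        # Each robot independently generates service demands; operator
        # utilization is estimated as busy time over mission time,
        # averaged across Monte Carlo runs (queueing effects ignored).
        rng = random.Random(seed)
        total = 0.0
        for _ in range(runs):
            busy = 0.0
            for _ in range(n_robots):
                t = rng.expovariate(1.0 / mean_interarrival_s)
                while t < mission_s:
                    busy += rng.expovariate(1.0 / mean_service_s)
                    t += rng.expovariate(1.0 / mean_interarrival_s)
            total += min(busy / mission_s, 1.0)
        return total / runs

    for n in (2, 4, 8):
        print(n, round(simulate_utilization(n), 2))

Utilization climbing with team size is the modeling counterpart of the empirical result that the flexibility benefit disappeared at eight robots.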
Complacency and bias in human use of automation: An attentional integration
- Human Factors
, 2010
"... Objective: Our aim was to review empirical stud-ies of complacency and bias in human interaction with automated and decision support systems and provide an integrated theoretical model for their explanation. Background: Automation-related complacency and automation bias have typically been considere ..."
Abstract - Cited by 26 (1 self)
Objective: Our aim was to review empirical studies of complacency and bias in human interaction with automated and decision support systems and provide an integrated theoretical model for their explanation. Background: Automation-related complacency and automation bias have typically been considered separately and independently. Methods: Studies on complacency and automation bias were analyzed with respect to the cognitive processes involved. Results: Automation complacency occurs under conditions of multiple-task load, when manual tasks compete with the automated task for the operator’s attention. Automation complacency is found in both naive and expert participants and cannot be overcome with simple practice. Automation bias results in making both omission and commission errors when decision aids are imperfect. Automation bias occurs in both naive and expert participants, cannot be prevented by training or instructions, and can affect decision making in individuals as well as in teams. While automation bias has been conceived of as a special case of decision bias, our analysis suggests that it also depends on attentional processes similar to those involved in automation-related complacency. Conclusion: Complacency and automation bias represent different manifestations of overlapping automation-induced phenomena, with attention playing a central role. An integrated model of complacency and automation bias shows that they result from the dynamic interaction of personal, situational, and automation-related characteristics. Application: The integrated model and attentional synthesis provide a heuristic framework for further research on complacency and automation bias and design options for mitigating such effects in automated and decision support systems.
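The attentional account lends itself to a small simulation: under multiple-task load the operator samples the automated task only intermittently, and a run of automation successes erodes the sampling rate, so a rare failure is likely to be missed. The sketch below is a simplification for illustration, not the authors' integrated model.

    import random

    def run_trial(n_events=200, failure_rate=0.02,
                  base_sampling=0.8, decay=0.97, seed=1):
        rng = random.Random(seed)
        p_sample, missed = base_sampling, 0
        for _ in range(n_events):
            failed = rng.random() < failure_rate
            sampled = rng.random() < p_sample
            if failed and not sampled:
                missed += 1  # automation failure goes unnoticed
            # Complacency as attention allocation: sampling decays while
            # the automation keeps succeeding, and resets only after a
            # detected failure.
            p_sample = base_sampling if (failed and sampled) else p_sample * decay
        return missed

    print(run_trial())

Because the decay here is driven by task load and reinforcement rather than effort, extra practice does not remove it, consistent with the finding that complacency persists in expert participants.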
The need for command and control instant message adaptive interfaces: Lessons learned from Tactical Tomahawk human-in-the-loop simulations
- CyberPsychology and Behavior
, 2004
"... In the recent development of a human-in-the-loop simulation test bed designed to examine human performance issues for supervisory control of the Navy’s new Tactical Tomahawk missile, measurements of operator situation awareness (SA) and workload through secondary tasking were taken through an embedd ..."
Abstract - Cited by 25 (7 self)
In the recent development of a human-in-the-loop simulation test bed designed to examine human performance issues for supervisory control of the Navy’s new Tactical Tomahawk missile, measurements of operator situation awareness (SA) and workload through secondary tasking were taken through an embedded instant messaging program. Instant message interfaces (otherwise known as “chat”), already a means of communication between Navy ships, allow researchers to query users in real time in a natural, ecological setting, and thus provide more realistic and unobtrusive measurements. However, in the course of this testing, results revealed that some subjects fixated on the real-time instant messaging secondary task instead of the primary task of missile control, leading to the overall degradation of mission performance as well as a loss of SA. While this research effort was the first to quantify command and control performance degradation as a result of instant messaging, the military has recognized that in its network-centric warfare quest, instant messaging is a critical informal communication tool, but one with associated problems. Recently a military spokesman said that managing chat in current military operations was sometimes a “nightmare” because military personnel have difficulty handling large amounts of information through chat and then synthesizing knowledge from this information. This research highlights the need for further investigation of the role of instant messaging interfaces in both task performance and situation awareness, and specifies how the associated problems could be ameliorated through adaptive display design.
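The embedded-measurement idea, chat queries doubling as a secondary task whose response latency and accuracy index workload and SA, can be sketched minimally. The class below is hypothetical, not the test bed's code.

    import time

    class ChatProbe:
        # An SA query sent over the chat channel; how long the operator
        # takes to answer, and whether the answer is right, serve as
        # unobtrusive workload and SA measures.
        def __init__(self, question, correct_answer):
            self.question = question
            self.correct_answer = correct_answer
            self.sent_at = time.monotonic()

        def score(self, answer):
            latency = time.monotonic() - self.sent_at
            return latency, answer.strip().lower() == self.correct_answer

    probe = ChatProbe("How many missiles are currently in flight?", "3")
    # ... the operator replies in the chat window ...
    latency, correct = probe.score("3")
    print(round(latency, 3), correct)

The study's cautionary result applies directly: if probes arrive too often or demand too much, the measurement channel itself becomes an attention sink.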