Results 1 - 10 of 178
A Meta-Study of Algorithm Visualization Effectiveness
"... Algorithm visualization (AV) technology graphically illustrates how algorithms work. Despite the intuitive appeal of the technology, it has failed to catch on in mainstream computer science education. Some have attributed this failure to the mixed results of experimental studies designed to substant ..."
Abstract
-
Cited by 166 (2 self)
- Add to MetaCart
Algorithm visualization (AV) technology graphically illustrates how algorithms work. Despite the intuitive appeal of the technology, it has failed to catch on in mainstream computer science education. Some have attributed this failure to the mixed results of experimental studies designed to substantiate AV technology's educational effectiveness. However, while several integrative reviews of AV technology have appeared, none has focused specifically on the software's effectiveness by analyzing this body of experimental studies as a whole. In order to better understand the effectiveness of AV technology, we present a systematic meta-study of 24 experimental studies. We pursue two separate analyses: an analysis of independent variables, in which we tie each study to a particular guiding learning theory in an attempt to determine which guiding theory has had the most predictive success; and an analysis of dependent variables, which enables us to determine which measurement techniques have been most sensitive to the learning benefits of AV technology. Our most significant finding is that how students use AV technology has a greater impact on effectiveness than what AV technology shows them. Based on our findings, we formulate an agenda for future research into AV effectiveness.
Criteria for evaluating usability evaluation methods
- International Journal of Human-Computer Interaction, 2001
"... The current variety of alternative approaches to usability evaluation methods (UEMs) designed to assess and improve usability in software systems is offset by a general lack of understanding of the capabilities and limitations of each. Practitioners need to know which methods are more effective and ..."
Abstract
-
Cited by 90 (0 self)
- Add to MetaCart
(Show Context)
The current variety of alternative approaches to usability evaluation methods (UEMs) designed to assess and improve usability in software systems is offset by a general lack of understanding of the capabilities and limitations of each. Practitioners need to know which methods are more effective and in what ways and for what purposes. However, UEMs cannot be evaluated and compared reliably because of the lack of standard criteria for comparison. In this article, we present a practical discussion of factors, comparison criteria, and UEM performance measures useful in studies comparing UEMs. In demonstrating the importance of developing appropriate UEM evaluation criteria, we offer operational definitions and possible measures of UEM performance. We highlight specific challenges that researchers and practitioners face in comparing UEMs and provide a point of departure for further discussion and refinement of the principles and techniques used to approach UEM evaluation and comparison.
New Techniques for Usability Evaluation of Mobile Systems
- International Journal of Human–Computer Studies
"... Usability evaluation of systems for mobile computers and devices is an emerging area of research. This paper presents and evaluates six techniques for evaluating the usability of mobile computer systems in laboratory settings. The purpose of these techniques is to facilitate systematic data collecti ..."
Abstract
-
Cited by 81 (10 self)
- Add to MetaCart
(Show Context)
Usability evaluation of systems for mobile computers and devices is an emerging area of research. This paper presents and evaluates six techniques for evaluating the usability of mobile computer systems in laboratory settings. The purpose of these techniques is to facilitate systematic data collection in a controlled environment and support the identification of usability problems that are experienced in mobile use. The proposed techniques involve various aspects of physical motion combined with either needs for navigation in physical space or division of attention. The six techniques are evaluated through two usability experiments in which walking in a pedestrian street was used as a reference. Each of the proposed techniques had some similarities to testing in the pedestrian street, but none of them turned out to be completely comparable to that form of field evaluation. Seating the test subjects at a table supported identification of significantly more usability problems than any of the other proposed techniques. However, a large number of the additional problems identified using this technique were categorized as cosmetic. When the amount of physical activity was increased, the test subjects also experienced a significantly increased subjective workload.
Is it worth the hassle? Exploring the added value of evaluating the usability of context-aware mobile systems in the field
- Proceedings of Mobile HCI 2004
"... Abstract. Evaluating the usability of mobile systems raises new concerns and questions, challenging methods for both lab and field evaluations. A recent lit-erature study showed that most mobile HCI research projects apply lab-based evaluations. Nevertheless, several researchers argue in favour of f ..."
Abstract
-
Cited by 72 (7 self)
- Add to MetaCart
(Show Context)
Evaluating the usability of mobile systems raises new concerns and questions, challenging methods for both lab and field evaluations. A recent literature study showed that most mobile HCI research projects apply lab-based evaluations. Nevertheless, several researchers argue in favour of field evaluations, as mobile systems are highly context-dependent. However, field-based usability studies are difficult to conduct, time-consuming, and of unknown added value. Contributing to this discussion, this paper compares the results produced by a laboratory-based and a field-based evaluation of the same context-aware mobile system on their ability to identify usability problems. Six test subjects used the mobile system in a laboratory while another six used the system in the field. The results show that the added value of conducting usability evaluations in the field is very little, and that recreating central aspects of the use context in a laboratory setting enables the identification of the same list of usability problems.
Enabling effective human-robot interaction using perspective-taking in robots
- IEEE Transactions on Systems, Man, and Cybernetics, 2005
"... Abstract—We propose that an important aspect of human–robot interaction is perspective-taking. We show how perspective-taking occurs in a naturalistic environment (astronauts working on a collaborative project) and present a cognitive architecture for performing perspective-taking called Polyscheme. ..."
Abstract
-
Cited by 67 (10 self)
- Add to MetaCart
(Show Context)
We propose that an important aspect of human–robot interaction is perspective-taking. We show how perspective-taking occurs in a naturalistic environment (astronauts working on a collaborative project) and present a cognitive architecture for performing perspective-taking called Polyscheme. Finally, we show a fully integrated system that instantiates our theoretical framework within a working robot system. Our system successfully solves a series of perspective-taking problems and uses the same frames of reference that astronauts do to facilitate collaborative problem solving with a person.
Index Terms: Cognitive modeling, human–robot interaction, perspective-taking.
Usability measurement and metrics: a consolidated model
- Software Quality Journal
, 2006
"... Abstract Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUIs-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reasons why ..."
Abstract
-
Cited by 55 (0 self)
- Add to MetaCart
(Show Context)
Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUI-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reason why encompassing interactive systems (computers plus people) fail in actual use. Designing this diversity of applications so that they actually achieve their intended purposes in terms of ease of use is not an easy task. Although there are many individual methods for evaluating usability, they are not well integrated into a single conceptual framework that facilitates their use by developers who are not trained in the field of HCI. This is true in part because there are now several different standards (e.g., ISO 9241, ISO/IEC 9126, IEEE Std. 610.12) and conceptual models (e.g., Metrics for Usability Standards in Computing [MUSiC]) for usability, and not all of these standards or models describe the same operational definitions and measures. This paper first reviews existing usability standards and models, highlighting the limitations and complementarities of the various standards. It then explains how these various models can be unified into a single consolidated, hierarchical model of usability measurement. This consolidated model is called ...
User-centred design
- In Bainbridge, W. (Ed.), Encyclopaedia of Human-Computer Interaction. Thousand Oaks: Sage, 2004
"... The design of everyday objects is not always intuitive and at times it leaves the user frustrated and unable to complete a simple task. How many of us have bought a VCR that we have struggled to used and missed recording our favorite programs because we misunderstood the instructions or had to put u ..."
Abstract
-
Cited by 54 (0 self)
- Add to MetaCart
(Show Context)
The design of everyday objects is not always intuitive, and at times it leaves the user frustrated and unable to complete a simple task. How many of us have bought a VCR that we have struggled to use and missed recording our favorite programs because we misunderstood the instructions, or had to put up with the clock blinking 12:00 because we didn't know how to ...
Evaluating the usability of a mobile guide: The influence of location, participants and resources
- Behaviour and Information Technology, 2005
"... When designing a usability evaluation, choices must be made regarding methods and techniques for data collection and analysis. Mobile guides raise new concerns and challenges to established usability evaluation approaches. Not only are they typically closely related to objects and activities in the ..."
Abstract
-
Cited by 33 (4 self)
- Add to MetaCart
When designing a usability evaluation, choices must be made regarding methods and techniques for data collection and analysis. Mobile guides raise new concerns and challenges to established usability evaluation approaches. Not only are they typically closely related to objects and activities in the user's immediate surroundings; they are also often used while the user is ambulating. This paper presents results from an extensive, multi-method evaluation of a mobile guide designed to support the use of public transport in Melbourne, Australia. In evaluating the guide, we applied four different techniques: field evaluation, laboratory evaluation, heuristic walkthrough, and rapid reflection. This paper describes these four approaches and their respective outcomes, and discusses their relative strengths and weaknesses for evaluating the usability of mobile guides.
The methodology of participatory design
- Technical Communication, 2005
"... Provides the historical and methodological grounding for understanding participatory design as a methodology Describes its research designs, methods, criteria, and limitations Provides guidance for applying it to technical communication research ..."
Abstract
-
Cited by 33 (0 self)
- Add to MetaCart
(Show Context)
Provides the historical and methodological grounding for understanding participatory design as a methodology; describes its research designs, methods, criteria, and limitations; and provides guidance for applying it to technical communication research.
Replacing Usability Testing with User Dialogue
1999
"... this article we outline four examples to show how we have turned the conventional usabilitytesting format into a dialogue between users and designers. ..."
Abstract
-
Cited by 25 (5 self)
- Add to MetaCart
In this article, we outline four examples to show how we have turned the conventional usability-testing format into a dialogue between users and designers.