Interaction Techniques for Ambiguity Resolution in Recognition-Based Interfaces
Proc. UIST 2000
"... Because of its promise of natural interaction, recognition is coming into its own as a mainstream technology for use with computers. Both commercial and research applications are beginning to use it extensively. However the errors made by recognizers can be quite costly, and this is increasingly bec ..."
Abstract
-
Cited by 64 (5 self)
- Add to MetaCart
Because of its promise of natural interaction, recognition is coming into its own as a mainstream technology for use with computers. Both commercial and research applications are beginning to use it extensively. However, the errors made by recognizers can be quite costly, and this is increasingly becoming a focus for researchers. We present a survey of existing error correction techniques in the user interface. These mediation techniques most commonly fall into one of two strategies: repetition and choice. Based on the needs uncovered by this survey, we have developed OOPS, a toolkit that supports resolution of input ambiguity through mediation. This paper describes four new interaction techniques built using OOPS, and the toolkit mechanisms required to build them. These interaction techniques each address problems not directly handled by standard approaches to mediation, and can all be re-used in a variety of settings.
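The two mediation strategies named here, repetition and choice, can be pictured as interchangeable components behind one interface. The Java sketch below is a hypothetical illustration of that idea; its names (Candidate, Mediator, ChoiceMediator, RepetitionMediator) are invented for the example and are not the OOPS API.

```java
import java.util.List;
import java.util.Scanner;

// One possible interpretation of an ambiguous input, e.g. a handwriting guess.
record Candidate(String text, double confidence) {}

// A mediator resolves an ambiguous input to a single interpretation.
interface Mediator {
    Candidate resolve(List<Candidate> candidates);
}

// Choice strategy: show the n-best list and let the user pick one.
class ChoiceMediator implements Mediator {
    public Candidate resolve(List<Candidate> candidates) {
        Scanner in = new Scanner(System.in);
        for (int i = 0; i < candidates.size(); i++)
            System.out.printf("%d: %s (%.2f)%n", i,
                    candidates.get(i).text(), candidates.get(i).confidence());
        System.out.print("Pick one: ");
        return candidates.get(in.nextInt());
    }
}

// Repetition strategy: discard the guesses and ask the user to re-enter the input.
class RepetitionMediator implements Mediator {
    public Candidate resolve(List<Candidate> candidates) {
        Scanner in = new Scanner(System.in);
        System.out.print("Not understood, please retype: ");
        return new Candidate(in.nextLine(), 1.0);
    }
}
```

An application structured this way can swap mediators per input context without touching recognition code, which is the kind of reuse the abstract claims.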
On the effective use and reuse of HCI knowledge
ACM Transactions on Computer-Human Interaction, 2000
"... The paper argues that new approaches for delivering HCI knowledge from theory to designers will be necessary in the new millennium. First the progress made developing cognitive theories of interaction and their impact on the design process is reviewed. Direct application of current cognitive theorie ..."
Abstract
-
Cited by 62 (3 self)
- Add to MetaCart
(Show Context)
The paper argues that new approaches for delivering HCI knowledge from theory to designers will be necessary in the new millennium. First, the progress made developing cognitive theories of interaction and their impact on the design process is reviewed. Direct application of current cognitive theories to design has been limited by scalability problems. This has led to bridging models that attempt to deliver insights from theory to design models in a more tractable manner. However, these too have met with limited success. An alternative is to represent HCI knowledge as claims and adopt the task-artefact approach to design, in which theories are embedded in well-designed artefacts and explained to designers as psychologically motivated design rationale. Claims are proposed as a possible bridging representation that may enable theories to frame appropriate recommendations for designers and, vice versa, enable designers to ask appropriate questions for theoretical research. However, claims provide design advice grounded in specific scenarios and examples, which limits their generality. Hence claims and their associated artefacts need to be generalised so that they can be reused.
Providing integrated toolkit-level support for ambiguity in recognition-based interfaces
2000
"... Recognition technologies are being used extensively in both the commercial and research worlds. But recognizers are still error-prone, and this results in performance problems and brittle dialogues. These problems are a barrier to acceptance and usefulness of recognition systems. Better interfaces t ..."
Abstract
-
Cited by 55 (11 self)
- Add to MetaCart
(Show Context)
Recognition technologies are being used extensively in both the commercial and research worlds. But recognizers are still error-prone, and this results in performance problems and brittle dialogues, which are a barrier to the acceptance and usefulness of recognition systems. Better interfaces to recognition systems, which can help to reduce the burden of recognition errors, are difficult to build because of a lack of knowledge about the ambiguity inherent in recognition. We have extended a user interface toolkit in order to model and provide structured support for ambiguity at the input event level [7]. This makes it possible to build re-usable interface components for resolving ambiguity and dealing with recognition errors. These interfaces can help to reduce the negative effects of recognition errors. By providing these components at the toolkit level, we make it easier for application writers to provide good support for error handling, and we can explore new types of interfaces for resolving a more varied range of ambiguity.
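What "modeling ambiguity at the input event level" might look like can be sketched concretely: an event keeps every candidate interpretation alive until a mediator accepts one, and only then is it dispatched. The structure below is an assumption-laden toy in that spirit, not the toolkit's actual event model.

```java
import java.util.ArrayList;
import java.util.List;

// An input event that is still ambiguous: it holds every interpretation
// the recognizer produced, each itself an event that may be interpreted further.
class AmbiguousEvent {
    final String sensedData;                          // e.g. raw stroke points
    final List<AmbiguousEvent> interpretations = new ArrayList<>();
    boolean accepted = false;

    AmbiguousEvent(String sensedData) { this.sensedData = sensedData; }

    // Recognizers attach their guesses as child events instead of
    // collapsing them to a single result.
    void addInterpretation(AmbiguousEvent e) { interpretations.add(e); }

    boolean isAmbiguous() { return interpretations.size() > 1 && !accepted; }

    // Called by a mediator once the ambiguity is resolved; only the
    // accepted branch is delivered to the application.
    void accept(AmbiguousEvent chosen) {
        accepted = true;
        interpretations.removeIf(e -> e != chosen);
    }
}
```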
Something from nothing: Augmenting a paper-based work practice via multimodal interaction
In Proceedings of ACM Designing Augmented Reality Environments, 2000
"... In this paper, we describe Rasa: an environment designed to augment, rather than replace, the work habits of its users. These work habits include drawing on Post-it notes using a symbolic language. Rasa observes and understands this language, assigning meaning simultaneously to objects in both the p ..."
Abstract
-
Cited by 41 (7 self)
- Add to MetaCart
In this paper, we describe Rasa: an environment designed to augment, rather than replace, the work habits of its users. These work habits include drawing on Post-it notes using a symbolic language. Rasa observes and understands this language, assigning meaning simultaneously to objects in both the physical and virtual worlds. With Rasa, users roll out a paper map, register it, and move the augmented objects from one place to another on it. Once an object is augmented, users can modify the meaning represented by it, ask questions about that representation, view it in virtual reality, or give directions to it, all with speech and gestures. We examine the way Rasa uses language to augment objects, and compare it with prior methods, arguing that language is a more visible, flexible, and comprehensible method for creating augmentations than other approaches.
Keywords: Phicons, ubiquitous computing, augmented reality, mixed reality, multimodal interfaces, tangible interfaces, invisible inter...
Creating tangible interfaces by augmenting physical objects with multimodal language
Proc. ACM Conf. on Intelligent User Interfaces
"... Rasa is a tangible augmented reality environment that digitally enhances the existing paper-based command and control capability in a military command post. By observing and understanding the users’ speech, pen, and touch-based multimodal language, Rasa computationally augments the physical objects ..."
Abstract
-
Cited by 38 (2 self)
- Add to MetaCart
(Show Context)
Rasa is a tangible augmented reality environment that digitally enhances the existing paper-based command and control capability in a military command post. By observing and understanding the users’ speech, pen, and touch-based multimodal language, Rasa computationally augments the physical objects on a command post map, linking these items to digital representations of the same—for example, linking a paper map to the world and Post-it™ notes to military units. Herein, we give a thorough account of Rasa’s underlying multiagent framework and its recognition, understanding, and multimodal integration components. Moreover, we examine five properties of language—generativity, comprehensibility, compositionality, referentiality, and, at times, persistence—that render it suitable as an augmentation approach, contrasting these properties to those of other augmentation methods. It is these properties of language that allow users of Rasa to augment physical objects, transforming them into tangible interfaces.
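As a concrete, if simplified, picture of the multimodal integration mentioned here: the sketch below grounds a deictic spoken command in the most recent pen gesture inside a time window. It is a toy late-fusion scheme with invented names (Fusion, Gesture), not Rasa's multiagent machinery.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A deictic pen or touch event: where the user pointed, and when.
record Gesture(double x, double y, long timeMs) {}

// Naive late fusion: a spoken command like "place armor unit here"
// is grounded by the most recent gesture within a time window.
class Fusion {
    private static final long WINDOW_MS = 4000;
    private final Deque<Gesture> recent = new ArrayDeque<>();

    void onGesture(Gesture g) { recent.push(g); }       // newest at the front

    String onSpeech(String utterance, long timeMs) {
        if (utterance.contains("here")) {
            for (Gesture g : recent) {                  // iterate newest first
                if (timeMs - g.timeMs() <= WINDOW_MS)
                    return utterance + " -> map(" + g.x() + ", " + g.y() + ")";
            }
            return "unresolved: no gesture in window";
        }
        return utterance;                               // speech-only command
    }
}
```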
ICARE software components for rapidly developing multimodal interfaces
2004
"... Although several real multimodal systems have been built, their development still remains a difficult task. In this paper we address this problem of development of multimodal interfaces by describing a component-based approach, called ICARE, for rapidly developing multimodal interfaces. ICARE stands ..."
Abstract
-
Cited by 36 (15 self)
- Add to MetaCart
(Show Context)
Although several real multimodal systems have been built, their development still remains a difficult task. In this paper we address this problem by describing a component-based approach, called ICARE, for rapidly developing multimodal interfaces. ICARE stands for Interaction-CARE (Complementarity Assignment Redundancy Equivalence). Our component-based approach relies on two types of software components. First, ICARE elementary components include Device components and Interaction Language components that enable us to develop pure modalities. The second type, Composition components, defines combined usages of modalities. Reusing and assembling ICARE components enables rapid development of multimodal interfaces. We have developed several multimodal systems using ICARE and we illustrate the discussion using one of them: FACET, a simulator of the cockpit of the French Rafale military aircraft.
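The two component types lend themselves to a small type-level sketch. The Java fragment below renders an elementary modality and a Complementarity composition in the CARE vocabulary; the interfaces are assumptions made for illustration, not ICARE's actual components.

```java
// An elementary ICARE-style component: a pure modality producing tokens.
interface Modality {
    String nextToken();   // e.g. a recognized word or a pointed location
}

// Composition component: Complementarity fuses two modalities whose
// partial inputs must be combined into one command ("put that there").
class Complementarity implements Modality {
    private final Modality a, b;
    Complementarity(Modality a, Modality b) { this.a = a; this.b = b; }
    public String nextToken() {
        return a.nextToken() + " + " + b.nextToken();   // merged meaning
    }
}
```

Because a composition is itself a Modality, compositions nest, which is what makes assembling components into a full multimodal interface plausible.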
Clover Architecture for Groupware
In Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work, 2002
"... In this paper we present the Clover architectural model, a new conceptual architectural model for groupware. Our model results from the combination of the layer approach of Dewan's generic architecture with the functional decomposition of the Clover design model. The Clover design model defines ..."
Abstract
-
Cited by 34 (0 self)
- Add to MetaCart
(Show Context)
In this paper we present the Clover architectural model, a new conceptual architectural model for groupware. Our model results from combining the layered approach of Dewan's generic architecture with the functional decomposition of the Clover design model. The Clover design model defines three classes of services that a groupware application may support, namely production, communication and coordination services. The three classes of services can be found in each functional layer of our model. Our model is illustrated with a working system, CoVitesse, whose software is organized according to our Clover architectural model.
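The model's central claim is that every functional layer carries the same three classes of services, which can be sketched as a type. The names below are drawn from the design model's vocabulary but are otherwise hypothetical; CoVitesse's real code is not shown here.

```java
// The three service classes of the Clover design model.
interface Production    { void edit(String artifact); }            // shared artifacts
interface Communication { void send(String user, String msg); }    // user-to-user exchange
interface Coordination  { void assign(String user, String task); } // activity control

// Every functional layer carries all three clover leaves; layers stack
// as in Dewan's generic architecture, from shared base to per-user top.
record CloverLayer(Production production,
                   Communication communication,
                   Coordination coordination,
                   CloverLayer next) {}   // next == null at the base layer
```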
Systems, Interactions, and Macrotheory
"... A significant proportion of early HCI research was guided by one very clear vision: that the existing theory base in psychology and cognitive science could be developed to yield engineering tools for use in the interdisciplinary context of HCI design. While interface technologies and heuristic metho ..."
Abstract
-
Cited by 28 (7 self)
- Add to MetaCart
A significant proportion of early HCI research was guided by one very clear vision: that the existing theory base in psychology and cognitive science could be developed to yield engineering tools for use in the interdisciplinary context of HCI design. While interface technologies and heuristic methods for behavioral evaluation have rapidly advanced in both capability and breadth of application, progress toward deeper theory has been modest, and some now believe it to be unnecessary. A case is presented for developing new forms of theory, based around generic “systems of interactors.” An overlapping, layered structure of macro- and microtheories could then serve an explanatory role, and could also bind together contributions from the different disciplines. Novel routes to formalizing and applying such theories provide a host of interesting and tractable problems for future basic research in HCI.
A scalable formal method for design and automatic checking of user interfaces
ACM Transactions on Software Engineering and Methodology, 2005
"... ABSTRACT: The paper addresses the formal specification, design and implementation of the behavioral component of graphical user interfaces. The complex sequences of visual events and actions that constitute dialogs are speci-fied by means of modular, communicating grammars called VEG (Visual Event G ..."
Abstract
-
Cited by 27 (0 self)
- Add to MetaCart
The paper addresses the formal specification, design and implementation of the behavioral component of graphical user interfaces. The complex sequences of visual events and actions that constitute dialogs are specified by means of modular, communicating grammars called VEG (Visual Event Grammars), which extend traditional BNF grammars to make them more convenient for modeling dialogs. A VEG specification is independent of the actual layout of the GUI, but it can easily be integrated with various layout design toolkits. Moreover, a VEG specification may be verified with the model checker SPIN, in order to test consistency and correctness, to detect deadlocks and unreachable states, and also to generate test cases for validation purposes. Efficient code is automatically generated by the VEG toolkit, based on compiler technology. Realistic applications have been specified, verified and implemented, such as a Notepad-style editor, a graph-construction library, and a large real-world medical application. It is also argued that VEG can be used to specify and test voice interfaces and multimodal dialogs. The major contribution of our work is blending together a set of features coming from GUI design, compilers, software engineering and formal verification. Even though we do not claim novelty in each of the techniques adopted for VEG, they have been united into a toolkit supporting all GUI design phases, i.e., specification, design, verification and validation, linking to applications, and coding.
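Since a VEG specification is essentially a grammar over visual events, a miniature example helps fix ideas. The Java automaton below encodes the kind of sequencing constraint such a grammar would state for a two-field login dialog; VEG's own notation is BNF-like and checked with SPIN, so this is only an analogue, not VEG syntax.

```java
// States of a miniature login dialog. A VEG-style grammar would state the
// same constraint declaratively: OK is accepted only after both fields.
enum DialogState { EMPTY, NAME_ONLY, PASSWORD_ONLY, READY, DONE }

class LoginDialog {
    DialogState state = DialogState.EMPTY;

    void typedName() {
        if (state == DialogState.DONE) return;            // dialog already closed
        state = (state == DialogState.PASSWORD_ONLY || state == DialogState.READY)
                ? DialogState.READY : DialogState.NAME_ONLY;
    }

    void typedPassword() {
        if (state == DialogState.DONE) return;
        state = (state == DialogState.NAME_ONLY || state == DialogState.READY)
                ? DialogState.READY : DialogState.PASSWORD_ONLY;
    }

    void pressedOk() {
        if (state != DialogState.READY)                   // a model checker would
            throw new IllegalStateException("OK before both fields"); // prove this unreachable
        state = DialogState.DONE;
    }
}
```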
ICARE: A Component-Based Approach for the Design and Development of Multimodal Interfaces
In Extended Abstracts of CHI ’04, 2004
"... Multimodal interactive systems support multiple interaction techniques such as the synergistic use of speech, gesture and eye gaze tracking. The flexibility they offer results in an increased complexity that current software development tools do not address appropriately. In this paper we describe a ..."
Abstract
-
Cited by 24 (11 self)
- Add to MetaCart
(Show Context)
Multimodal interactive systems support multiple interaction techniques such as the synergistic use of speech, gesture and eye gaze tracking. The flexibility they offer results in an increased complexity that current software development tools do not address appropriately. In this paper we describe a component-based approach, called ICARE, for specifying and developing multimodal interfaces. Our approach relies on two types of components: (i) elementary components that describe pure modalities and (ii) composition components (Complementarity, Redundancy and Equivalence) that enable the designer to specify combined usage of modalities. The designer graphically assembles the ICARE components, and the code of the multimodal user interface is generated automatically. Although the ICARE platform is not fully developed, we illustrate the applicability of the approach with the implementation of two multimodal systems: MEMO, a GeoNote system, and MID, a multimodal identification interface.
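To complement the Complementarity sketch under the companion ICARE paper above, the fragment below illustrates the Equivalence composition: two modalities are interchangeable ways to issue the same command, and whichever delivers input first wins. The Source and Equivalence names are invented for the example and are not the generated ICARE code.

```java
import java.util.Optional;

// A modality that may or may not have produced input yet.
interface Source { Optional<String> poll(); }

// Equivalence composition: speech and keyboard are interchangeable ways
// to issue the same command; take whichever has produced input first.
class Equivalence implements Source {
    private final Source speech, keyboard;
    Equivalence(Source speech, Source keyboard) {
        this.speech = speech; this.keyboard = keyboard;
    }
    public Optional<String> poll() {
        return speech.poll().or(keyboard::poll);   // fall back to the other modality
    }
}
```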