Results 1 - 10 of 150
Fluid interaction with high-resolution wall-size displays
- UIST 2001, ACM Press
"... This paper describes new interaction techniques for direct pen-based interaction on the Interactive Mural, a large (6’x3.5’) high resolution (64 dpi) display. They have been tested in a digital brainstorming tool that has been used by groups of professional product designers. Our “interactive wall ” ..."
Abstract
-
Cited by 169 (13 self)
- Add to MetaCart
(Show Context)
This paper describes new interaction techniques for direct pen-based interaction on the Interactive Mural, a large (6′ × 3.5′) high-resolution (64 dpi) display. They have been tested in a digital brainstorming tool that has been used by groups of professional product designers. Our “interactive wall” metaphor for interaction has been guided by several goals: to support both free-hand sketching and high-resolution materials, such as images, 3D models and GUI application windows; to present a visual appearance that does not clutter the content with control devices; and to support fluid interaction, which minimizes the amount of attention demanded and interruption due to the mechanics of the interface. We have adapted and extended techniques that were developed for electronic whiteboards and generalized the use of the FlowMenu to execute a wide variety of actions in a single pen stroke. While these techniques were designed for a brainstorming tool, they are very general and can be used in a wide variety of application domains using interactive surfaces.
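To make the single-stroke FlowMenu idea concrete, here is a minimal Python sketch of octant-based selection, assuming a hypothetical stroke-tracking loop; the real FlowMenu's layout and semantics are richer than this.

    import math

    # Hypothetical sketch of FlowMenu-style selection: once the pen leaves a
    # central dead zone, each 45-degree octant it enters refines the command,
    # all within a single continuous stroke.

    def octant(cx, cy, x, y):
        """Index 0-7 of the 45-degree sector containing (x, y) around (cx, cy)."""
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        return int(angle // (math.pi / 4))

    def visited_octants(center, stroke, dead_zone=20.0):
        """Return the sequence of distinct octants a stroke passes through."""
        cx, cy = center
        visited = []
        for x, y in stroke:
            if math.hypot(x - cx, y - cy) < dead_zone:
                continue                    # pen still resting in the center
            o = octant(cx, cy, x, y)
            if not visited or visited[-1] != o:
                visited.append(o)
        return visited                      # e.g. [2, 5]: category 2, item 5

A dispatcher would then map the resulting octant sequence to a command, so selection and invocation happen in one fluid gesture.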
Multimodal human-computer interaction: A survey
, 2005
"... In this paper we review the major approaches to Multimodal Human Computer Interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user ..."
Abstract
-
Cited by 119 (3 self)
- Add to MetaCart
(Show Context)
In this paper we review the major approaches to Multimodal Human Computer Interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling, and multimodal fusion, highlighting challenges, open issues, and emerging applications for Multimodal Human Computer Interaction (MMHCI) research.
Multimodal Interfaces That Process What Comes Naturally
- Communications of the ACM
, 2000
"... this article, we summarize the nature of new multimodal systems and how they work, with a focus on multimodal speech and pen-based systems. The primary reasons for building multimodal systems are outlined, including expansion of the accessibility of computing for diverse users, support for new forms ..."
Abstract
-
Cited by 93 (2 self)
- Add to MetaCart
(Show Context)
In this article, we summarize the nature of new multimodal systems and how they work, with a focus on multimodal speech and pen-based systems. The primary reasons for building multimodal systems are outlined, including expansion of the accessibility of computing for diverse users, support for new forms of computing not available in the past, enhancement of performance stability and robustness, and improved expressive power.
When do we interact multimodally? Cognitive load and multimodal communication patterns
- In Proc. of International Conference on Multimodal Interfaces
, 2004
"... Mobile usage patterns often entail high and fluctuating levels of difficulty as well as dual tasking. One major theme explored in this research is whether a flexible multimodal interface supports users in managing cognitive load. Findings from this study reveal that multimodal interface users sponta ..."
Abstract
-
Cited by 59 (4 self)
- Add to MetaCart
(Show Context)
Mobile usage patterns often entail high and fluctuating levels of difficulty as well as dual tasking. One major theme explored in this research is whether a flexible multimodal interface supports users in managing cognitive load. Findings from this study reveal that multimodal interface users spontaneously respond to dynamic changes in their own cognitive load by shifting to multimodal communication as load increases with task difficulty and communicative complexity. Given a flexible multimodal interface, users' ratio of multimodal (versus unimodal) interaction increased substantially, from 18.6% when referring to established dialogue context to 77.1% when required to establish a new context, a +315% relative increase. Likewise, the ratio of users' multimodal interaction increased significantly as the tasks became more difficult, from 59.2% during low-difficulty tasks to 65.5% …
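As a quick arithmetic check of the reported figures, the jump from 18.6% to 77.1% multimodal interaction is indeed roughly a +315% relative increase:

    \frac{77.1\% - 18.6\%}{18.6\%} = \frac{58.5}{18.6} \approx 3.15
    \quad\Longrightarrow\quad \text{about a } +315\% \text{ relative increase}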
Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality
- In Int. Conf. on Multimodal Interfaces
, 2003
"... ABSTRACT We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken langua ..."
Abstract
-
Cited by 42 (8 self)
- Add to MetaCart
(Show Context)
We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents. The referential agents employ visible or invisible volumes that can be attached to 3D trackers in the environment, and which use a time-stamped history of the objects that intersect them to derive statistics for ranking potential referents. We discuss the means by which the system supports mutual disambiguation of these modalities and information sources, and show through a user study how mutual disambiguation accounts for over 45% of the successful 3D multimodal interpretations. An accompanying video demonstrates the system in action.
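The paper's referential agents rank candidate referents from a time-stamped intersection history. A minimal Python sketch of that idea follows, with an invented recency-weighted score (the system's actual statistics are not specified here):

    import time
    from collections import defaultdict

    class ReferentialAgent:
        """Hypothetical sketch: a sensing volume logs which objects intersect
        it and ranks potential referents by recency-weighted presence."""

        def __init__(self, window=2.0):
            self.window = window        # seconds of history to consider
            self.history = []           # list of (timestamp, object_id)

        def observe(self, object_id, timestamp=None):
            """Record that object_id is inside the volume right now."""
            self.history.append((timestamp or time.time(), object_id))

        def rank_referents(self, now=None):
            """Score each object; newer intersection samples count more."""
            now = now or time.time()
            scores = defaultdict(float)
            for t, obj in self.history:
                age = now - t
                if 0 <= age <= self.window:
                    scores[obj] += 1.0 - age / self.window
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

A fusion component could then combine these rankings with scores from the speech and gesture agents, which is where the mutual disambiguation described above would take place.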
Making Agents Acceptable To People
"... Because ever more powerful intelligent agents will interact with people in increasingly sophisticated and important ways, greater attention must be given to the technical and social aspects of how to make agents acceptable to people [87]. The technical challenge is to devise a computational struct ..."
Abstract
-
Cited by 39 (25 self)
- Add to MetaCart
Because ever more powerful intelligent agents will interact with people in increasingly sophisticated and important ways, greater attention must be given to the technical and social aspects of how to make agents acceptable to people [87]. The technical challenge is to devise a computational structure that guarantees that from …
ICARE software components for rapidly developing multimodal interfaces
, 2004
"... Although several real multimodal systems have been built, their development still remains a difficult task. In this paper we address this problem of development of multimodal interfaces by describing a component-based approach, called ICARE, for rapidly developing multimodal interfaces. ICARE stands ..."
Abstract
-
Cited by 36 (15 self)
- Add to MetaCart
Although several real multimodal systems have been built, their development remains a difficult task. In this paper we address the problem of developing multimodal interfaces by describing a component-based approach, called ICARE, for rapidly developing multimodal interfaces. ICARE stands for Interaction-CARE (Complementarity, Assignment, Redundancy, Equivalence). Our component-based approach relies on two types of software components. First, ICARE elementary components include Device components and Interaction Language components that enable us to develop pure modalities. The second type of components, called Composition components, defines combined usages of modalities. Reusing and assembling ICARE components enables rapid development of multimodal interfaces. We have developed several multimodal systems using ICARE, and we illustrate the discussion using one of them: the FACET simulator of the Rafale French military plane cockpit.
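As an illustration of the two component types, here is a minimal Python sketch (hypothetical interfaces invented for this example; the actual ICARE components are a different, richer implementation):

    class Modality:
        """Elementary component: a Device plus an Interaction Language,
        producing events for one pure modality."""
        def __init__(self, name):
            self.name = name
        def emit(self, data):
            return {"modality": self.name, "data": data}

    class Redundancy:
        """Composition component: both modalities convey the same
        information; keep a single copy of it."""
        def combine(self, a, b):
            return a if a["data"] == b["data"] else None

    class Complementarity:
        """Composition component: each modality carries part of the
        command; merge the parts."""
        def combine(self, a, b):
            return {"command": a["data"], "argument": b["data"]}

    speech, gesture = Modality("speech"), Modality("gesture")
    fused = Complementarity().combine(speech.emit("delete that"),
                                      gesture.emit((120, 45)))
    # fused == {'command': 'delete that', 'argument': (120, 45)}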
Sketch Understanding in Design: Overview of Work at the MIT AI Lab
- AAAI Spring Symposium on Sketch Understanding
, 2002
"... Abstract We have been working on a variety of projects aimed at providing natural forms of interaction with computers, centered primarily around the use of sketch understanding. We argue that sketch understanding is a knowledge-based task, i.e., one that requires various degrees of understanding of ..."
Abstract
-
Cited by 34 (6 self)
- Add to MetaCart
We have been working on a variety of projects aimed at providing natural forms of interaction with computers, centered primarily around the use of sketch understanding. We argue that sketch understanding is a knowledge-based task, i.e., one that requires various degrees of understanding of the act of sketching, of the domain, and of the task being supported. In the long term we aim to use sketching as part of a design environment in which design rationale capture is a natural and, ideally, almost effortless byproduct of design.

Natural Interaction. We suggest that the problem with software is not that it needs a good user interface, but that it needs to have no user interface. Interacting with software should, ideally, feel as natural, informal, rich, and easy as working with a human assistant. As a motivating example, consider a hand-drawn sketch of a design for a circuit breaker. Our long term goal is to enable computers to do just what people do when presented with these sorts of sketches and explanations: we want to be able to draw a sketch like that and have the computer understand it. While this is clearly a tall order, it is also one crucial step toward a much more natural style of interaction with computers. The work in our group is aimed at doing this, making it possible for people involved in design and planning tasks to sketch, gesture, and talk about their ideas (rather than type, point, and click), and have the computer understand their messy freehand sketches, their casual gestures, and the fragmentary utterances that are part and parcel of such interaction. One key to this lies in appropriate use of each of the means of interaction: geometry is best sketched; behavior and rationale are best described in words and gestures. A second key lies in the claim that interaction will be effortless only if the listener is smart: effortless interaction and invisible interfaces must be knowledge-based. If it is to make sense of informal sketches, the listener has to understand something about the domain and something about how sketches are drawn. This paper provides an overview of nine current pieces of work at the MIT AI Lab in the Design Rationale Capture group on the sketch recognition part of this overall goal.

Early Processing. The focus in this part of our work is on the first step in sketch understanding: interpreting the pixels produced by the user's strokes, producing low-level geometric descriptions such as lines, ovals, rectangles, arbitrary polylines, curves, and their combinations. Conversion from pixels to geometric objects provides a more compact …
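To illustrate the pixels-to-geometry step, here is a minimal Python sketch using Ramer-Douglas-Peucker simplification, a standard way to reduce a stroke's point trail to a compact polyline (chosen for illustration; the MIT system's own early-processing algorithms are not reproduced here):

    import math

    def point_segment_distance(p, a, b):
        """Perpendicular distance from point p to the line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        norm = math.hypot(dx, dy)
        if norm == 0.0:
            return math.hypot(px - ax, py - ay)
        return abs(dx * (ay - py) - dy * (ax - px)) / norm

    def simplify(points, tolerance=2.0):
        """Drop points that deviate less than `tolerance` from the chord."""
        if len(points) < 3:
            return list(points)
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = point_segment_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax <= tolerance:
            return [points[0], points[-1]]   # the chord is a good enough fit
        left = simplify(points[:index + 1], tolerance)
        right = simplify(points[index:], tolerance)
        return left[:-1] + right             # avoid duplicating the split point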
Speech and Sketching for Multimodal Design
- In Proceedings of the 9th International Conference on Intelligent User Interfaces
, 2004
"... While sketches are commonly and effectively used in the early stages of design, some information is far more easily conveyed verbally than by sketching. In response, we have combined sketching with speech, enabling a more natural form of communication. We studied the behavior of people sketching and ..."
Abstract
-
Cited by 30 (6 self)
- Add to MetaCart
(Show Context)
While sketches are commonly and effectively used in the early stages of design, some information is far more easily conveyed verbally than by sketching. In response, we have combined sketching with speech, enabling a more natural form of communication. We studied the behavior of people sketching and speaking, and from this derived a set of rules for segmenting and aligning the signals from both modalities. Once the inputs are aligned, we use both modalities in interpretation. The result is a more natural interface to our system.
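A minimal Python sketch of one plausible alignment rule, pairing segments by temporal overlap (an invented rule for illustration; the paper derives its own rule set from the user study):

    def overlap(a, b):
        """Length of the temporal intersection of two (start, end) spans."""
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    def align(speech_segments, sketch_segments, min_overlap=0.25):
        """Pair each speech segment with the sketch segment it overlaps most,
        ignoring pairs overlapping less than min_overlap seconds."""
        pairs = []
        for s in speech_segments:
            best = max(sketch_segments,
                       key=lambda k: overlap(s["span"], k["span"]),
                       default=None)
            if best and overlap(s["span"], best["span"]) >= min_overlap:
                pairs.append((s["text"], best["stroke_id"]))
        return pairs

    # align([{"span": (0.0, 1.2), "text": "this wire"}],
    #       [{"span": (0.3, 1.0), "stroke_id": 7}])
    # -> [("this wire", 7)]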
The BEACH application model and software framework for synchronous collaboration in ubiquitous computing environments
- Journal of Systems and Software
, 2004
"... In this paper, a conceptual model for synchronous applications in ubiquitous computing environments is proposed. To test its applicability, it was used to structure the architecture of the BEACH software framework that is the basis for the software infrastructure of i-LAND (the ubiquitous computing ..."
Abstract
-
Cited by 28 (2 self)
- Add to MetaCart
In this paper, a conceptual model for synchronous applications in ubiquitous computing environments is proposed. To test its applicability, it was used to structure the architecture of the BEACH software framework, which is the basis for the software infrastructure of i-LAND (the ubiquitous computing environment at FhG-IPSI). The BEACH framework provides the functionality for synchronous cooperation and interaction with roomware components, i.e., room elements with integrated information technology. To show how the BEACH model and framework can be applied, the design of a sample application is explained. The BEACH model is also positioned against related work. Finally, we report our experiences with the current implementation.
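To illustrate the core of such a framework, here is a minimal Python sketch of a shared synchronous model, where every state change is pushed to all attached roomware components (a toy illustration; BEACH's actual architecture is layered and far more elaborate):

    class SharedModel:
        """Toy shared document state for synchronous collaboration."""
        def __init__(self):
            self.state = {}
            self.views = []                  # attached roomware components

        def attach(self, view):
            self.views.append(view)

        def update(self, key, value):
            self.state[key] = value
            for view in self.views:          # notify every component at once
                view.refresh(key, value)

    class WallDisplay:
        def refresh(self, key, value):
            print(f"wall redraws {key} = {value}")

    model = SharedModel()
    model.attach(WallDisplay())
    model.update("page", "sketch-42")        # all attached views refresh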