Results 1 - 10 of 30
When do we interact multimodally? Cognitive load and multimodal communication patterns
- In Proc. of International Conference on Multimodal Interfaces, 2004
Cited by 59 (4 self)
Mobile usage patterns often entail high and fluctuating levels of difficulty as well as dual tasking. One major theme explored in this research is whether a flexible multimodal interface supports users in managing cognitive load. Findings from this study reveal that multimodal interface users spontaneously respond to dynamic changes in their own cognitive load by shifting to multimodal communication as load increases with task difficulty and communicative complexity. Given a flexible multimodal interface, users' ratio of multimodal (versus unimodal) interaction increased substantially from 18.6% when referring to established dialogue context to 77.1% when required to establish a new context, a +315% relative increase. Likewise, the ratio of users' multimodal interaction increased significantly as the tasks became more difficult, from 59.2% during low-difficulty tasks, to 65.5%
Multimodal Interfaces: A Survey of Principles, Models and Frameworks
Cited by 20 (4 self)
The grand challenge of multimodal interface creation is to build reliable processing systems able to analyze and understand multiple communication means in real time. This opens a number of associated issues covered by this chapter, such as heterogeneous data type fusion, architectures for real-time processing, dialog management, machine learning for multimodal interaction, modeling languages, frameworks, etc. This chapter does not intend to cover exhaustively all the issues related to multimodal interface creation, and some hot topics, such as error handling, have been left aside. The chapter starts with the features and advantages associated with multimodal interaction, with a focus on particular findings and guidelines, as well as the cognitive foundations underlying multimodal interaction. The chapter then focuses on the driving theoretical principles, time-sensitive software architectures, and multimodal fusion and fission issues. Modeling of multimodal interaction as well as tools allowing rapid creation of multimodal interfaces are then presented. The chapter concludes with an outline of the current state of multimodal interaction research in Switzerland, and also summarizes the major future challenges in the field.
Individual Differences in Multimodal Integration Patterns: What Are They And Why Do They Exist?
- Proc. CHI'05, 2005
Cited by 10 (0 self)
Techniques for information fusion are at the heart of multimodal system design. To develop new user-adaptive approaches for multimodal fusion, the present research investigated the stability and underlying cause of major individual differences that have been documented between users in their multimodal integration pattern. Longitudinal data were collected from 25 adults as they interacted with a map system over six weeks. Analyses of 1,100 multimodal constructions revealed that everyone had a dominant integration pattern, either simultaneous or sequential, which was 95-96% consistent and remained stable over time. In addition, coherent behavioral and linguistic differences were identified between these two groups. Whereas performance speed was comparable, sequential
Optimization in Multimodal Interpretation
- In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), 2004
Cited by 8 (4 self)
In a multimodal conversation, the way users communicate with a system depends on the available interaction channels and the situated context (e.g., conversation focus, visual feedback). These dependencies form a rich set of constraints from various perspectives such as temporal alignments between different modalities, coherence of conversation, and the domain semantics. There is strong evidence that competition and ranking of these constraints are important to achieve an optimal interpretation. Thus, we have developed an optimization approach for multimodal interpretation, particularly for interpreting multimodal references. A preliminary evaluation indicates the effectiveness of this approach, especially for complex user inputs that involve multiple referring expressions in a speech utterance and multiple gestures.
Smart Environments for Collaborative Design, Implementation, and Interpretation of Scientific Experiments
Cited by 6 (3 self)
Ambient intelligence promises to enable humans to smoothly interact with their environment, mediated by computer technology. In the literature on ambient intelligence, empirical scientists are not often mentioned. Yet they form an interesting target group for this technology. In this position paper, we describe a project aimed at realising an ambient intelligence environment for face-to-face meetings of researchers with different academic backgrounds involved in molecular biology "omics" experiments. In particular, microarray experiments are a focus of attention because these experiments require multidisciplinary collaboration for their design, analysis, and interpretation. Such an environment is characterised by a high degree of complexity that has to be mitigated by ambient intelligence technology. By experimenting in a real-life setting, we will learn more about life scientists as a user group.
Multi-channel and multi-modal interactions in Emarketing: Toward a generic architecture for integration and experimentation
- HCI International 2005, Las Vegas, Lawrence Erlbaum Associates, 2005
Cited by 6 (5 self)
Multi-modality is a domain that has been studied for several years in the HCI area. In this paper, we approach this domain from a new point of view, that of e-Marketing, and thus within an industrial framework. We investigate real issues of e-Marketing with well-known techniques from HCI. The multi-modal notion can be compared to the multi-channel one used in e-Marketing. That is why part of our work is dedicated to defining and describing what lies behind the terms multi-modal and multi-channel in the e-Marketing area. Our collaboration with the Cité Numérique, a subsidiary of the 3 Suisses International Company, a large Direct Marketing group, leads us to consider real issues. This collaboration also led us to study concrete scenarios in real situations, at large scale.
A Framework for Evaluating Multimodal Integration by Humans and a Role for Embodied Conversational Agents
- In ICMI '04: Proceedings of the 6th International Conference on Multimodal Interfaces, 2004
Cited by 5 (0 self)
One of the implicit assumptions of multi-modal interfaces is that human-computer interaction is significantly facilitated by providing multiple input and output modalities. Surprisingly, however, there is very little theoretical and empirical research testing this assumption in terms of the presentation of multimodal displays to the user. The goal of this paper is to provide both a theoretical and empirical framework for addressing this important issue. Two contrasting models of human information processing are formulated and contrasted in experimental tests. According to integration models, multiple sensory influences are continuously combined during categorization, leading to perceptual experience and action. The Fuzzy Logical Model of Perception (FLMP) assumes that processing occurs in three successive but overlapping stages: evaluation, integration, and decision (Massaro, 1998). According to nonintegration models, any perceptual experience and action results from only a single sensory influence. These models are tested in expanded factorial designs in which two input modalities are varied independently of one another in a factorial design and each modality is also presented alone. Results from a variety of experiments on speech, emotion, and gesture support the predictions of the FLMP. Baldi, an embodied conversational agent, is described, and implications for applications of multimodal interfaces are discussed.
Children's Gesture and Speech in Conversation with 3D Characters
- Proceedings of HCI International 2005, 2005
Cited by 5 (2 self)
This paper deals with the multimodal interaction between young users (children and teenagers) and a 3D Embodied Conversational Agent representing the author HC Andersen. We present the results of user tests we conducted on the first prototype of this conversational system and discuss their implications for the design of the second prototype and for similar systems.
Benchmarking Fusion Engines of Multimodal Interactive Systems
Cited by 5 (2 self)
This article proposes an evaluation framework to benchmark the performance of multimodal fusion engines. The paper first introduces different concepts and techniques associated with multimodal fusion engines and further surveys recent implementations. It then discusses the importance of evaluation as a means to assess fusion engines, not only from the user perspective, but also at a performance level. The article further proposes a benchmark and a formalism to build testbeds for assessing multimodal fusion engines. In its last section, our current fusion engine and the associated system HephaisTK are evaluated using the evaluation framework proposed in this article. The article concludes with a discussion on the proposed quantitative evaluation, offers suggestions for building useful testbeds, and proposes some future improvements.
The Functions and Corresponding Processes Involved with Field-Level Comptrollership
- Master's Thesis, Naval Postgraduate School, 1977
Cited by 4 (0 self)
Comparisons of different measurements for monitoring diabetic cats treated with porcine insulin zinc suspension