Results 1 - 10 of 14
Multimodal human computer interaction: A survey, 2005
Abstract - Cited by 119 (3 self)
In this paper we review the major approaches to Multimodal Human Computer Interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling, and multimodal fusion, highlighting challenges, open issues, and emerging applications for Multimodal Human Computer Interaction (MMHCI) research.
Multimodal interfaces: Challenges and perspectives
- Journal of Ambient Intelligence and Smart Environments, 2009
Abstract - Cited by 11 (0 self)
Abstract. The development of interfaces has been a technology-driven process. However, newly developed multimodal interfaces use recognition-based technologies that must interpret human speech, gesture, gaze, movement patterns, and other behavioral cues. As a result, interface design requires a human-centered approach. In this paper we review the major approaches to multimodal Human Computer Interaction, giving an overview of user and task modeling and of multimodal fusion. We highlight the challenges, open issues, and future trends in multimodal interfaces research.
A Speech-based and Auditory Ubiquitous Office Environment
- Proc. of 10th International Conference on Speech and Computer (SPECOM 2005), 2005
Abstract - Cited by 7 (6 self)
This article introduces how speech and non-speech audio can be used in ubiquitous computing office environments. We describe the iterative development of an augmented office environment that helps people with their everyday tasks in office settings. Both architectural and interaction issues are covered in this paper. We discuss how we have addressed the problems of multimodal data fusion, concurrency and continuity, dynamic content generation and output control, and distribution and modularity, which are key elements in building ubiquitous speech-based systems.
Towards Ubiquitous Task Management
- 8th International Conference on Spoken Language Processing, Jeju Island, Korea, 2004
Abstract - Cited by 4 (4 self)
In the near future people will be surrounded by intelligent devices embedded in everyday objects, where knowledge and understanding of device attributes and capabilities will be a key enabler. This paper describes the current state of our research in designing distributed knowledge-based devices as a solution for adapting spoken dialogue systems to ambient intelligence. In this context a spoken dialogue system is a computational entity that allows universal access to ambient intelligence for anyone, anywhere, at any time, to use any device through any media. Our aim is to build knowledge-based devices that enable dynamic adaptation of the components integrated in the dialogue system architecture. An example focusing on household appliances is presented.
N.: Ubiquitous Knowledge Modeling for Dialogue Systems (to appear)
- in 8th International Conference on Enterprise Information Systems, Paphos, Cyprus, 2006
Abstract - Cited by 3 (3 self)
Abstract: The main problem that we want to address is the reconfiguration of dialogue systems to work with generic plug-and-play devices. This paper describes our research in designing knowledge-based everyday devices that can be dynamically adapted to spoken dialogue systems. We propose a model for ubiquitous knowledge representation that enables the spoken dialogue system to be aware of the devices belonging to the domain and of the tasks they provide. We consider that each device can be augmented with computational capabilities in order to support its own knowledge model. A knowledge-based broker adapts the spoken dialogue system to deal with an arbitrary set of devices. The knowledge integration process between the knowledge models of the devices and the knowledge model of the broker is described. This process was tested in the home environment domain. A Spoken Dialogue System (SDS) should be a computational entity that allows access to any device by anyone, anywhere, at any time, through any media, allowing its user to focus on the task, not on …
N.: A Framework to Integrate Ubiquitous Knowledge Modeling (to appear)
- 5th International Conference on Language Resources and Evaluation, 2006
Abstract - Cited by 2 (2 self)
This paper describes our contribution to letting end users configure mixed-initiative spoken dialogue systems to suit their personalized goals. The main problem that we want to address is the reconfiguration of spoken language dialogue systems to deal with generic plug-and-play artifacts. Such reconfiguration can be seen as a portability problem and is a critical research issue. To solve this problem we describe a hybrid approach to designing ubiquitous domain models that allows the dialogue system to recognize available tasks on the fly. Our approach considers two kinds of domain knowledge: global knowledge and local knowledge. The global knowledge, which is modeled using a top-down approach, is associated at design time with the dialogue system itself. The local knowledge, which is modeled using a bottom-up approach, is defined with each one of the artifacts. When an artifact is activated or deactivated, a bilateral process, supported by a broker, updates the domain knowledge using the artifact's local knowledge. We assume that everyday artifacts are augmented with computational capabilities and semantic descriptions supported by their own knowledge model. A case study focusing on a microwave oven is presented.
Hybrid Knowledge Modeling for Ambient Intelligence
- In Ninth European Research Consortium for Informatics and Mathematics (ERCIM) Workshop User Interfaces for All (UI4ALL), 2006
Abstract - Cited by 1 (1 self)
Abstract. This paper describes our research on enhancing everyday devices as a solution for adapting Spoken Dialogue Systems (SDS) to ambient intelligence. In this context, an SDS enables universal access to ambient intelligence for anyone, anywhere, at any time, allowing access to any device through any media or language. The main problem that we want to address is the spontaneous configuration of an SDS to deal with a set of arbitrary plug-and-play devices. This problem can be summarized as a portability requirement and is a critical research issue. We propose a hybrid approach to designing ubiquitous domain models that allows the SDS to recognize on the fly the available devices and the tasks they provide. When a device is activated or deactivated, the broker's knowledge model is updated from the device's knowledge model using a knowledge integration process. This process was tested in a home environment represented by a set of devices. Keywords: Ambient Intelligence, Spoken Dialogue System.
Utterance Planning in an Agent-based Dialogue System
- In Proceedings of the 3rd International Conference on Natural Language Generation, University of …, 2004
Abstract - Cited by 1 (0 self)
This paper describes the Response Planning and Generation components of the Athos framework implemented in the DUMAS project. DUMAS investigates adaptive multilingual interaction techniques to handle both spoken and text input and to provide coordinated linguistic responses to the user [1, 3]. The components we describe for response planning and generation are implemented as Jaspis agents [2]. AthosMail was developed in the DUMAS project. It is a multilingual speech-based application which allows a user to access his/her mailbox using a telephone. Another application under development, AthosNews, will provide a speech interface to newspapers for visually impaired users, offering both browse and search functionality.

1 Jaspis overview
Jaspis is a framework for adaptive speech applications, designed to support distributed, context-sensitive applications that adapt to the user and the environment. It is designed with multilingual applications in mind. The Jaspis …
Editorial
Abstract
Group communications

Spring is here and it's time to get NEW things under way. But before introducing new faces and images, I need to thank old hands. In particular, Greg Leplatre has been acting as the moderator of BCS-HCI News for nearly five years now, and has finally decided that it is time to step down. I am sure that every one of us has benefited directly from information gleaned from the jiscmail mailing list, so we owe a huge debt of thanks to Greg for his quiet work behind the scenes moderating our mail each week. And remember "Don't ask what the group can do for you, ask what you can do for the group" – we need new volunteers to contribute to all our activities.

The new logo is rolling out and you will have seen it on publicity for HCI 2007 and CREATE 2007. We are also learning some of the pragmatic aspects of using the logo in different settings and on different scales (yes, even logos have to adapt to context!). To remind you, the new (black & white version of the) 'full' logo is: [logo image] We call it the 'full' logo because the strapline 'A Specialist Group of the British Computer Society' is intended for a general audience, who may not recognise the acronym 'BCS'. For a simpler visual balance you may prefer our 'shortname' logo: [logo image] This works well when it is the main logo in text or on a website. But it may be problematic when it needs reducing, as the text becomes hard to read. For these settings, we have a simplified logo that omits the strapline: [logo image] For example, when we are a minor sponsor of conferences organised by others, we need to use the interaction 'blank' logo: [logo image]

As well as these logos, we have a supply of related logos that we can work with to link in our other communication assets. We even have a draft logo for future conferences. So look out for our new image as we roll it out. Its next outing is likely to be on www.bcs-hci.org.uk, which is currently getting a facelift and new content management software.