Results 1 - 10 of 16
How to evaluate public displays
- In Proceedings of the 2012 International Symposium on Pervasive Displays (PerDis '12). ACM, 2012
Abstract - Cited by 13 (10 self)
After years in the lab, interactive public displays are finding their way into public spaces, shop windows, and public institutions. They are equipped with a multitude of sensors as well as (multi-) touch surfaces allowing not only the audience to be sensed, but also their effectiveness to be measured. The lack of generally accepted design guidelines for public displays and the fact that there are many different objectives (e.g., increasing attention, optimizing interaction times, finding the best interaction technique) make it a challenging task to pick the most suitable evaluation method. Based on a literature survey and our own experiences, this paper provides an overview of study types, paradigms, and methods for evaluation both in the lab and in the real world. Following a discussion of design challenges, we provide a set of guidelines for researchers and practitioners alike to be applied when evaluating public displays.
Awareness Displays and Social Motivation for Coordinating Communication
"... informs doi 10.1287/isre.1080.0175 ..."
SidePoint: A Peripheral Knowledge Panel for Presentation Slide Authoring
Abstract - Cited by 3 (1 self)
Presentation authoring is an important activity, but often requires the secondary task of collecting the information and media necessary for both slides and speech. Integration of implicit search and peripheral displays into presentation authoring tools may reduce the effort to satisfy not just active needs the author is aware of, but also latent needs that she is not aware of until she encounters content of perceived value. We develop SidePoint, a peripheral panel that supports presentation authoring by showing concise knowledge items relevant to the slide content. We study SidePoint as a technology probe to examine the benefits and issues associated with peripheral knowledge panels for presentation authoring. Our results show that peripheral knowledge panels have the potential to satisfy both types of needs in ways that transform presentation authoring for the better.
Author Keywords: Presentation authoring; peripheral displays; natural language processing
Automated Language-Based Feedback for Teamwork Behaviors, 2009
Abstract - Cited by 2 (0 self)
While most collaboration technologies are concerned with supporting task accomplishment, members of work teams do not always have the skills necessary for effective teamwork. In this research I propose that providing dynamic feedback generated by automated analysis of language behavior can help team members reflect on and subsequently improve their teamwork behaviors. This prospect is developed based on research in multiple disciplines, including teamwork effectiveness and social behaviors, feedback for training and regulating behaviors, and use of language in group conversations. To support this research, I directed the design and development of GroupMeter, a web-based chat system that analyzes conversations using a dictionary-based word count technique and visualizes indicators of language. I present a set of requirements for the GroupMeter system and the iterative process in which its design evolved. Findings from experiment 1 included a set of linguistic indicators that may serve as a useful source of automated feedback, such as agreement words and self-references, and that were embedded into the GroupMeter system. Experiments
Introduction to This Special Issue on Awareness Systems Design
- Human-Computer Interaction, 2007
Attention by proxy? Issues in audience awareness for webcasts to distributed groups
- In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (CHI '08), 2008
"... grouplab.cpsc.ucalgary.ca ..."
Abstract
Effects of four types of non-obtrusive feedback on computer behaviour, task performance and comfort
Experimentation, Human Factors