Results 1 - 10 of 25
Going Beyond the Surface: Studying Multi-Layer Interaction Above the Tabletop
- In Proc. CHI 2012, ACM Press, 2012
"... Lightweight spatially aware displays (Tangible Magic Lenses) are an effective approach for exploring complex information spaces within a tabletop environment. One way of using the 3D space above a horizontal surface is to di-vide it into discrete parallel layers stacked upon each other. Horizontal a ..."
Abstract - Cited by 15 (5 self)
Lightweight spatially aware displays (Tangible Magic Lenses) are an effective approach for exploring complex information spaces within a tabletop environment. One way of using the 3D space above a horizontal surface is to divide it into discrete parallel layers stacked upon each other. Horizontal and vertical lens movements are essential tasks for the style of multi-layer interaction associated with it. We conducted a comprehensive user study with 18 participants investigating fundamental issues such as optimal number of layers and their thickness, movement and holding accuracies, and physical boundaries of the interaction volume. Findings include a rather limited overall interaction height (44 cm), a different minimal layer thickness for vertical and horizontal search tasks (1 cm/4 cm), a reasonable maximum number of layers depending on the primary task, and a convenience zone in the middle for horizontal search. Derived from that, design guidelines are also presented.
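The layer scheme described above amounts to quantizing the tracked lens height into discrete bands. The function and constants below are a minimal illustrative sketch (not code from the paper), using the reported 44 cm interaction height and the 4 cm minimum layer thickness for horizontal search:

```python
# Hypothetical sketch: map a tracked lens height to a discrete layer index,
# using the study's reported bounds as parameters. Not the authors' code.

def height_to_layer(height_cm, max_height_cm=44.0, layer_thickness_cm=4.0):
    """Return the 0-based layer index for a lens height, or None if the
    lens is outside the usable interaction volume above the table."""
    if height_cm < 0 or height_cm > max_height_cm:
        return None  # below the surface or above the usable volume
    num_layers = int(max_height_cm // layer_thickness_cm)
    return min(int(height_cm // layer_thickness_cm), num_layers - 1)

# Example: a lens held 17.5 cm above the surface maps to layer 4 of 11.
assert height_to_layer(17.5) == 4
```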
Embodied lenses for collaborative visual queries on tabletop displays
- Information Visualization, 2012
"... We introduce embodied lenses for visual queries on tabletop surfaces using physical interaction. The lenses are simply thin sheets of paper or transparent foil decorated with fiducial markers, allowing them to be tracked by a diffuse illumination tabletop display. The physical affordance of these em ..."
Abstract - Cited by 6 (0 self)
We introduce embodied lenses for visual queries on tabletop surfaces using physical interaction. The lenses are simply thin sheets of paper or transparent foil decorated with fiducial markers, allowing them to be tracked by a diffuse illumination tabletop display. The physical affordance of these embodied lenses allows them to be overlapped, causing composition in the underlying virtual space. We perform a formative evaluation to study users' conceptual models for overlapping physical lenses. This is followed by a quantitative user study comparing performance for embodied versus purely virtual lenses. Results show that embodied lenses are as efficient as purely virtual lenses, and also support tactile and eyes-free interaction. We then present several examples of the technique, including image layers, map layers, image manipulation, and multidimensional data visualization. The technique is simple, cheap, and can be integrated into many existing tabletop displays.
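The overlap behaviour described here can be pictured as intersecting lens footprints whose filters compose in the shared region. The sketch below is purely illustrative; the Lens type, its fields, and composed_filter are hypothetical names, not the system's actual API:

```python
# Hypothetical sketch of overlap composition: each tracked lens has a
# footprint on the table and a filter; where two footprints intersect,
# the filters apply in sequence. Illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable

Filter = Callable[[dict], dict]  # maps a data record to a filtered record


@dataclass
class Lens:
    x: float      # tracked footprint on the table (lower-left corner)
    y: float
    w: float
    h: float
    apply: Filter


def overlap(a: Lens, b: Lens):
    """Return the intersection rectangle of two lens footprints, or None."""
    x0, y0 = max(a.x, b.x), max(a.y, b.y)
    x1, y1 = min(a.x + a.w, b.x + b.w), min(a.y + a.h, b.y + b.h)
    return (x0, y0, x1 - x0, y1 - y0) if x1 > x0 and y1 > y0 else None


def composed_filter(a: Lens, b: Lens) -> Filter:
    """Inside the overlap region, both lens filters apply in sequence."""
    return lambda record: b.apply(a.apply(record))
```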
Evaluation of depth perception for touch interaction with stereoscopic rendered objects
- In Proc. of ACM ITS, 2012
"... Recent developments in the domain of Human Computer Interaction have suggested to combine stereoscopic visualization with touch interaction. Although this combination has the potential to provide more intuitive and natural interaction setups for a wide range of applications, until now interaction wi ..."
Abstract - Cited by 5 (0 self)
Recent developments in the domain of Human Computer Interaction have suggested combining stereoscopic visualization with touch interaction. Although this combination has the potential to provide more intuitive and natural interaction setups for a wide range of applications, until now interaction with such systems has mainly been constrained to simple navigation, whereas manipulation of the stereoscopically displayed objects is supported only rudimentarily. In this paper we investigate the users' ability to discriminate depth or depth motion of stereoscopically rendered objects while performing touch gestures and discuss implications for object selection and manipulation. Our results show that there is a usable range of imperceptible manipulation, which, if properly applied, could support interaction with objects floating in the vicinity around the display surface without noticeable impact on a user's visual or touch performance. ACM Classification: H.5.2 [Information interfaces and presentation]:
LightBeam: Interacting with Augmented Real-World Objects in Pico Projections
- In Proc MUM’12
"... Pico projectors have lately been investigated as mobile display and interaction devices. We propose to use them as ‘light beams’: Everyday objects sojourning in a beam are turned into dedicated projection surfaces and tangible interaction devices. This way, our daily surroundings get populated with ..."
Abstract - Cited by 3 (2 self)
Pico projectors have lately been investigated as mobile display and interaction devices. We propose to use them as ‘light beams’: Everyday objects sojourning in a beam are turned into dedicated projection surfaces and tangible interaction devices. This way, our daily surroundings get populated with interactive objects, each one temporarily chartered with a dedicated sub-issue of pervasive interaction. While interaction with objects has been studied in larger, immersive projection spaces, the affordances of pico projections are fundamentally different: they have a very small, strictly limited field of projection, and they are mobile. This paper contributes the results of an exploratory field study on how people interact with everyday objects in pico projections in nomadic settings. Based upon these results, we present novel interaction techniques that leverage the limited field of projection and trade off between digitally augmented and traditional uses of everyday objects. Author Keywords: Pico projectors, handheld projectors, mobile devices, augmented
Dynamic Tangible User Interface Palettes
"... Abstract. Graphics editors often suffer from a large number of tool palettes that compete with valuable document space. To address this problem and to bring back physical affordances similar to a painter’s palette, we propose to augment a digital tabletop with spatially tracked handheld displays. Th ..."
Abstract - Cited by 3 (2 self)
Graphics editors often suffer from a large number of tool palettes that compete with valuable document space. To address this problem and to bring back physical affordances similar to a painter’s palette, we propose to augment a digital tabletop with spatially tracked handheld displays. These displays are dynamically updated depending on their spatial location. We introduce the concept of spatial Work Zones that take up distinct 3D regions above the table surface and serve as physical containers for digital content that is organized as stacks of horizontal layers. Spatial Work Zones are represented either by physical objects or on-screen on the tabletop. Associated layers can be explored fluently by entering a spatial Work Zone with a handheld display. This provides quick access and seamless changes between tools and parts of the document that are instantly functional, i.e., ready to be used by a digital pen. We discuss several use cases illustrating our techniques and setting them into context with previous systems. Early user feedback indicates that combining dynamic GUI functionality with the physicality of spatially tracked handheld displays is promising and can be generalized beyond graphics editing.
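One way to picture a spatial Work Zone is as an axis-aligned 3D region above the table that holds a stack of layers and reports which layer a tracked handheld display currently intersects. The following sketch is a hypothetical reading of that idea, not the authors' implementation; WorkZone, contains, and layer_at are invented names:

```python
# Hypothetical sketch of a spatial Work Zone: a 3D region above the table
# holding stacked layers, queried with the tracked pose of a handheld
# display. Illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class WorkZone:
    xmin: float; xmax: float
    ymin: float; ymax: float
    zmin: float; zmax: float                      # height band above the surface
    layers: list = field(default_factory=list)    # stacked content, bottom first

    def contains(self, x: float, y: float, z: float) -> bool:
        """True if a tracked display position lies inside this zone."""
        return (self.xmin <= x <= self.xmax and
                self.ymin <= y <= self.ymax and
                self.zmin <= z <= self.zmax)

    def layer_at(self, z: float):
        """Pick the stacked layer corresponding to a display height."""
        if not self.layers or not self.zmin <= z <= self.zmax:
            return None
        thickness = (self.zmax - self.zmin) / len(self.layers)
        return self.layers[min(int((z - self.zmin) // thickness),
                               len(self.layers) - 1)]
```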
Navigation Concepts for ZUIs Using Proxemic Interactions
"... Proxemics in Human-Computer Interaction (HCI) offer new prospects for the design of explicit and implicit interaction techniques to support multiple users and a concurrent navigation for zoomable user interfaces (ZUI). In this paper we describe a navigation concept for zooming and panning based on a ..."
Abstract - Cited by 2 (1 self)
Proxemics in Human-Computer Interaction (HCI) offer new prospects for the design of explicit and implicit interaction techniques to support multiple users and concurrent navigation for zoomable user interfaces (ZUIs). In this paper we describe a navigation concept for zooming and panning in a multi-display environment that uses proxemic relations as input (e.g., a user’s location and orientation in physical space) to manipulate a viewport. We hope to foster awareness of the topic and to inspire further discussion and research on proxemics in HCI. Author Keywords: Proxemics; Proxemic Interactions; Ubiquitous
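A simple way to realize the described mapping from proxemic relations to viewport manipulation is to clamp the tracked user-to-display distance and interpolate a zoom factor from it. The sketch below is an assumption for illustration; the function name and the near/far/zoom constants are not from the paper:

```python
# Hypothetical sketch of a proxemic zoom mapping: the closer a tracked user
# stands to the display, the deeper the zoom level of the ZUI viewport.
# All constants are illustrative assumptions.

def zoom_from_distance(distance_m, near_m=0.5, far_m=3.0, max_zoom=8.0):
    """Map a user's distance to the display onto a zoom factor in [1, max_zoom]."""
    d = min(max(distance_m, near_m), far_m)   # clamp to the tracked range
    t = (far_m - d) / (far_m - near_m)        # 0 at far distance, 1 up close
    return 1.0 + t * (max_zoom - 1.0)

# Example: standing at 1.75 m (mid-range) yields a zoom factor of 4.5.
assert abs(zoom_from_distance(1.75) - 4.5) < 1e-9
```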
tPad: Designing Transparent Display Mobile Interactions
- In Proc. DIS ’14, ACM, 2014
"... As a novel class of mobile devices with rich interaction capabilities we introduce tPads – transparent display tablets. tPads are the result of a systematic design investigation into the ways and benefits of interacting with transparent mobiles which goes beyond traditional mobile interactions and a ..."
Abstract - Cited by 2 (1 self)
As a novel class of mobile devices with rich interaction capabilities we introduce tPads – transparent display tablets. tPads are the result of a systematic design investigation into the ways and benefits of interacting with transparent mobiles which goes beyond traditional mobile interactions and augmented reality (AR) applications. Through a user-centered design process we explored interaction techniques for transparent-display mobiles and classified them into four categories: overlay, dual display & input, surface capture and model-based interactions. We investigated the technical feasibility of such interactions by designing and building two touch-enabled semi-transparent tablets called tPads and a range of tPad applications. Further, a user study shows that tPad interactions applied to everyday mobile tasks (application switching and image capture) outperform current mobile interactions and were preferred by users. Our hands-on design process and experimental evaluation demonstrate that transparent displays provide valuable interaction opportunities for mobile devices.
Interacting with printed books using digital pens and smart mobile projection
- In Proc. of the Workshop on Mobile and Personal Projection (MP2) @ ACM CHI '11
"... Even though the number of e-book readers is continually increasing, it can be assumed that books will still be printed for a long time. To solve the challenge of combining the benefits of real books with the power of digital computing, we propose the concept of Projective Augmented Books (PAB). The ..."
Abstract - Cited by 2 (2 self)
Even though the number of e-book readers is continually increasing, it can be assumed that books will still be printed for a long time. To solve the challenge of combining the benefits of real books with the power of digital computing, we propose the concept of Projective Augmented Books (PAB). The envisioned device works like a reading lamp that can be attached to a printed book. An integrated pico projector displays aligned information onto the book pages. To support active reading tasks, digital paper and pen technology is used for a variety of marking and annotation styles. Dictionary lookup, translation, and copying to a scrapbook are also provided. As early user feedback with our prototype suggests, a truly mobile PAB system could successfully bridge the realm of books and computers. Author Keywords: Digital paper and pen, gestures, interaction, mobile
HideOut: Mobile Projector Interaction with Tangible Objects and Surfaces
- In TEI ’13
"... c d ..."
(Show Context)
PaperVideo: Interacting with Videos On Multiple Paper-like Displays
"... Sifting and sense-making of video collections are important tasks in many professions. In contrast to sense-making of paper documents, where physical structuring of many documents has proven to be key to effective work, interaction with video is still restricted to the traditional "one video at ..."
Abstract - Cited by 1 (0 self)
Sifting and sense-making of video collections are important tasks in many professions. In contrast to sense-making of paper documents, where physical structuring of many documents has proven to be key to effective work, interaction with video is still restricted to the traditional "one video at a time" paradigm. This paper investigates how interaction with video can benefit from paper-like displays that allow for working with multiple videos simultaneously in physical space. We present a corresponding approach and system called PaperVideo, including novel interaction concepts for both video and audio. These include spatial techniques for temporal navigation, arranging, grouping and linking of videos, as well as for managing video contents and simultaneous audio playback on multiple displays. An evaluation with users provides insights into how paper-based navigation with videos improves active video work.