Results 1 - 10 of 39
ShadowGuides: Visualizations for In-Situ Learning of Multi-Touch and Whole-Hand Gestures
"... We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user‟s current hand posture as interpreted by the system (feedback) and available postures ..."
Abstract
-
Cited by 37 (4 self)
- Add to MetaCart
(Show Context)
We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user's current hand posture as interpreted by the system (feedback) and available postures and completion paths necessary to finish the gesture (feedforward). Our experiment compared participants learning gestures with ShadowGuides to those learning with video-based instruction. We found that participants learning with ShadowGuides remembered more gestures and expressed significantly higher preference for the help system. Author Keywords: Gesture learning, multi-finger, displacement, marking menus. ACM Classification Keywords: H.5.2 [Information interfaces and presentation]: User Interfaces – Input devices and strategies; Graphical user interfaces.
Territorial coordination and workspace awareness in remote tabletop collaboration
- Proceedings of CHI 2009: ACM Conference on Human Factors in Computing Systems, ACM, 2009
"... There is growing interest in tabletop interfaces that enable remote collaboration by providing shared workspaces. This approach assumes that these remote tabletops afford the same beneficial work practices as co-located tabletop interfaces and traditional tables. This assumption has not been tested ..."
Abstract
-
Cited by 16 (0 self)
- Add to MetaCart
There is growing interest in tabletop interfaces that enable remote collaboration by providing shared workspaces. This approach assumes that these remote tabletops afford the same beneficial work practices as co-located tabletop interfaces and traditional tables. This assumption has not been tested in practice. We explore two such work practices in remote tabletop collaboration: (a) coordination by territorial partitioning of space; and (b) transitioning between individual and group work within a shared task. We have evaluated co-located and remote tabletop collaboration. We found that remote collaborators did not coordinate territorially as co-located collaborators did. We found no differences between remote and co-located interfaces in their ability to afford individual and group work. However, certain interaction techniques impaired the ability to transition fluidly between these working styles. We discuss causes and the implications for the design and future study of these interfaces. Author Keywords: Remote tabletop interfaces, territoriality, coupling, fluidity.
ShadowPuppets: Supporting Collocated Interaction with Mobile Projector Phones Using Hand Shadows
"... Pico projectors attached to mobile phones allow users to view phone content using a large display. However, to provide input to projector phones, users have to look at the device, diverting their attention from the projected image. Additionally, other collocated users have no way of interacting with ..."
Abstract
-
Cited by 15 (3 self)
- Add to MetaCart
(Show Context)
Pico projectors attached to mobile phones allow users to view phone content using a large display. However, to provide input to projector phones, users have to look at the device, diverting their attention from the projected image. Additionally, other collocated users have no way of interacting with the device. We present ShadowPuppets, a system that supports collocated interaction with mobile projector phones. ShadowPuppets allows users to cast hand shadows as input to mobile projector phones. Most people understand how to cast hand shadows, which provide an easy input modality. Additionally, they implicitly support collocated usage, as nearby users can cast shadows as input and one user can see and understand another user's hand shadows. We describe the results of three user studies. The first study examines what hand shadows users expect will cause various effects. The second study looks at how users perceive hand shadows, examining what effects they think various hand shadows will cause. Finally, we present qualitative results from a study with our functional prototype and discuss design implications for systems using shadows as input. Our findings suggest that shadow input can provide a natural and intuitive way of interacting with projected interfaces and can support collocated collaboration. Author Keywords: Projector-camera system, mobile projector phone, shadow
Digitable: An interactive multiuser table for collocated and remote collaboration enabling remote gesture visualization
- In Proc. of TABLETOP ’07
"... gesture visualization ..."
(Show Context)
PlayTogether: Playing Games across Multiple Interactive Tabletops
- IUI’07 workshop on Tangible Play
"... Playing games together can be surprisingly difficult – people have limited free time and are tending to live live farther away from friends and family. We introduce PlayTogether, a system that lets people play typical (and as-yet-unimagined) board games together even when they are far away from each ..."
Abstract
-
Cited by 10 (1 self)
- Add to MetaCart
(Show Context)
Playing games together can be surprisingly difficult – people have limited free time and tend to live farther away from friends and family. We introduce PlayTogether, a system that lets people play typical (and as-yet-unimagined) board games together even when they are far away from each other. We have adapted the PlayAnywhere tabletop system so that multiple remotely located people can engage in game-play. PlayTogether enhances the play experience by exchanging carefully composited video of remote players' hands and real game pieces. The video that is transmitted mimics a player's viewpoint via careful camera location. Because PlayTogether's camera senses in the infrared, it is easy to distinguish between objects in the camera's view and projected imagery. These capabilities create an interesting and engaging way to blend the virtual and real in multi-player gaming.
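The infrared idea in this abstract lends itself to a short illustration: because a projector's visible-light image does not appear to an IR camera, any pixel that is bright in IR must belong to a real object such as a hand or game piece. The sketch below is a hypothetical NumPy rendition of that masking-and-compositing step, not PlayTogether's actual code; the function names, the threshold of 60, and the assumption that the IR and colour frames are pixel-registered are all illustrative.

import numpy as np

def physical_object_mask(ir_frame: np.ndarray, threshold: int = 60) -> np.ndarray:
    """Return a boolean mask of pixels that belong to physical objects.

    Projected imagery is invisible to the IR camera, so bright IR pixels
    (hands, game pieces under IR illumination) are real objects.
    """
    return ir_frame > threshold

def composite_remote_view(local_color: np.ndarray,
                          ir_frame: np.ndarray,
                          remote_canvas: np.ndarray,
                          alpha: float = 0.8) -> np.ndarray:
    """Blend the local player's hands and pieces onto the remote projected view."""
    mask = physical_object_mask(ir_frame)
    out = remote_canvas.astype(np.float32)
    overlay = local_color.astype(np.float32)
    out[mask] = alpha * overlay[mask] + (1.0 - alpha) * out[mask]
    return out.astype(np.uint8)

if __name__ == "__main__":
    # Synthetic 480x640 frames stand in for real camera input.
    ir = np.zeros((480, 640), dtype=np.uint8)
    ir[200:280, 300:400] = 200            # a "hand" visible in IR
    color = np.full((480, 640, 3), 180, dtype=np.uint8)
    canvas = np.zeros((480, 640, 3), dtype=np.uint8)
    blended = composite_remote_view(color, ir, canvas)
    print(blended[240, 350], blended[0, 0])  # overlaid pixel vs. untouched pixel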
Exploring true multiuser multimodal interaction over a digital table
- Proc. DIS '08, 2008
"... True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study, where we implemented an application that supports the KJ creativity method as used by industrial ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
(Show Context)
True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study, where we implemented an application that supports the KJ creativity method as used by industrial designers. Four key design issues emerged that have a significant impact on how people would use such a multi-user multimodal system. First, parallel work is affected by the design of multimodal commands. Second, individual mode switches can be confusing to collaborators, especially if speech commands are used. Third, establishing personal and group territories can hinder particular tasks that require artefact neutrality. Finally, timing needs to be considered when designing joint multimodal commands. We also describe our model view controller architecture for true multi-user multimodal interaction.
Hugin: A framework for awareness and coordination in mixed-presence collaborative information visualization
- In Proceedings of the ACM Conference on Interactive Tabletops and Surfaces, 2010
"... Analysts are increasingly encountering datasets that are larger and more complex than ever before. Effectively exploring such datasets requires collaboration between multiple analysts, who more often than not are distributed in time or in space. Mixed-presence groupware provide a shared workspace me ..."
Abstract
-
Cited by 5 (3 self)
- Add to MetaCart
Analysts are increasingly encountering datasets that are larger and more complex than ever before. Effectively exploring such datasets requires collaboration between multiple analysts, who more often than not are distributed in time or in space. Mixed-presence groupware provides a shared workspace medium that supports this combination of colocated and distributed collaboration. However, collaborative visualization systems for such distributed settings have their own costs and are still uncommon in the visualization community. We present Hugin, a novel layer-based graphical framework for this kind of mixed-presence synchronous collaborative visualization over digital tabletop displays. The design of the framework focuses on issues like awareness and access control, while using information visualization for the collaborative data exploration on network-connected tabletops. To validate the usefulness of the framework, we also present examples of how Hugin can be used to implement new visualizations supporting these collaborative mechanisms. General terms: Design, Human Factors. ACM Classification: H5.2 [Information interfaces and presentation]: User Interfaces—Graphical user interfaces (GUI)
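The abstract describes Hugin's layer-based design only at a high level. As a rough idea of what "layers with awareness and access control" could look like, here is a hypothetical Python model; the Layer, Workspace, and Access names and the per-user ACL dictionary are assumptions made for illustration, not Hugin's actual API (which is a graphical tabletop framework).

from dataclasses import dataclass, field
from enum import Enum, auto

class Access(Enum):
    VIEW = auto()
    EDIT = auto()

@dataclass
class Layer:
    """One visualization layer in a shared workspace (hypothetical model)."""
    name: str
    owner: str
    # Per-user access rights; the owner always has full rights.
    acl: dict[str, Access] = field(default_factory=dict)

    def can_edit(self, user: str) -> bool:
        return user == self.owner or self.acl.get(user) == Access.EDIT

    def can_view(self, user: str) -> bool:
        return user == self.owner or user in self.acl

@dataclass
class Workspace:
    """A stack of layers shared across network-connected tabletops."""
    layers: list[Layer] = field(default_factory=list)

    def visible_layers(self, user: str) -> list[Layer]:
        return [layer for layer in self.layers if layer.can_view(user)]

if __name__ == "__main__":
    ws = Workspace()
    ws.layers.append(Layer("scatterplot", owner="alice", acl={"bob": Access.VIEW}))
    ws.layers.append(Layer("annotations", owner="bob", acl={"alice": Access.EDIT}))
    print([layer.name for layer in ws.visible_layers("bob")])  # both layers visible
    print(ws.layers[0].can_edit("bob"))                        # False: view-only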
Characterizing Deixis over Surfaces to Improve Remote Embodiments
"... Abstract. Deictic gestures are ubiquitous when people work over tables and whiteboards, but when collaboration occurs across distributed surfaces, the embodiments used to represent other members of the group often fail to convey the details of these gestures. Although both gestures and embodiments h ..."
Abstract
-
Cited by 5 (3 self)
- Add to MetaCart
(Show Context)
Deictic gestures are ubiquitous when people work over tables and whiteboards, but when collaboration occurs across distributed surfaces, the embodiments used to represent other members of the group often fail to convey the details of these gestures. Although both gestures and embodiments have been well studied, there is still little information available to groupware designers about what components and characteristics of deictic gesture are most important for conveying meaning through remote embodiments. To provide this information, we conducted three observational studies in which we recorded and analysed more than 450 deictic gestures. We considered four issues that are important for the design of embodiments on surfaces: what parts of the body are used to produce a deictic gesture, what atomic movements make up deixis, where gestures occur in the space above the surface, and what other characteristics deictic gestures exhibit in addition to pointing. Our observations provide a new design understanding of deictic gestures. We use our results to identify the limitations of current embodiment techniques in supporting deixis, and to propose new hybrid designs that can better represent the range of behavior seen in real-world settings.
An Interactive Whiteboard for Immersive Telecollaboration
- In The Visual Computer 2011
"... Abstract In this paper, we present CollaBoard, a col-laboration system that gives a higher feeling of presence to the local auditory and to the persons on the remote site. By overlaying the remote life-sized video image atop the shared artifacts on the common whiteboard and by keeping the whiteboard ..."
Abstract
-
Cited by 3 (0 self)
- Add to MetaCart
(Show Context)
In this paper, we present CollaBoard, a collaboration system that gives a stronger feeling of presence to the local audience and to the people at the remote site. By overlaying the remote life-sized video image atop the shared artifacts on the common whiteboard and by keeping the whiteboard's content editable at both sites, it creates greater involvement of the remote partners in collaborative teamwork. All deictic gestures of the remote user are shown in the right context with the shared artifacts on the common whiteboard and thus preserve their meaning. The paper describes the hardware setup, the software implementation, and the user studies performed with two identical interconnected systems.
KinectArms: a Toolkit for Capturing and Displaying Arm Embodiments in Distributed Tabletop Groupware
"... Gestures are a ubiquitous part of human communication over tables, but when tables are distributed, gestures become difficult to capture and represent. There are several problems: extracting arm images from video, representing the height of the gesture, and making the arm embodiment visible and unde ..."
Abstract
-
Cited by 3 (0 self)
- Add to MetaCart
(Show Context)
Gestures are a ubiquitous part of human communication over tables, but when tables are distributed, gestures become difficult to capture and represent. There are several problems: extracting arm images from video, representing the height of the gesture, and making the arm embodiment visible and understandable at the remote table. Current solutions to these problems are often expensive, complex to use, and difficult to set up. We have developed a new toolkit – KinectArms – that quickly and easily captures and displays arm embodiments. KinectArms uses a depth camera to segment the video and determine gesture height, and provides several visual effects for representing arms, showing gesture height, and enhancing visibility. KinectArms lets designers add rich arm embodiments to their systems without undue cost or development effort, greatly improving the expressiveness and usability of distributed tabletop groupware.
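The core depth-camera step this abstract mentions (segmenting arms above the table and estimating gesture height) can be sketched in a few lines. KinectArms itself is a full toolkit with visual effects; the NumPy fragment below only illustrates the thresholding idea, and the table depth, the 15 mm margin, and the function names are assumptions made for the example, not the toolkit's API.

import numpy as np

def segment_arms(depth_mm: np.ndarray,
                 table_depth_mm: float,
                 min_height_mm: float = 15.0) -> np.ndarray:
    """Mask pixels that are closer to the camera than the table surface.

    With an overhead depth camera, the flat table sits at a roughly constant
    depth; anything at least min_height_mm above it is treated as an arm or
    hand hovering over the surface.
    """
    height_mm = table_depth_mm - depth_mm   # height above the table, per pixel
    return height_mm > min_height_mm

def gesture_height(depth_mm: np.ndarray, table_depth_mm: float,
                   arm_mask: np.ndarray) -> float:
    """Estimate how high the gesture is, e.g. to fade or scale the embodiment."""
    if not arm_mask.any():
        return 0.0
    return float((table_depth_mm - depth_mm[arm_mask]).max())

if __name__ == "__main__":
    # Synthetic 240x320 depth frame: table at ~900 mm, a hand at ~820 mm.
    depth = np.full((240, 320), 900.0)
    depth[100:140, 150:220] = 820.0
    mask = segment_arms(depth, table_depth_mm=900.0)
    print(mask.sum(), "arm pixels, peak height", gesture_height(depth, 900.0, mask), "mm")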