Results 1 - 10 of 52
Sensetable: A Wireless Object Tracking Platform for Tangible User Interfaces
"... In this paper we present a system that electromagnetically tracks the positions and orientations of multiple wireless objects on a tabletop display surface. The system offers two types of improvements over existing tracking approaches such as computer vision. First, the system tracks objects quickly ..."
Cited by 151 (12 self)
In this paper we present a system that electromagnetically tracks the positions and orientations of multiple wireless objects on a tabletop display surface. The system offers two types of improvements over existing tracking approaches such as computer vision. First, the system tracks objects quickly and accurately without susceptibility to occlusion or changes in lighting conditions. Second, the tracked objects have state that can be modified by attaching physical dials and modifiers. The system can detect these changes in real time. We present several new interaction techniques developed in the context of this system. Finally, we present two applications of the system: chemistry and system dynamics simulation.
Keywords: tangible user interface, interactive surface, object tracking, two-handed manipulation, system dynamics, augmented reality
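The abstract describes pucks whose position, orientation, and attached-dial state are read continuously; a minimal sketch of what a client loop over such a tracker could look like follows. The TrackedObject class, poll_tracker() function, and 60 Hz rate are illustrative assumptions, not Sensetable's actual API.

    # Hypothetical client loop for a Sensetable-like tracking platform.
    # All names and values here are illustrative stand-ins.
    import time

    class TrackedObject:
        def __init__(self, tag_id, x, y, angle, dial):
            self.tag_id = tag_id   # identity of the wireless puck
            self.x, self.y = x, y  # position on the table surface
            self.angle = angle     # orientation, radians
            self.dial = dial       # state of an attached physical dial

    def poll_tracker():
        """Stand-in for the electromagnetic position read-out."""
        return [TrackedObject(1, 0.42, 0.17, 1.05, 3)]

    def update_simulation(objects):
        for obj in objects:
            # e.g. bind puck position and dial state to simulation parameters
            print(f"tag {obj.tag_id}: ({obj.x:.2f}, {obj.y:.2f}) dial={obj.dial}")

    for _ in range(3):             # a real loop would run until shutdown
        update_simulation(poll_tracker())
        time.sleep(1 / 60)         # assumed update rate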
Combining multiple depth cameras and projectors for interactions on, above and between surfaces
- In Proc. UIST
"... Figure 1: LightSpace prototype combines depth cameras and projectors to provide interactivity on and between surfaces in everyday environments. LightSpace interactions include through-body object transitions between existing interactive surfaces (a-b) and interactions with an object in hand (c-d). I ..."
Cited by 79 (6 self)
Figure 1: LightSpace prototype combines depth cameras and projectors to provide interactivity on and between surfaces in everyday environments. LightSpace interactions include through-body object transitions between existing interactive surfaces (a-b) and interactions with an object in hand (c-d). Images (a) and (c) show real images of the user experience, while images (b) and (d) show the virtual 3D mesh representation used to reason about users and interactions in space.
Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real-world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk) and facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.
ACM Classification: H5.2 [Information interfaces and presentation]
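The key mechanism in this abstract is calibrating projectors to 3D world coordinates; a minimal sketch of that mapping may help: once a projector is modeled as a calibrated camera with a 3x4 projection matrix, any world point visible to it maps to the pixel that can light it. The matrix values below are placeholders, not LightSpace's calibration data.

    # Sketch: treat the projector as a calibrated camera and project a 3D
    # world point into projector pixel coordinates. Matrix values assumed.
    import numpy as np

    P = np.array([[1000.0, 0.0, 640.0, 0.0],    # assumed intrinsics @ [R|t]
                  [0.0, 1000.0, 360.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])

    def world_to_projector_pixel(P, X):
        """Map a 3D world point X (meters) to projector pixel coordinates."""
        Xh = np.append(X, 1.0)      # homogeneous coordinates
        u, v, w = P @ Xh
        return u / w, v / w         # perspective divide

    print(world_to_projector_pixel(P, np.array([0.1, -0.05, 2.0])))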
RFIG lamps: Interacting with a self-describing world via photosensing wireless tags and projectors
- ACM Transactions on Graphics (TOG)
"... This paper describes how to instrument the physical world so that objects become self-describing, communicating their identity, geometry, and other information such as history or user annotation. The enabling technology is a wireless tag which acts as a radio frequency identity and geometry (RFIG) t ..."
Cited by 73 (11 self)
This paper describes how to instrument the physical world so that objects become self-describing, communicating their identity, geometry, and other information such as history or user annotation. The enabling technology is a wireless tag which acts as a radio frequency identity and geometry (RFIG) transponder. We show how the addition of a photo-sensor to a wireless tag significantly extends its functionality to allow geometric operations, such as finding the 3D position of a tag or detecting change in the shape of a tagged object. Tag data is presented to the user by direct projection using a handheld locale-aware mobile projector. We introduce a novel technique that we call interactive projection to allow a user to interact with projected information, e.g., to navigate or update the projected information. The ideas are demonstrated using objects with active radio frequency (RF) tags, but the work was motivated by the advent of unpowered passive RFID, a technology that promises to have significant impact in real-world applications. We discuss how our current prototypes could evolve to passive RFID in the future.
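The abstract does not spell out how a photosensing tag's position is found; a standard structured-light scheme consistent with the description is sketched below, assuming Gray-coded stripe patterns: each projected pattern contributes one bit at the tag's photosensor, and the bit string decodes to the projector column over the tag. The pattern count and recorded bits are made-up examples.

    # Sketch of structured-light tag location via Gray-coded stripes: 10
    # patterns resolve one column of a 1024-pixel-wide projector image.
    def gray_to_binary(bits):
        """Decode a Gray-code bit list (MSB first) to an integer."""
        value = bits[0]
        out = value
        for b in bits[1:]:
            value ^= b              # each decoded bit is XOR of Gray bits so far
            out = (out << 1) | value
        return out

    recorded = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # bits one tag might record
    print("tag sits under projector column", gray_to_binary(recorded))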
Tangible products: Redressing the balance between appearance and action
- Personal and Ubiquitous Computing
, 2004
"... Abstract Over the past decade, our group has approached interaction design from an industrial design point of view. In 1 doing so, we focus on a branch of design called formgiving. Traditionally, formgiving has been concerned with such aspects of objects as form, colour, texture and material. In the ..."
Cited by 56 (2 self)
Over the past decade, our group has approached interaction design from an industrial design point of view. In doing so, we focus on a branch of design called formgiving.1 Traditionally, formgiving has been concerned with such aspects of objects as form, colour, texture and material. In the context of interaction design, we have come to see formgiving as the way in which objects appeal to our senses and motor skills. In this paper we first describe our approach to interaction design of electronic products. We start with how we have been first inspired and then disappointed by the Gibsonian perception movement [1], how we have come to see both appearance and actions as carriers of meaning, and how we see usability and aesthetics as inextricably linked. We then show a number of interaction concepts for consumer electronics with both our initial thinking and what we learnt from them. Finally, we discuss the relevance of all this for tangible interaction. We argue that in addition to a data-centred view it is also possible to take a perceptual-motor centred view on tangible interaction. In this view it is the rich opportunities for differentiation in appearance and action possibilities that make physical objects open up new avenues to meaning and aesthetics in interaction design.
Keywords: tangible interaction, industrial design, ecological psychology, semantics
1. Whilst formgiving is somewhat of a neologism in English, many other European languages do have a separate word for form-related design,
Dynamic Shader Lamps: Painting on Movable Objects
- In Proceedings of Int. Symp. on Augmented Reality
"... We present a Dynamic Spatially Augmented Reality system for augmenting movable 3D objects in an indoor environment using multiple projectors. We describe a real-time system for applying virtual paint and textures to real objects simply by direct physical manipulation of the object and a "paint ..."
Cited by 56 (4 self)
We present a Dynamic Spatially Augmented Reality system for augmenting movable 3D objects in an indoor environment using multiple projectors. We describe a real-time system for applying virtual paint and textures to real objects simply by direct physical manipulation of the object and a "paintbrush" stylus. We track the objects and the "paintbrush", and illuminate the objects with images that remain registered as they move, to create the illusion of material properties. The system is simple to use and we hope it may herald new applications in diverse fields such as visualization, tele-immersion, art and architecture. The system currently works with tracked objects whose geometry was pre-acquired and whose models were created manually, but it is possible to extend it, by adding cameras to the environment, to acquire object geometry automatically and use vision-based tracking for the object and paintbrush.
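A rough sketch of the per-frame registration step implied by "images that remain registered as they move": read the tracked object's pose, transform the pre-acquired model by it, and project from the projector's calibrated viewpoint. The matrices and function names below are assumptions, not the paper's implementation.

    # Sketch of a registered-projection frame: model space -> world (tracked
    # pose) -> projector clip space. All values are illustrative.
    import numpy as np

    def pose_to_matrix(R, t):
        """Build a 4x4 rigid transform from rotation R (3x3), translation t."""
        M = np.eye(4)
        M[:3, :3] = R
        M[:3, 3] = t
        return M

    def frame(tracker_pose, model_vertices, projector_view_proj):
        R, t = tracker_pose
        mvp = projector_view_proj @ pose_to_matrix(R, t)
        Vh = np.c_[model_vertices, np.ones(len(model_vertices))]
        clip = (mvp @ Vh.T).T
        return clip[:, :2] / clip[:, 3:4]   # projected 2D positions

    # toy call: identity pose and projection, one model vertex
    print(frame((np.eye(3), np.zeros(3)), np.array([[0.1, 0.2, 1.0]]), np.eye(4)))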
A projector-camera system with real-time photometric adaptation for dynamic environments
, 2005
"... Abstract Projection systems can be used to implement augmented reality, as well as to create both displays and interfaces on ordinary surfaces. Ordinary surfaces have varying reflectance, color, and geometry. These variations can be accounted for by integrating a camera into the projection system a ..."
Cited by 56 (2 self)
Projection systems can be used to implement augmented reality, as well as to create both displays and interfaces on ordinary surfaces. Ordinary surfaces have varying reflectance, color, and geometry. These variations can be accounted for by integrating a camera into the projection system and applying methods from computer vision. The methods currently applied are fundamentally limited since they assume the camera, projector, and scene are static. In this paper, we describe a technique for photometrically adaptive projection that makes it possible to handle a dynamic environment. We begin by presenting a co-axial projector-camera system whose geometric correspondence is independent of changes in the environment. To handle photometric changes, our method uses the errors between the desired and measured appearance of the projected image. A key novel aspect of our algorithm is that we combine a physics-based model with dynamic feedback to achieve real-time adaptation to the changing environment. We verify our algorithm through a wide variety of experiments. We show that it is accurate and runs in real-time. Our algorithm can be applied broadly to assist HCI, visualization, shape recovery, and entertainment applications.
Camera Assisted Projection. The recent availability of cheap, small, and bright projectors has made it practical to use them for a wide range of applications such as creating large seamless displays. All previous work that takes into account both geometric and photometric properties of projection has assumed both a static scene and projection system. The assumption that the scene and system remain static is very restrictive. This is especially true when we consider recently presented applications that require hand-held or mobile projection systems such as iLamps and RFIG Lamps. One approach that was proposed to handle photometric changes is direct dynamic feedback from the camera for each pixel. In this work, we present a novel hybrid method which combines a model-based approach with dynamic feedback.
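A minimal sketch of the hybrid update the abstract describes, assuming a per-pixel linear radiometric model C = V*P + F (measured appearance C, projector input P, surface response V, ambient term F): a feedforward term inverts the model and a feedback term corrects the residual error between desired and camera-measured appearance. The model form and gain are illustrative, not the paper's exact algorithm.

    # Sketch of model-plus-feedback photometric adaptation, per pixel.
    # The linear model C = V*P + F and gain=0.5 are assumptions.
    import numpy as np

    def adapt(desired, measured, V, F, gain=0.5):
        """One adaptation step; all arguments are per-pixel arrays."""
        feedforward = (desired - F) / np.maximum(V, 1e-6)   # invert the model
        feedback = gain * (desired - measured) / np.maximum(V, 1e-6)
        return np.clip(feedforward + feedback, 0.0, 1.0)    # projector input

    # toy example: one pixel that measures darker than desired
    print(adapt(desired=np.array([0.6]), measured=np.array([0.5]),
                V=np.array([0.8]), F=np.array([0.1])))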
Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation
, 2002
"... There is a problem in the spatial and temporal separation between the varying forms of representation used in urban design. Sketches, physical models, and more recently computational simulation, while each serving a useful purpose, tend to be incompatible forms of representation. The contemporary d ..."
Cited by 44 (1 self)
There is a problem in the spatial and temporal separation between the varying forms of representation used in urban design. Sketches, physical models, and more recently computational simulation, while each serving a useful purpose, tend to be incompatible forms of representation. The contemporary designer is required to assimilate these divergent media into a single mental construct and in so doing is distracted from the central process of design.
A comparison of spatial organization strategies in graphical and tangible user interfaces
- In Proceedings of DARE 2000 on Designing Augmented Reality Environments
, 2000
"... We present a study comparing how people use space in a Tangible User Interface (TUI) and in a Graphical User Interface (GUI). We asked subjects to read ten summaries of recent news articles and to think about the relationships between them. In our TUI condition, we bound each of the summaries to one ..."
Cited by 40 (0 self)
We present a study comparing how people use space in a Tangible User Interface (TUI) and in a Graphical User Interface (GUI). We asked subjects to read ten summaries of recent news articles and to think about the relationships between them. In our TUI condition, we bound each of the summaries to one of ten visually identical wooden blocks. In our GUI condition, each summary was represented by an icon on the screen. We asked subjects to indicate the location of each summary by pointing to the corresponding icon or wooden block. Afterward, we interviewed them about the strategies they used to position the blocks or icons during the task. We observed that TUI subjects performed better at the location recall task than GUI subjects. In addition, some TUI subjects used the spatial relationship between specific blocks and parts of the environment to help them remember the content of those blocks, while GUI subjects did not do this. Those TUI subjects who reported encoding information using this strategy tended to perform better at the recall task than those who did not.
MirageTable: freehand interaction on a projected augmented reality tabletop
- Proc. of CHI’12
"... Figure 1. MirageTable is a curved projection-based augmented reality system (A), which digitizes any object on the surface (B), presenting correct perspective views accounting for real objects (C) and supporting freehand physics-based interactions (D). Instrumented with a single depth camera, a ster ..."
Cited by 34 (7 self)
Figure 1. MirageTable is a curved projection-based augmented reality system (A), which digitizes any object on the surface (B), presenting correct perspective views accounting for real objects (C) and supporting freehand physics-based interactions (D).
Instrumented with a single depth camera, a stereoscopic projector, and a curved screen, MirageTable is an interactive system designed to merge real and virtual worlds into a single spatially registered experience on top of a table. Our depth camera tracks the user's eyes and performs a real-time capture of both the shape and the appearance of any object placed in front of the camera (including the user's body and hands). This real-time capture enables perspective stereoscopic 3D visualizations for a single user that account for deformations caused by physical objects on the table. In addition, the user can interact with virtual objects through physically realistic freehand actions without any gloves,
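A small sketch of the view-dependent rendering step implied by eye tracking: a virtual 3D point is drawn where the ray from the tracked eye through that point meets the display surface, so it appears correctly placed to that viewer. A flat z = 0 plane stands in here for MirageTable's curved screen, and all coordinates are made up.

    # Sketch: perspective-correct placement of a virtual point for one viewer.
    import numpy as np

    def draw_position(eye, point):
        """Intersect the ray from eye through point with the plane z = 0."""
        direction = point - eye
        t = -eye[2] / direction[2]       # ray parameter where z reaches 0
        return (eye + t * direction)[:2]

    eye = np.array([0.0, -0.3, 0.5])     # tracked eye position, meters
    virtual = np.array([0.1, 0.1, 0.2])  # virtual object above the table
    print(draw_position(eye, virtual))   # where to render it on the surface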
Using a Steerable Projector and a Camera to Transform Surfaces into Interactive Displays
- In CHI '01 Extended Abstracts on Human Factors in Computing Systems
, 2001
"... The multi-surface interactive display projector (MSIDP) is a steerable projection system that transforms non-tethered surfaces into interactive displays. In an MSIDP, the display image is directed onto a surface by a rotating mirror. Oblique projection distortions are removed by a computer-graphics ..."
Cited by 25 (1 self)
The multi-surface interactive display projector (MSIDP) is a steerable projection system that transforms non-tethered surfaces into interactive displays. In an MSIDP, the display image is directed onto a surface by a rotating mirror. Oblique projection distortions are removed by a computer-graphics reverse-distortion process and user interaction (pointing and clicking) is achieved by detecting hand movements with a video camera. The MSIDP is a generic input/output device to be used in applications that require computer access from different locations of a space or computer action in the real world (such as locating objects). In particular, it can also be used to provide computer access in public spaces and to people with locomotive disabilities.
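The "computer-graphics reverse-distortion process" can be illustrated with a homography pre-warp, sketched below with OpenCV: given the quad of projector pixels that calibration shows landing on a true rectangle on the surface, each frame is warped into that quad before projection, so the oblique projection appears rectangular. The corner coordinates are made-up calibration values, not from the MSIDP.

    # Sketch of oblique-projection correction via a homography pre-warp.
    import cv2
    import numpy as np

    src = np.float32([[0, 0], [1024, 0], [1024, 768], [0, 768]])    # ideal image
    dst = np.float32([[80, 40], [950, 90], [900, 700], [120, 730]]) # calibrated quad
    H = cv2.getPerspectiveTransform(src, dst)

    frame = np.zeros((768, 1024, 3), np.uint8)           # frame to display
    warped = cv2.warpPerspective(frame, H, (1024, 768))  # send this to projector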