Results 1 - 10 of 27
Skinput: Appropriating the Body as an Input Surface
"... We present Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect thes ..."
Cited by 112 (10 self)
We present Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always available, naturally portable, and on-body finger input system. We assess the capabilities, accuracy and limitations of our technique through a two-part, twenty-participant user study. To further illustrate the utility of our approach, we conclude with several proof-of-concept applications we developed.
Author Keywords: Bio-acoustics, finger input, buttons, gestures, on-body interaction.
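The abstract describes the classification pipeline only at a high level. As a rough illustration of the idea, the sketch below featurizes a multi-channel tap recording with FFT magnitudes and assigns it to the nearest trained skin-location centroid. All names and parameters are hypothetical assumptions, not the authors' code; the published system used a support vector machine over hand-crafted acoustic features, which the simple centroid classifier here only stands in for.

```python
# Hypothetical sketch of Skinput-style tap localization (not the authors' code).
# Assumes each tap is a short multi-channel vibration clip from the armband
# sensors; we featurize with FFT magnitudes and use a nearest-centroid
# classifier trained on labeled taps per skin location.
import numpy as np

def featurize(clip, n_bins=32):
    """clip: (n_sensors, n_samples) array for one tap. Returns a flat feature vector."""
    spec = np.abs(np.fft.rfft(clip, axis=1))[:, :n_bins]  # low-frequency magnitudes
    spec /= spec.sum() + 1e-9                             # normalize overall energy
    return spec.ravel()

class NearestCentroid:
    def fit(self, clips, labels):
        feats = np.array([featurize(c) for c in clips])
        self.classes_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[np.array([l == c for l in labels])].mean(axis=0)
             for c in self.classes_])
        return self

    def predict(self, clip):
        d = np.linalg.norm(self.centroids_ - featurize(clip), axis=1)
        return self.classes_[int(np.argmin(d))]
```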
OmniTouch: wearable multitouch interaction everywhere
- In Proc. ACM UIST ’11
, 2011
"... Figure 1. OmniTouch is a wearable depth-sensing and projection system that allows everyday surfaces- including a wearer’s own body- to be appropriated for graphical multitouch interaction. OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on ..."
Cited by 86 (11 self)
[Figure 1 caption: OmniTouch is a wearable depth-sensing and projection system that allows everyday surfaces, including a wearer's own body, to be appropriated for graphical multitouch interaction.]
OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or environment. Foremost, the system allows the wearer to use their hands, arms and legs as graphical, interactive surfaces. Users can also transiently appropriate surfaces from the environment to expand the interactive area (e.g., books, walls, tables). On such surfaces, without any calibration, OmniTouch provides capabilities similar to those of a mouse or touchscreen: X and Y location in 2D interfaces and whether fingers are "clicked" or hovering, enabling a wide variety of interactions. Reliable operation on the hands, for example, requires buttons to be 2.3 cm in diameter. Thus, it is now conceivable that anything one can do on today's mobile devices could be done in the palm of the hand.
ACM Classification: H.5.2 [Information interfaces and presentation]: User interfaces.
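To make the "clicked or hovering" distinction concrete, here is a minimal sketch of one plausible way to derive click state from depth data: compare fingertip depth against the surface beneath it, with hysteresis to avoid flicker at the decision boundary. The thresholds and class names are assumptions for illustration, not values from the paper.

```python
# Hypothetical click/hover discriminator in the spirit of OmniTouch (not the
# authors' code). A fingertip counts as "clicked" when its depth is within a
# small band of the surface beneath it; hysteresis prevents rapid toggling.
CLICK_MM = 5.0     # assumed: within 5 mm of the surface counts as contact
RELEASE_MM = 10.0  # assumed: must rise 10 mm above the surface to release

class ClickTracker:
    def __init__(self):
        self.down = False

    def update(self, fingertip_depth_mm, surface_depth_mm):
        gap = surface_depth_mm - fingertip_depth_mm  # finger height above surface
        if not self.down and gap <= CLICK_MM:
            self.down = True
        elif self.down and gap >= RELEASE_MM:
            self.down = False
        return self.down
```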
Handy AR: Markerless Inspection of Augmented Reality Objects Using Fingertip Tracking
"... We present markerless camera tracking and user interface methodology for readily inspecting augmented reality (AR) objects in wearable computing applications. Instead of a marker, we use the human hand as a distinctive pattern that almost all wearable computer users have readily available. We presen ..."
Cited by 22 (1 self)
We present markerless camera tracking and a user interface methodology for readily inspecting augmented reality (AR) objects in wearable computing applications. Instead of a marker, we use the human hand as a distinctive pattern that almost all wearable computer users have readily available. We present a robust real-time algorithm that recognizes fingertips to reconstruct the 6DOF camera pose relative to the user's outstretched hand. A hand pose model is constructed in a one-time calibration step by measuring the fingertip positions in the presence of ground-truth scale information. Through frame-by-frame reconstruction of the camera pose relative to the hand, we can stabilize 3D graphics annotations on top of the hand, allowing the user to inspect such virtual objects conveniently from different viewing angles in AR. We evaluate our approach with regard to speed and accuracy, and compare it to state-of-the-art marker-based AR systems. We demonstrate the robustness and usefulness of our approach in an example AR application for selecting and inspecting world-stabilized virtual objects.
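The core geometric step, recovering a 6DOF camera pose from five calibrated fingertip positions, maps directly onto a standard perspective-n-point solve. A minimal sketch using OpenCV's solvePnP follows; the hand-model coordinates and function names are placeholder assumptions, not the authors' calibration data or code.

```python
# Hypothetical sketch of recovering a 6DOF camera pose from fingertip
# detections, as Handy AR does conceptually (not the authors' code).
# The five 3D fingertip positions come from a one-time hand calibration;
# solvePnP then yields the camera pose relative to the hand each frame.
import numpy as np
import cv2

# Assumed 3D fingertip coordinates (mm) in the hand frame, from calibration.
hand_model_3d = np.array([
    [-60.0,  80.0, 0.0],  # thumb
    [-25.0, 120.0, 0.0],  # index
    [  0.0, 130.0, 0.0],  # middle
    [ 25.0, 120.0, 0.0],  # ring
    [ 50.0,  95.0, 0.0],  # little
], dtype=np.float64)

def hand_pose(fingertips_2d, camera_matrix, dist_coeffs):
    """fingertips_2d: (5, 2) detected tip pixels, same order as the model."""
    ok, rvec, tvec = cv2.solvePnP(
        hand_model_3d, np.asarray(fingertips_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation of the hand frame w.r.t. the camera
    return R, tvec
```

solvePnP needs at least four non-degenerate correspondences, which the five fingertips provide, so a single outstretched hand suffices per frame.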
On-body interaction: armed and dangerous
- Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction - TEI ’12
, 2012
"... Recent technological advances in input sensing, as well as ultra-small projectors, have opened up new opportunities for interaction – the use of the body itself as both an input and output platform. Such on-body interfaces offer new interac-tive possibilities, and the promise of access to computatio ..."
Cited by 13 (0 self)
Recent technological advances in input sensing, as well as ultra-small projectors, have opened up new opportunities for interaction – the use of the body itself as both an input and output platform. Such on-body interfaces offer new interactive possibilities, and the promise of access to computation, communication and information literally in the palm of our hands. The unique context of on-body interaction allows us to take advantage of extra dimensions of input our bodies naturally afford us. In this paper, we consider how the arms and hands can be used to enhance on-body interactions, which are typically finger-input centric. To explore this opportunity, we developed Armura, a novel interactive on-body system supporting both input and graphical output. Using this platform as a vehicle for exploration, we prototyped many applications and interactions. This helped to confirm chief use modalities, identify fruitful interaction approaches, and, in general, better understand how interfaces operate on the body. We highlight the most compelling techniques we uncovered. Further, this paper is the first to consider and prototype how conventional interaction issues, such as cursor control and clutching, apply to the on-body domain. Finally, we bring to light several new and unique interaction techniques.
ACM Classification: H.5.2 [Information interfaces and presentation]: User interfaces.
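As a concrete reading of the clutching issue the paper raises, the sketch below shows a minimal clutch state machine for on-body relative cursor control: hand displacement moves the cursor only while the clutch is engaged, so the hand can reposition freely when disengaged. This is purely illustrative; the gain value and API are assumptions, not Armura's design.

```python
# Hypothetical clutching mechanism for on-body relative cursor control,
# illustrating the interaction issue Armura examines (not the authors' code).
class ClutchedCursor:
    def __init__(self, gain=1.5):
        self.gain = gain         # assumed control-display gain
        self.cursor = [0.0, 0.0]
        self.last = None
        self.engaged = False

    def set_clutch(self, engaged):
        self.engaged = engaged
        self.last = None         # forget stale positions across transitions

    def update(self, hand_xy):
        """Feed tracked hand position each frame; returns the cursor position."""
        if self.engaged and self.last is not None:
            self.cursor[0] += self.gain * (hand_xy[0] - self.last[0])
            self.cursor[1] += self.gain * (hand_xy[1] - self.last[1])
        self.last = hand_xy
        return tuple(self.cursor)
```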
Multitouchless: Real-time fingertip detection and tracking using geodesic maxima
- In 10th IEEE Int. Conf. on Automatic Face and Gesture Recognition (FG 2013)
, 2013
"... Abstract-Since the advent of multitouch screens users have been able to interact using fingertip gestures in a two dimensional plane. With the development of depth cameras, such as the Kinect, attempts have been made to reproduce the detection of gestures for three dimensional interaction. Many of ..."
Cited by 6 (2 self)
Since the advent of multitouch screens, users have been able to interact using fingertip gestures in a two-dimensional plane. With the development of depth cameras, such as the Kinect, attempts have been made to reproduce the detection of gestures for three-dimensional interaction. Many of these use contour analysis to find the fingertips; however, the success of such approaches is limited by sensor noise and rapid movements. This paper discusses an approach to identify fingertips during rapid movement at varying depths, allowing multitouch without contact with a screen. To achieve this, we use a weighted graph that is built using the depth information of the hand to determine the geodesic maxima of the surface. Fingertips are then selected from these maxima using a simplified model of the hand, and correspondence is found over successive frames. Our experiments show real-time performance for multiple users, providing tracking at 30 fps for up to 4 hands, and we compare our results with state-of-the-art methods, achieving accuracy an order of magnitude better than existing approaches.
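The geodesic-maxima idea can be illustrated with a short Dijkstra pass over the hand's depth pixels: build a 4-connected graph weighted by depth-aware step costs and take the geodesically farthest points from the hand centre as fingertip candidates. This is a sketch under assumed conventions, not the authors' implementation, which also includes the hand model and frame-to-frame correspondence.

```python
# Hypothetical sketch of geodesic-maxima fingertip candidates (not the
# authors' implementation). Builds a 4-connected graph over hand pixels,
# weights edges by the depth step between neighbours, and runs Dijkstra
# from the hand centre; geodesically farthest pixels are tip candidates.
import heapq
import numpy as np

def geodesic_distances(depth_mm, hand_mask, seed):
    """Dijkstra over hand pixels. depth_mm: (H, W) floats; seed: (row, col)."""
    H, W = depth_mm.shape
    dist = np.full((H, W), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < H and 0 <= nc < W and hand_mask[nr, nc]:
                # Edge cost: one pixel in-plane plus the depth step (assumed metric).
                w = np.hypot(1.0, depth_mm[nr, nc] - depth_mm[r, c])
                if d + w < dist[nr, nc]:
                    dist[nr, nc] = d + w
                    heapq.heappush(heap, (d + w, (nr, nc)))
    return dist  # local maxima of this map are fingertip candidates
```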
Binocular Hand Tracking and Reconstruction Based on 2D Shape Matching
, 2006
"... This paper presents a method for real-time 3D hand tracking in images acquired by a calibrated, possibly moving stereoscopic rig. The proposed method consists of a collection of techniques that enable the modeling and detection of hands, their temporal association in image sequences, the establishme ..."
Cited by 5 (2 self)
This paper presents a method for real-time 3D hand tracking in images acquired by a calibrated, possibly moving stereoscopic rig. The proposed method consists of a collection of techniques that enable the modeling and detection of hands, their temporal association in image sequences, the establishment of hand correspondences between stereo images, and the 3D reconstruction of their contours. Building upon our previous research on color-based 2D skin-color tracking, the 3D hand tracker is developed by coupling the results of two 2D skin-color trackers that run independently on the two video streams acquired by a stereoscopic system. The proposed method runs in real time on a conventional Pentium 4 processor when operating on 320x240 images. Representative experimental results are also presented.
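The 3D reconstruction step, once a hand point has been matched across the two calibrated views, is classical triangulation. A minimal sketch using OpenCV's triangulatePoints is below; it assumes projection matrices are available from the rig calibration and omits the detection, tracking, and association stages that the paper also covers.

```python
# Hypothetical sketch of the stereo step: given a hand point matched between
# the left and right views of a calibrated rig, recover its 3D position by
# triangulation (not the authors' code).
import numpy as np
import cv2

def triangulate_point(P_left, P_right, xy_left, xy_right):
    """P_*: (3, 4) camera projection matrices; xy_*: matched pixel coordinates."""
    pts_l = np.asarray(xy_left, dtype=np.float64).reshape(2, 1)
    pts_r = np.asarray(xy_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # (4, 1) homogeneous
    return (X_h[:3] / X_h[3]).ravel()  # dehomogenize to 3D coordinates
```

Applied along each matched contour, this recovers the 3D hand contours the abstract describes.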
Propagation of Pixel Hypotheses for Multiple Objects Tracking
"... Abstract. In this paper we propose a new approach for tracking multiple objects in image sequences. The proposed approach differs from existing ones in important aspects of the representation of the location and the shape of tracked objects and of the uncertainty associated with them. The location a ..."
Cited by 4 (1 self)
In this paper we propose a new approach for tracking multiple objects in image sequences. The proposed approach differs from existing ones in important aspects of how it represents the location and shape of tracked objects and the uncertainty associated with them. The location and speed of each object are modeled as a discrete-time linear dynamical system which is tracked using Kalman filtering. Information about the spatial distribution of the pixels of each tracked object is passed on from frame to frame by propagating a set of pixel hypotheses, uniformly sampled from the original object's projection, to the target frame using the object's current dynamics as estimated by the Kalman filter. The density of the propagated pixel hypotheses provides a novel metric that is used to associate image pixels with existing object tracks by taking into account both the shape of each object and the uncertainty associated with its track. The proposed tracking approach has been developed to support face and hand tracking for human-robot interaction. Nevertheless, it is readily applicable to a much broader class of multiple-object tracking problems.
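The per-object dynamics described here are a standard constant-velocity Kalman filter, with pixel hypotheses shifted by the estimated velocity. The sketch below shows one predict/update cycle and the hypothesis propagation under assumed noise parameters; it illustrates the stated model rather than reproducing the authors' code.

```python
# Hypothetical sketch of the constant-velocity Kalman step used per object
# (not the authors' code). State is [x, y, vx, vy]; pixel hypotheses sampled
# from the object's mask are shifted by the current velocity estimate.
import numpy as np

dt = 1.0  # assumed: one frame per step
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)  # constant-velocity dynamics
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only position is observed
Q = np.eye(4) * 0.1                        # assumed process noise
R = np.eye(2) * 2.0                        # assumed measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle. x: (4,) state; P: (4, 4) covariance; z: (2,) centroid."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # correct with the measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

def propagate_hypotheses(pixels, x):
    """Shift sampled object pixels (N, 2) by the estimated per-frame velocity."""
    return pixels + x[2:4]
```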
Skinput: Appropriating the Skin as an Interactive Canvas
- doi:10.1145/1978542.1978564
"... Skinput is a technology that appropriates the skin as an input surface by analyzing mechanical vibrations that propagate through the body. Specifically, we resolve the location of finger taps on the arm and hand using a novel sensor array, worn as an armband. This approach provides an onbody finger ..."
Cited by 4 (0 self)
Skinput is a technology that appropriates the skin as an input surface by analyzing mechanical vibrations that propagate through the body. Specifically, we resolve the location of finger taps on the arm and hand using a novel sensor array, worn as an armband. This approach provides an on-body finger input system that is always available, naturally portable, and minimally invasive. When coupled with a pico-projector, a fully interactive graphical interface can be rendered directly on the body.
SurroundSee: enabling peripheral vision on smartphones during active use
- In Proc. UIST 2013
"... Mobile devices are endowed with significant sensing capabilities. However, their ability to ‘see ’ their surroundings, during active use, is limited. We present Surround-See, a self-contained smartphone equipped with an omni-directional camera that enables peripheral vision around the device to augm ..."
Cited by 3 (1 self)
Mobile devices are endowed with significant sensing capabilities. However, their ability to 'see' their surroundings during active use is limited. We present SurroundSee, a self-contained smartphone equipped with an omni-directional camera that enables peripheral vision around the device to augment daily mobile tasks. SurroundSee provides mobile devices with a field of view collinear to the device screen. This capability facilitates novel mobile tasks such as pointing at objects in the environment to interact with content, operating the mobile device at a physical distance, and allowing the device to detect user activity even when the user is not holding it. We describe SurroundSee's architecture and demonstrate applications that exploit peripheral 'seeing' capabilities during active use of a mobile device. Users confirm the value of embedding peripheral vision capabilities on mobile devices and offer insights for novel usage methods.
Author Keywords: Peripheral mobile vision, mobile 'seeing', mobile surround
Particle filter-based fingertip tracking with circular Hough transform feature
- In Proceedings of the 12th IAPR Conference on Machine Vision Applications
, 2011
"... Abstract In this work, we present a fingertip tracking framework which allows observation of finger movements in task space. By applying a multi-scale edge extraction technique, an edge map is generated in which low contrast edges are preserved while noise is suppressed. Based on circular image fea ..."
Cited by 1 (0 self)
In this work, we present a fingertip tracking framework which allows observation of finger movements in task space. By applying a multi-scale edge extraction technique, an edge map is generated in which low-contrast edges are preserved while noise is suppressed. Based on circular image features determined from the map using the Hough transform, the fingertips are accurately tracked by combining a particle filter with a subsequent mean-shift procedure. To increase the robustness of the proposed method, dynamical motion models are trained for the prediction of finger displacements. Experiments were conducted on various image sequences, from which the performance of the framework can be assessed.
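To show how the pieces named in the abstract could fit together, here is a minimal particle-filter step whose observation likelihood is proximity to Hough-circle centres. The mean-shift refinement and trained motion models are omitted, and all parameters and function names are assumptions for illustration rather than values from the paper.

```python
# Hypothetical particle-filter sketch in the spirit of this tracker (not the
# authors' code). Particles predict the fingertip position; each is weighted
# by its distance to the nearest circle centre found by a Hough transform,
# and the set is then resampled.
import numpy as np
import cv2

def hough_centres(gray):
    """Circle centres from an 8-bit grayscale frame (parameters are assumptions)."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                               param1=100, param2=20, minRadius=3, maxRadius=15)
    return np.empty((0, 2)) if circles is None else circles[0][:, :2]

def particle_filter_step(particles, gray, motion_std=4.0, obs_std=6.0, rng=np.random):
    """particles: (N, 2) fingertip hypotheses; returns resampled set and estimate."""
    particles = particles + rng.normal(0, motion_std, particles.shape)  # predict
    centres = hough_centres(gray)
    if len(centres) == 0:
        return particles, particles.mean(axis=0)  # no evidence this frame
    # Weight each particle by proximity to its nearest Hough circle centre.
    d = np.linalg.norm(particles[:, None, :] - centres[None, :, :], axis=2).min(axis=1)
    w = np.exp(-0.5 * (d / obs_std) ** 2)
    if w.sum() <= 0:
        w = np.ones(len(particles))  # degenerate weights: fall back to uniform
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # resample
    particles = particles[idx]
    return particles, particles.mean(axis=0)
```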