Results 1 - 10 of 169
The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms
, 2002
Abstract - Cited by 376 (12 self)
The interactive workspaces project explores new possibilities for people working together in technology-rich spaces. The project focuses on augmenting a dedicated meeting space with large displays, wireless or multimodal devices, and seamless mobile appliance integration.
Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes
Abstract - Cited by 206 (16 self)
Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a “$1 recognizer” that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers’ N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing. ACM Categories & Subject Descriptors: H5.2. [Information interfaces and presentation]: User interfaces – Input devices and strategies. I5.2. [Pattern recognition]: Design methodology – Classifier design and evaluation. I5.5. [Pattern recognition]: Implementation – Interactive systems.
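The pipeline the abstract describes (resample the stroke, rotate to a canonical angle, scale, then compare to stored templates) can be sketched in Python. This is a condensed rendition for illustration only: the point count, reference square, and helper names are our assumptions, and the paper's Golden Section search over candidate rotation angles is omitted.

```python
# Sketch of the $1 recognizer's preprocessing and matching steps,
# assuming strokes are lists of (x, y) tuples. Constants are illustrative.
import math

N = 64          # resampled points per stroke (assumed)
SIZE = 250.0    # reference square for scaling (assumed)

def path_length(pts):
    return sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))

def resample(pts, n=N):
    # Resample the stroke into n equidistantly spaced points.
    interval = path_length(pts) / (n - 1)
    d, out, pts = 0.0, [pts[0]], list(pts)
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # the new point becomes the next segment start
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:       # rounding can leave us one point short
        out.append(pts[-1])
    return out[:n]

def centroid(pts):
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def rotate_to_zero(pts):
    # Rotate so the "indicative angle" (centroid to first point) becomes 0.
    cx, cy = centroid(pts)
    theta = math.atan2(pts[0][1] - cy, pts[0][0] - cx)
    c, s = math.cos(-theta), math.sin(-theta)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

def scale_and_translate(pts, size=SIZE):
    # Scale the bounding box to a reference square, centroid to origin.
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    pts = [(x * size / (w or 1), y * size / (h or 1)) for x, y in pts]
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def normalize(stroke):
    return scale_and_translate(rotate_to_zero(resample(stroke)))

def recognize(stroke, templates):
    # templates: dict of name -> already-normalized point list.
    candidate = normalize(stroke)
    best, best_d = None, float("inf")
    for name, tpl in templates.items():
        d = sum(math.dist(a, b) for a, b in zip(candidate, tpl)) / N
        if d < best_d:
            best, best_d = name, d
    return best
```

Because every step is invariant to translation and uniform scaling, a template drawn anywhere at any size matches a candidate of the same shape with near-zero path distance.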
Toward characterizing the productivity benefits of very large displays
- Proc. Interact
, 2003
Abstract - Cited by 104 (5 self)
Larger display surfaces are becoming increasingly available due to multi-monitor capability built into many systems, in addition to the rapid decrease in their costs. However, little is known about the performance benefits of using these larger surfaces compared to traditional single-monitor displays. In addition, it is not clear that current software designs and interaction techniques have been properly tuned for these larger surfaces. A preliminary user study was carried out to provide some initial evidence about the benefits of large versus small display surfaces for complex, multi-application office work. Significant benefits were observed in the use of a prototype, larger display, in addition to significant positive user preference and satisfaction with its use over a small display. In addition, design guidelines for enhancing user interaction across large display surfaces were identified. User productivity could be significantly enhanced in future graphical user interface designs if developed with these findings in mind.
Keeping Things in Context: A Comparative Evaluation of Focus Plus Context Screens, Overviews, and Zooming
, 2002
Abstract - Cited by 100 (7 self)
Users working with documents that are too large and detailed to fit on the user's screen (e.g. chip designs) have the choice between zooming or applying appropriate visualization techniques. In this paper, we present a comparison of three such techniques. The first, focus plus context screens, are wall-size low-resolution displays with an embedded high-resolution display region. This technique is compared with overview plus detail and zooming/panning. We interviewed fourteen visual surveillance and design professionals from different areas (graphic design, chip design, air traffic control, etc.) in order to create a representative sample of tasks to be used in two experimental comparison studies. In the first experiment, subjects using focus plus context screens to extract information from large static documents completed the two experimental tasks on average 21% and 36% faster than when they used the other interfaces. In the second experiment, focus plus context screens allowed subjects to reduce their error rate in a driving simulation to less than one third of the error rate of the competing overview plus detail setup.
A review of overview+detail, zooming, and focus+context interfaces
- ACM COMPUT. SURV
, 2008
Abstract - Cited by 86 (1 self)
There are many interface schemes that allow users to work at, and move between, focused and contextual views of a data set. We review and categorise these schemes according to the interface mechanisms used to separate and blend views. The four approaches are overview+detail, which uses a spatial separation between focused and contextual views; zooming, which uses a temporal separation; focus+context, which minimizes the seam between views by displaying the focus within the context; and cue-based techniques which selectively highlight or suppress items within the information space. Critical features of these categories, and empirical evidence of their success, are discussed. The aim is to provide a succinct summary of the state-of-the-art, to illuminate successful and unsuccessful interface strategies, and to identify potentially fruitful areas for further work.
Design and analysis of delimiters for selection-action pen gesture phrases in Scriboli
- In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
, 2005
Abstract - Cited by 81 (9 self)
We present a quantitative analysis of delimiters for pen gestures. A delimiter is “something different” in the input stream that a computer can use to determine the structure of input phrases. We study four techniques for delimiting a selection-action gesture phrase consisting of lasso selection plus marking-menu-based command activation. Pigtail is a new technique that uses a small loop to delimit lasso selection from marking (Fig. 1). Handle adds a box to the end of the lasso, from which the user makes a second stroke for marking. Timeout uses dwelling with the pen to delimit the lasso from the mark. Button uses a button press to signal when to delimit the gesture. We describe the role of delimiters in our Scriboli pen interaction testbed, and show how Pigtail supports scope selection, command activation, and direct manipulation all in a single fluid pen gesture.
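The pigtail technique hinges on noticing a small loop near the end of a stroke. One plausible way to implement that, sketched here under our own assumptions (the paper's actual detection details may differ), is to scan the stroke's tail for a self-intersection between non-adjacent segments; the intersection splits the lasso from the mark.

```python
# Sketch: detect a pigtail as a self-intersection near the end of a stroke.
# A stroke is a list of (x, y) points; `window` is an assumed tuning knob.

def _seg_intersect(p1, p2, p3, p4):
    # True if segment p1-p2 properly crosses segment p3-p4.
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def find_pigtail(stroke, window=20):
    # Scan the last `window` segments for a crossing between non-adjacent
    # segments; return the index pair bracketing the loop, or None.
    n = len(stroke)
    for i in range(max(0, n - window), n - 1):
        for j in range(i + 2, n - 1):  # skip the shared-endpoint neighbor
            if _seg_intersect(stroke[i], stroke[i + 1],
                              stroke[j], stroke[j + 1]):
                return i, j
    return None
```

Everything before the returned index pair would be treated as the lasso scope, and everything after it as the start of the mark.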
The Vacuum: Facilitating the Manipulation of Distant Objects
- CHI 2005
, 2005
Abstract - Cited by 76 (12 self)
We present the design and evaluation of the vacuum, a new interaction technique that enables quick access to items on areas of a large display that are difficult for a user to reach without significant physical movement. The vacuum is a circular widget with a user controllable arc of influence that is centered at the widget’s point of invocation and spans out to the edges of the display. Far away objects residing inside this influence arc are brought closer to the widget’s centre in the form of proxies that can be manipulated in lieu of the original. We conducted two experiments which compare the vacuum to direct picking and an existing technique called drag-and-pick [2]. Results show that the vacuum outperforms existing techniques when selecting multiple targets in a sequence, performs similarly to existing techniques when selecting single targets located moderately far away, and slightly worse with single targets located very far away in the presence of distracter targets along the path.
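The arc-of-influence geometry the abstract describes can be approximated as follows. This is our own guess at the mapping, not the paper's design: the shrink fraction, parameter names, and the fixed-fraction placement of proxies are all illustrative assumptions.

```python
# Sketch: map far-away objects inside an arc of influence to nearby proxies.
# Objects whose bearing from the invocation point falls within the arc get
# a proxy at an assumed fixed fraction of their original distance.
import math

def angle_between(a, b):
    # Smallest absolute difference between two angles, in radians.
    return abs((b - a + math.pi) % (2 * math.pi) - math.pi)

def vacuum_proxies(center, arc_dir, arc_span, objects, shrink=0.15):
    # center: widget invocation point; arc_dir: arc bisector angle (radians);
    # arc_span: total angular width of the arc; shrink: proxy distance
    # fraction (an assumed tuning parameter).
    cx, cy = center
    proxies = {}
    for name, (ox, oy) in objects.items():
        bearing = math.atan2(oy - cy, ox - cx)
        if angle_between(bearing, arc_dir) <= arc_span / 2:
            r = math.hypot(ox - cx, oy - cy) * shrink
            proxies[name] = (cx + r * math.cos(bearing),
                             cy + r * math.sin(bearing))
    return proxies
```

Keeping each proxy on the same bearing as its original preserves the spatial arrangement of the distant targets, which plausibly matters for the multiple-target selections where the technique performed best.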
Fluid Interaction Techniques for the Control and Annotation of Digital Video
- UIST ’03 VANCOUVER, BC, CANADA
, 2003
Abstract - Cited by 74 (11 self)
We explore a variety of interaction and visualization techniques for fluid navigation, segmentation, linking, and annotation of digital videos. These techniques are developed within a concept prototype called LEAN that is designed for use with pressure-sensitive digitizer tablets. These techniques include a transient position+velocity widget that allows users not only to move around a point of interest on a video, but also to rewind or fast forward at a controlled variable speed. We also present a new variation of fish-eye views called twist-lens, and incorporate this into a position control slider designed for the effective navigation and viewing of large sequences of video frames. We also explore a new style of widgets that exploit the use of the pen’s pressure-sensing capability, increasing the input vocabulary available to the user. Finally, we elaborate on how annotations referring to objects that are temporal in nature, such as video, may be thought of as links, and fluidly constructed, visualized and navigated.
Tracking menus
- UIST
, 2003
Abstract - Cited by 63 (5 self)
We describe a new type of graphical user interface widget, known as a “tracking menu.” A tracking menu consists of a cluster of graphical buttons, and as with traditional menus, the cursor can be moved within the menu to select and interact with items. However, unlike traditional menus, when the cursor hits the edge of the menu, the menu moves to continue tracking the cursor. Thus, the menu always stays under the cursor and close at hand. In this paper we define the behavior of tracking menus, show unique affordances of the widget, present a variety of examples, and discuss design characteristics. We examine one tracking menu design in detail, reporting on usability studies and our experience integrating the technique into a commercial application for the Tablet PC. While user interface issues on the Tablet PC, such as preventing round trips to tool palettes with the pen, inspired tracking menus, the design also works well with a standard mouse and keyboard configuration.
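The core edge-tracking behavior reduces to a clamp: the menu stays put while the cursor is inside it, and translates just enough to keep the cursor on its edge otherwise. A minimal sketch, assuming an axis-aligned rectangular menu for simplicity (the actual widget geometry in the paper may differ):

```python
# Sketch: keep a tracking menu under the cursor by translating it only
# when the cursor crosses one of its edges.

def track(menu, cursor):
    # menu: (x, y, w, h) rectangle with top-left origin; cursor: (cx, cy).
    # Returns the (possibly moved) menu rectangle.
    x, y, w, h = menu
    cx, cy = cursor
    if cx < x:                 # cursor pushed past the left edge
        x = cx
    elif cx > x + w:           # ... past the right edge
        x = cx - w
    if cy < y:                 # ... past the top edge
        y = cy
    elif cy > y + h:           # ... past the bottom edge
        y = cy - h
    return (x, y, w, h)
```

Calling this on every cursor-move event reproduces the described feel: movement inside the menu selects items without moving it, while movement beyond an edge drags the whole cluster along.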
VisionWand: Interaction Techniques for Large Displays using a Passive Wand Tracked in 3D
- UIST ’03 VANCOUVER, BC, CANADA
, 2003
Abstract - Cited by 63 (1 self)
A passive wand tracked in 3D using computer vision techniques is explored as a new input mechanism for interacting with large displays. We demonstrate a variety of interaction techniques that exploit the affordances of the wand, resulting in an effective interface for large scale interaction. The lack of any buttons or other electronics on the wand presents a challenge that we address by developing a set of postures and gestures to track state and enable command input. We also describe the use of multiple wands, and posit designs for more complex wands in the future.