Why we tag: motivations for annotation in mobile and online media
- In CHI ’07: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2007
Cited by 243 (6 self)
Why do people tag? Users have mostly avoided annotating media such as photos – both in desktop and mobile environments – despite the many potential uses for annotations, including recall and retrieval. We investigate the incentives for annotation in Flickr, a popular web-based photo-sharing system, and ZoneTag, a cameraphone photo capture and annotation tool that uploads images to Flickr. In Flickr, annotation (as textual tags) serves both personal and social purposes, increasing incentives for tagging and resulting in a relatively high number of annotations. ZoneTag, in turn, makes it easier to tag cameraphone photos that are uploaded to Flickr by allowing annotation and suggesting relevant tags immediately after capture. A qualitative study of ZoneTag/Flickr users exposed various tagging patterns and emerging motivations for photo annotation. We offer a taxonomy of motivations for annotation in this system along two dimensions (sociality and function), and explore the various factors that people consider when tagging their photos. Our findings suggest implications for the design of digital photo organization and sharing applications, as well as other applications that incorporate user-based annotation.
PhotoMesa: a zoomable image browser using quantum treemaps and bubblemaps
- In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, 2001
Cited by 209 (20 self)
PhotoMesa is a zoomable image browser that uses a novel treemap algorithm to present large numbers of images grouped by directory or other available metadata. It uses a new interaction technique for zoomable user interfaces, designed for novices and family use, that makes it straightforward to navigate through the space of images and impossible to get lost. PhotoMesa groups images using one of two new algorithms that lay out groups of objects in a 2D space-filling manner. Quantum treemaps are designed for laying out images or other objects of indivisible (quantum) size. They are a variation on existing treemap algorithms in that they guarantee that every generated rectangle will have a width and height that are an integral multiple of an input object size. Bubblemaps also fill space with groups of quantum-sized objects, but generate non-rectangular blobs and utilize space more efficiently.
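The quantum-size guarantee described in this abstract can be illustrated with a short sketch: for a set of fixed-size thumbnails, choose a grid whose overall width and height are exact multiples of the cell size. The function and parameter names below are hypothetical; this shows only the invariant, not the paper's actual layout algorithm.

```python
import math

def quantum_grid(n_images, cell_w, cell_h, target_aspect=1.0):
    """Choose a grid for n_images fixed-size thumbnails whose overall
    width and height are integral multiples of the cell size, picking
    the column count whose aspect ratio is closest to target_aspect.
    (Illustrates the quantum-treemap invariant only, not the full
    layout algorithm from the paper.)"""
    best = None
    for cols in range(1, n_images + 1):
        rows = math.ceil(n_images / cols)
        w, h = cols * cell_w, rows * cell_h   # integral multiples by construction
        score = abs((w / h) - target_aspect)
        if best is None or score < best[0]:
            best = (score, w, h, cols, rows)
    _, w, h, cols, rows = best
    return w, h, cols, rows

# e.g. 10 thumbnails of 80x60 pixels, aiming for a roughly square region
w, h, cols, rows = quantum_grid(10, 80, 60)
assert w % 80 == 0 and h % 60 == 0            # the quantum guarantee
assert cols * rows >= 10                      # every image gets a cell
```

Because every group's bounding box is quantized to the thumbnail size, groups can be tiled without clipping or rescaling individual images.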
Give and take: A study of consumer photo-sharing culture and practice.
- In Proc. CHI, 2007
Cited by 90 (0 self)
In this paper, we present initial findings from a study of a digital photo-sharing website, Flickr.com. In particular, we argue that Flickr.com appears to support, for some people, a set of photography practices, socialization styles, and perspectives on privacy unlike those described in previous research on consumer and amateur photographers. Further, through our examination of digital photographers' photowork activities (organizing, finding, sharing, and receiving), we suggest that privacy concerns and a lack of integration with existing communication channels have the potential to prevent the 'Kodak Culture' from fully adopting current photo-sharing solutions.
Fluid Interaction Techniques for the Control and Annotation of Digital Video
- In UIST ’03, Vancouver, BC, Canada, 2003
Cited by 74 (11 self)
We explore a variety of interaction and visualization techniques for fluid navigation, segmentation, linking, and annotation of digital videos. These techniques are developed within a concept prototype called LEAN that is designed for use with pressure-sensitive digitizer tablets. These techniques include a transient position+velocity widget that allows users not only to move around a point of interest on a video, but also to rewind or fast forward at a controlled variable speed. We also present a new variation of fish-eye views called twist-lens, and incorporate this into a position control slider designed for the effective navigation and viewing of large sequences of video frames. We also explore a new style of widgets that exploit the use of the pen’s pressure-sensing capability, increasing the input vocabulary available to the user. Finally, we elaborate on how annotations referring to objects that are temporal in nature, such as video, may be thought of as links, and fluidly constructed, visualized and navigated.
Leveraging context to resolve identity in photo albums
- In JCDL ’05: Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries, 2005
Cited by 63 (2 self)
Our system suggests likely identity labels for photographs in a personal photo collection. Instead of using face recognition techniques, the system leverages automatically available context, like the time and location where the photos were taken. Based on time and location, the system automatically computes event and location groupings of photos. As the user annotates some of the identities of people in their collection, patterns of re-occurrence and co-occurrence of different people in different locations and events emerge. The system uses these patterns to generate label suggestions for identities that were not yet annotated. These suggestions can greatly accelerate the process of manual annotation and improve the quality of retrieval from the collection. We obtained ground-truth identity annotation for four different photo albums, and used them to test our system. The system proved effective, making very accurate label suggestions, even when the number of suggestions for each photo was limited to five names, and even when only a small subset of the photos was annotated.
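The recurrence/co-occurrence idea described in this abstract can be sketched in a few lines: rank candidate names by how often they appear in the same event and alongside people already labeled on the photo. The scoring weights and data layout here are invented for illustration and are not taken from the paper.

```python
from collections import Counter

def suggest_names(annotated, event_id, known_on_photo=(), top_k=5):
    """Rank candidate identity labels for a photo by how often each
    person recurs in the same event and co-occurs with people already
    labeled on the photo. A simplified sketch of the co-occurrence
    idea; the weights (2 and 1) are arbitrary choices."""
    scores = Counter()
    for ev, names in annotated:
        for name in names:
            if name in known_on_photo:
                continue
            if ev == event_id:
                scores[name] += 2            # recurrence within this event
            if set(known_on_photo) & names:
                scores[name] += 1            # co-occurs with a known person
    return [name for name, _ in scores.most_common(top_k)]

# toy album: (event id, set of identities annotated so far)
album = [
    ("hike",  {"Ana", "Ben"}),
    ("hike",  {"Ana", "Cem"}),
    ("party", {"Ben", "Dia"}),
]
print(suggest_names(album, "hike", known_on_photo=("Ana",)))
```

As in the paper's setting, suggestions improve as more photos are annotated, since every new label adds evidence to the recurrence and co-occurrence counts.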
Semi-automatic image annotation
- In Proc. of Interact 2001: Conference on Human-Computer Interaction, 2001
Cited by 57 (3 self)
Abstract: A novel approach to semi-automatically and progressively annotating images with keywords is presented. The progressive annotation process is embedded in the course of integrated keyword-based and content-based image retrieval and user feedback. When the user submits a keyword query and then provides relevance feedback, the search keywords are automatically added to the images that receive positive feedback and can then facilitate keyword-based image retrieval in the future. The coverage and quality of image annotation in such a database system is improved progressively as the cycle of search and feedback increases. The strategy of semi-automatic image annotation is better than manual annotation in terms of efficiency and better than automatic annotation in terms of accuracy. A performance study is presented which shows that high annotation coverage can be achieved with this approach, and a preliminary user study is described showing that users view annotations as important and will likely use them in image retrieval. The user study also suggested user interface enhancements needed to support relevance feedback. We believe that similar approaches could also be applied to annotating and managing other forms of multimedia objects.
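The search-and-feedback loop described above reduces to a small sketch: keywords from a query are copied onto images that received positive feedback, so future keyword searches can find those images directly. The data structures and function names are illustrative assumptions, not the paper's implementation.

```python
def record_feedback(annotations, query_keywords, positive_image_ids):
    """Attach the current search keywords to every image the user
    marked as a positive match, so annotation coverage grows with
    each search-and-feedback cycle."""
    for image_id in positive_image_ids:
        annotations.setdefault(image_id, set()).update(query_keywords)
    return annotations

def keyword_search(annotations, keyword):
    """Keyword-based retrieval over the accumulated annotations."""
    return sorted(i for i, kws in annotations.items() if keyword in kws)

db = {}
# user searched for "sunset beach" and marked img7 and img9 as relevant
record_feedback(db, {"sunset", "beach"}, ["img7", "img9"])
print(keyword_search(db, "sunset"))
```

Each cycle through this loop converts one act of relevance feedback into durable annotations, which is the sense in which the approach is semi-automatic rather than manual.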
Context data in geo-referenced digital photo collections
- In Proceedings of the 12th Annual ACM International Conference on Multimedia, 2004
Cited by 52 (3 self)
Given time and location information about digital photographs, we can automatically generate an abundance of related contextual metadata using off-the-shelf and Web-based data sources. Among these are the local daylight status and weather conditions at the time and place a photo was taken. This metadata has the potential to serve as memory cues and filters when browsing photo collections, especially as these collections grow into the tens of thousands and span dozens of years. We describe the contextual metadata that we automatically assemble for a photograph, given its time and location, as well as a browser interface that utilizes that metadata. We then present the results of a user study and a survey that together expose which categories of contextual metadata are most useful for recalling and finding photographs. Among the metadata categories that are still unavailable, we identify those most promising to develop next.
From where to what: Metadata sharing for digital photographs with geographic coordinates
- In Proceedings of the 10th International Conference on Cooperative Information Systems (CoopIS), 2003
Cited by 45 (14 self)
Abstract. We describe LOCALE, a system that allows cooperating information systems to share labels for photographs. Participating photographs are enhanced with a geographic location stamp – the latitude and longitude where the photograph was taken. For a photograph with no label, LOCALE can use the shared information to assign a label based on other photographs that were taken in the same area. LOCALE thus allows (i) text search over unlabeled sets of photos, and (ii) automated label suggestions for unlabeled photos. We have implemented a LOCALE prototype where users cooperate in submitting labels and locations, enhancing search quality for all users in the system. We ran an experiment to test the system in centralized and distributed settings. The results show that the system performs search tasks with surprising accuracy, even when searching for specific landmarks.
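The core LOCALE idea (nearby photos lend their labels) can be sketched as a nearest-neighbor vote over geotagged, labeled photos. This is a toy illustration under invented data structures; the real system's matching and ranking are more involved.

```python
import math
from collections import Counter

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def suggest_label(labeled_photos, lat, lon, radius_km=1.0):
    """Suggest the most common label among shared photos taken within
    radius_km of the query location; None if nothing is nearby."""
    nearby = Counter(
        label for plat, plon, label in labeled_photos
        if haversine_km(lat, lon, plat, plon) <= radius_km
    )
    return nearby.most_common(1)[0][0] if nearby else None

# toy shared pool of (lat, lon, label) contributed by cooperating users
shared = [
    (48.8584, 2.2945, "Eiffel Tower"),
    (48.8583, 2.2950, "Eiffel Tower"),
    (48.8606, 2.3376, "Louvre"),
]
print(suggest_label(shared, 48.8585, 2.2947))
```

The same index supports the two uses named in the abstract: suggesting a label for an unlabeled photo, and answering text queries over unlabeled photos by matching against labels borrowed from their neighborhoods.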
TeamSearch: Comparing Techniques for Co-Present Collaborative Search of Digital Media
- In IEEE Tabletop, 2006
Cited by 44 (5 self)
Interactive tables can enhance small-group, co-located collaborative work in many domains. One application enabled by this new technology is co-present, collaborative search for digital content. For example, a group of students could sit around an interactive table and search for digital images to use in a report. We have developed TeamSearch, an application that enables this type of activity by supporting group specification of Boolean-style queries. We explore whether TeamSearch should consider all group members’ activities as contributing to a single query or should interpret them as separate, parallel search requests. The results reveal that both strategies are similarly efficient, but that collective query formation has advantages in terms of enhancing group collaboration and awareness, allowing users to bootstrap query-specification skills, and personal preference. This suggests that team-centric UIs may offer benefits beyond the “staples” of efficiency and result quality that are usually considered when designing search interfaces.
EasyAlbum: an interactive photo annotation system based on face clustering and re-ranking
- In SIGCHI
Cited by 38 (5 self)
Digital photo management is becoming indispensable for explosively growing family photo albums, due to the rapid popularization of digital cameras and mobile phone cameras. In an effective photo management system, photo annotation is the most challenging task. In this paper, we develop several innovative interaction techniques for semi-automatic photo annotation. Compared with traditional annotation systems, our approach provides the following new features: “cluster annotation” puts similar faces, or photos with similar scenes, together, and enables the user to label them in one operation; “contextual re-ranking” boosts labeling productivity by guessing the user's intention; “ad hoc annotation” allows the user to label photos while browsing or searching, and improves system performance progressively through learning propagation. Our results show that these techniques provide a more user-friendly interface for the annotation of person names, locations, and events, and thus substantially improve annotation performance, especially for large photo albums.
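The “cluster annotation” feature amounts to a one-to-many labeling step, sketched below: the user names a face cluster once and the label propagates to every member. The cluster and label structures are hypothetical stand-ins, not the system's actual data model.

```python
def label_cluster(face_labels, clusters, cluster_id, name):
    """Propagate one identity label to every face in a cluster in a
    single operation ("cluster annotation"): the user names the
    cluster once instead of labeling each face individually."""
    for face_id in clusters[cluster_id]:
        face_labels[face_id] = name
    return face_labels

# toy clusters produced by some upstream face-clustering step
clusters = {"c1": ["face1", "face2", "face3"], "c2": ["face4"]}
labels = label_cluster({}, clusters, "c1", "Grandma")
print(labels)  # every face in c1 now carries the same label
```

One labeling action thus yields as many annotations as the cluster has members, which is where the productivity gain over per-face labeling comes from.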