Results 1 - 10 of 540
Object Bank: A High-Level Image Representation for Scene Classification & Semantic Feature Sparsification
"... Robust low-level image features have been proven to be effective representations for a variety of visual recognition tasks such as object recognition and scene classification; but pixels, or even local image patches, carry little semantic meanings. For high level visual tasks, such low-level image r ..."
Abstract
-
Cited by 207 (6 self)
- Add to MetaCart
(Show Context)
Robust low-level image features have been proven to be effective representations for a variety of visual recognition tasks such as object recognition and scene classification; but pixels, or even local image patches, carry little semantic meaning. For high-level visual tasks, such low-level image representations are potentially not enough. In this paper, we propose a high-level image representation, called the Object Bank, where an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors, blind to the testing dataset or visual task. Leveraging the Object Bank representation, superior performance on high-level visual recognition tasks can be achieved with simple off-the-shelf classifiers such as logistic regression and linear SVM. Sparsity algorithms make our representation more efficient and scalable for large scene datasets, and reveal semantically meaningful feature patterns.
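The abstract gives the idea but not the pipeline; below is a minimal Python sketch of an Object-Bank-style feature, assuming per-detector response maps have already been computed by pre-trained detectors. The function name, pyramid levels, and max-pooling choice are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch, assuming `response_maps` holds one response map per
# pre-trained object detector (and scale). Pooling choices are illustrative.
import numpy as np

def object_bank_feature(response_maps, levels=(1, 2, 4)):
    """Max-pool each detector's response map over a spatial pyramid,
    producing one feature vector per image."""
    feats = []
    for rmap in response_maps:
        h, w = rmap.shape
        for n in levels:                       # pyramid level: n x n grid
            for i in range(n):
                for j in range(n):
                    cell = rmap[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                    feats.append(cell.max())   # strongest response in cell
    return np.asarray(feats)

# Usage: stack features for a dataset (random maps stand in for detectors).
rng = np.random.default_rng(0)
X = np.stack([object_bank_feature(rng.random((5, 64, 64))) for _ in range(8)])
print(X.shape)  # (8, n_detectors * sum(n*n for n in levels))
```

The resulting vectors can then be fed to any off-the-shelf linear model, matching the abstract's use of logistic regression or a linear SVM.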
Financial incentives and the “performance of crowds”
- Proc. HCOMP ’09
"... The relationship between financial incentives and performance, long of interest to social scientists, has gained new relevance with the advent of web-based “crowd-sourcing ” models of production. Here we investigate the effect of compensation on performance in the context of two experiments, conduct ..."
Abstract
-
Cited by 192 (3 self)
- Add to MetaCart
(Show Context)
The relationship between financial incentives and performance, long of interest to social scientists, has gained new relevance with the advent of web-based “crowd-sourcing” models of production. Here we investigate the effect of compensation on performance in the context of two experiments, conducted on Amazon’s Mechanical Turk (AMT). We find that increased financial incentives increase the quantity, but not the quality, of work performed by participants, where the difference appears to be due to an “anchoring” effect: workers who were paid more also perceived the value of their work to be greater, and thus were no more motivated than workers paid less. In contrast with compensation levels, we find the details of the compensation scheme do matter—specifically, a “quota” system results in better work for less pay than an equivalent “piece rate” system. Although counterintuitive, these findings are consistent with previous laboratory studies, and may have real-world analogs as well.
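To make the contrast between the two schemes concrete, here is a toy illustration of piece-rate versus quota pay; the rates and quota size are made-up parameters, not the experiment's actual values.

```python
# Toy payment functions (not from the paper); all numbers are illustrative.
def piece_rate_pay(items_done, rate=0.05):
    # Pay a fixed amount per completed item.
    return items_done * rate

def quota_pay(items_done, quota=10, bonus=0.40):
    # Pay a lump sum for each full quota of items completed.
    return (items_done // quota) * bonus

for n in (9, 10, 25):
    print(n, piece_rate_pay(n), quota_pay(n))
```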
Who are the crowdworkers?: shifting demographics in Mechanical Turk
- In Proceedings of CHI 2010, Atlanta GA, ACM
, 2010
"... Amazon Mechanical Turk (MTurk) is a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. This system is increasingly popular with researchers and developers. Here we extend previous studies of the demographics and usage behaviors of MT ..."
Abstract
-
Cited by 127 (3 self)
- Add to MetaCart
(Show Context)
Amazon Mechanical Turk (MTurk) is a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. This system is increasingly popular with researchers and developers. Here we extend previous studies of the demographics and usage behaviors of MTurk workers. We describe how the worker population has changed over time, shifting from a primarily moderate-income, U.S.-based workforce towards an increasingly international group with a significant population of young, well-educated Indian workers. This change in population points to how some workers may treat Turking as a full-time job that they rely on to make ends meet.
Collective knowledge systems: Where the social web meets the semantic web
- Web Semantics: Science, Services and Agents on the World Wide Web
, 2008
"... Abstract: What can happen if we combine the best ideas from the Social Web and Semantic Web? The Social Web is an ecosystem of participation, where value is created by the aggregation of many individual user contributions. The Semantic Web is an ecosystem of data, where value is created by the integ ..."
Abstract
-
Cited by 111 (0 self)
- Add to MetaCart
(Show Context)
What can happen if we combine the best ideas from the Social Web and Semantic Web? The Social Web is an ecosystem of participation, where value is created by the aggregation of many individual user contributions. The Semantic Web is an ecosystem of data, where value is created by the integration of structured data from many sources. What applications can best synthesize the strengths of these two approaches, to create a new level of value that is both rich with human participation and powered by well-structured information? This paper proposes a class of applications called collective knowledge systems, which unlock the “collective intelligence” of the Social Web with knowledge representation and reasoning techniques of the Semantic Web.
TurKit: Tools for Iterative Tasks on Mechanical Turk
- In Human Computation Workshop (HComp 2009)
, 2009
"... Mechanical Turk (MTurk) is an increasingly popular web service for paying people small rewards to do human computation tasks. Current uses of MTurk typically post independent parallel tasks. This paper explores an alternative iterative paradigm, in which workers build on or evaluate each other’s wor ..."
Abstract
-
Cited by 90 (3 self)
- Add to MetaCart
(Show Context)
Mechanical Turk (MTurk) is an increasingly popular web service for paying people small rewards to do human computation tasks. Current uses of MTurk typically post independent parallel tasks. This paper explores an alternative iterative paradigm, in which workers build on or evaluate each other’s work. We describe TurKit, a new toolkit for deploying iterative tasks to MTurk, with a familiar imperative programming paradigm that effectively uses MTurk workers as subroutines, such as the comparison function of a sorting algorithm. The toolkit handles the latency of MTurk tasks (typically measured in minutes), supports parallel tasks, and provides fault tolerance to avoid wasting money and time. We present a variety of iterative experiments using TurKit, including image description, copy editing, handwriting recognition, and sorting.
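TurKit itself is JavaScript; the following Python sketch only illustrates the "workers as subroutines" idea, with a hypothetical `ask_worker` stand-in (here simulated locally) in place of posting a real HIT.

```python
# A minimal sketch of using workers as subroutines, e.g. as the comparison
# function of a sort. `ask_worker` is a hypothetical placeholder: in a real
# system it would post an MTurk task and block (minutes) on the answer.
import functools

def ask_worker(prompt):
    # Simulated worker response; a real call would go out to MTurk.
    a, b = prompt["a"], prompt["b"]
    return -1 if a < b else (1 if a > b else 0)

def human_compare(a, b):
    return ask_worker({"question": "Which item is better?", "a": a, "b": b})

items = ["banana", "apple", "cherry"]
print(sorted(items, key=functools.cmp_to_key(human_compare)))
```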
TurKit: Human Computation Algorithms on Mechanical Turk
- In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology
, 2010
"... ABSTRACT Mechanical Turk provides an on-demand source of human computation. This provides a tremendous opportunity to explore algorithms which incorporate human computation as a function call. However, various systems challenges make this difficult in practice, and most uses of Mechanical Turk post ..."
Abstract
-
Cited by 83 (9 self)
- Add to MetaCart
(Show Context)
Mechanical Turk provides an on-demand source of human computation. This provides a tremendous opportunity to explore algorithms which incorporate human computation as a function call. However, various systems challenges make this difficult in practice, and most uses of Mechanical Turk post large numbers of independent tasks. TurKit is a toolkit for prototyping and exploring truly algorithmic human computation, while maintaining a straightforward imperative programming style. We present the crash-and-rerun programming model that makes TurKit possible, along with a variety of applications for human computation algorithms. We also present a couple of case studies of TurKit used for real experiments outside our lab.
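The crash-and-rerun model is only named here, not specified; the sketch below captures one plausible reading: expensive or nondeterministic steps are journaled, so re-running the script replays recorded results instead of repeating them. The file name and the `once` helper are illustrative, not TurKit's actual API.

```python
# A minimal crash-and-rerun style sketch, assuming journaled replay is the
# core idea. Names (`JOURNAL`, `once`) are made up for illustration.
import json, os

JOURNAL = "journal.json"
if os.path.exists(JOURNAL):
    with open(JOURNAL) as f:
        _log = json.load(f)
else:
    _log = []
_pos = 0

def once(fn, *args):
    """Run fn(*args) at most once across crashes; replay journaled results."""
    global _pos
    if _pos < len(_log):
        result = _log[_pos]            # rerun: replay the recorded step
    else:
        result = fn(*args)             # first run: execute for real
        _log.append(result)
        with open(JOURNAL, "w") as f:  # persist before moving on
            json.dump(_log, f)
    _pos += 1
    return result

# This "HIT" runs only on the first execution; if the script crashes later
# and is rerun, the recorded answer is replayed instead of re-asked.
answer = once(lambda q: input(q + " "), "Describe this image:")
print("got:", answer)
```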
Input-agreement: a new mechanism for collecting data using human computation games.
- In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM
, 2009
"... ABSTRACT Since its introduction at CHI 2004, the ESP Game has inspired many similar games that share the goal of gathering data from players. This paper introduces a new mechanism for collecting labeled data using "games with a purpose." In this mechanism, players are provided with either ..."
Abstract
-
Cited by 78 (4 self)
- Add to MetaCart
(Show Context)
Since its introduction at CHI 2004, the ESP Game has inspired many similar games that share the goal of gathering data from players. This paper introduces a new mechanism for collecting labeled data using “games with a purpose.” In this mechanism, players are provided with either the same or a different object, and asked to describe that object to each other. Based on each other’s descriptions, players must decide whether they have the same object or not. We explain why this new mechanism is superior for input data with certain characteristics, introduce an enjoyable new game called “TagATune” that collects tags for music clips via this mechanism, and present findings on the data collected by this game.
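A toy sketch of one input-agreement round under the mechanism described above; in the real game both descriptions come from human players and double as the collected labels (e.g. music tags in TagATune). The names and matching rule below are made up for illustration.

```python
# One simulated input-agreement round: players get the same or a different
# object, exchange descriptions, and guess whether their objects match.
import random

def play_round(objects, p_same=0.5):
    same = random.random() < p_same
    a = random.choice(objects)
    b = a if same else random.choice([o for o in objects if o != a])
    # In the real game these descriptions are typed by human players and
    # are the data the game collects (e.g. tags for music clips).
    desc_a, desc_b = f"tags for {a}", f"tags for {b}"
    guess_same = desc_a == desc_b      # stand-in for each player's judgment
    return same, guess_same, (desc_a, desc_b)

print(play_round(["clip1", "clip2", "clip3"]))
```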
The labor economics of paid crowdsourcing
- In EC, 2010
"... Crowdsourcing is a form of “peer production ” in which work traditionally performed by an employee is outsourced to an “undefined, generally large group of people in the form of an open call. ” We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel me ..."
Abstract
-
Cited by 76 (5 self)
- Add to MetaCart
(Show Context)
Crowdsourcing is a form of “peer production” in which work traditionally performed by an employee is outsourced to an “undefined, generally large group of people in the form of an open call.” We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker’s reservation wage—the smallest wage a worker is willing to accept for a task and the key parameter in our labor supply model. We find that the reservation wages of a sample of workers from Amazon’s Mechanical Turk (AMT) are approximately log-normally distributed, with a median wage of $1.38/hour. At the median wage, the point elasticity of extensive labor supply is 0.43. We discuss how to use our calibrated model to make predictions in applied work. Two experimental tests of the model show that many workers respond rationally to offered incentives. However, a non-trivial fraction of subjects appear to set earnings targets. These “target earners” consider not just the offered wage—which is what the rational model predicts—but also their proximity to earnings goals. Interestingly, a number of workers clearly prefer earning total amounts evenly divisible by 5, presumably because these amounts make good targets.
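The abstract does not spell out the estimator; the sketch below shows the general shape of the calculation on simulated data, assuming the extensive-supply elasticity is d ln F(w)/d ln w, where F(w) is the share of workers whose reservation wage is below w. The log-normal parameters are chosen only so the toy numbers land near the reported $1.38/hour and 0.43.

```python
# Fit a log-normal to (simulated) reservation wages, then compute the point
# elasticity of extensive labor supply at the median wage. The elasticity
# definition and parameter values are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wages = rng.lognormal(mean=np.log(1.38), sigma=1.85, size=5000)  # simulated

mu, sigma = np.log(wages).mean(), np.log(wages).std()            # MLE fit
w = np.exp(mu)                                                   # median wage
z = (np.log(w) - mu) / sigma                                     # z = 0 here
elasticity = stats.norm.pdf(z) / (stats.norm.cdf(z) * sigma)
print(f"median wage ${w:.2f}/hour, extensive-supply elasticity {elasticity:.2f}")
```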
Combining human and machine intelligence in large-scale crowdsourcing
- In AAMAS
, 2012
"... We show how machine learning and inference can be harnessed to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks. We construct a set of Bayesian predictive models from data and describe how the models operate within an overall crowdsourcing architec ..."
Abstract
-
Cited by 71 (15 self)
- Add to MetaCart
(Show Context)
We show how machine learning and inference can be harnessed to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks. We construct a set of Bayesian predictive models from data and describe how the models operate within an overall crowdsourcing architecture that combines the efforts of people and machine vision on the task of classifying celestial bodies defined within a citizen science project named Galaxy Zoo. We show how learned probabilistic models can be used to fuse human and machine contributions and to predict the behaviors of workers. We employ multiple inferences in concert to guide decisions on hiring and routing workers to tasks so as to maximize the efficiency of large-scale crowdsourcing processes based on expected utility.
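The paper's models are learned from data; the following is a much-simplified sketch of expected-utility hiring for a two-label task under one plausible reading: hire another worker only if the expected utility after their (Bayesian) vote, net of cost, beats stopping now. Accuracy, reward, and cost values are illustrative assumptions.

```python
# Expected-utility decision: stop and answer now, or pay for one more vote?
def expected_utility(p_correct, reward=1.0):
    return p_correct * reward          # utility of answering with belief p

def value_of_another_vote(p, accuracy=0.8, cost=0.05):
    # p = current P(label A). The worker reports the true label w.p. accuracy.
    p_report_a = p * accuracy + (1 - p) * (1 - accuracy)
    post_a = p * accuracy / p_report_a                # P(A | worker said A)
    post_b = p * (1 - accuracy) / (1 - p_report_a)    # P(A | worker said B)
    ev = p_report_a * expected_utility(max(post_a, 1 - post_a)) \
       + (1 - p_report_a) * expected_utility(max(post_b, 1 - post_b))
    return ev - cost

p = 0.7   # current belief that the galaxy is, say, "spiral"
stop_now = expected_utility(max(p, 1 - p))
hire = value_of_another_vote(p)
print(f"stop now: {stop_now:.3f}, hire another: {hire:.3f} ->",
      "hire" if hire > stop_now else "stop")
```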
Exploring iterative and parallel human computation processes.
- In Proceedings of the ACM SIGKDD workshop on human computation
, 2010
"... ABSTRACT Services like Amazon's Mechanical Turk have opened the door for exploration of processes that outsource computation to humans. These human computation processes hold tremendous potential to solve a variety of problems in novel and interesting ways. However, we are only just beginning ..."
Abstract
-
Cited by 64 (4 self)
- Add to MetaCart
(Show Context)
Services like Amazon's Mechanical Turk have opened the door for exploration of processes that outsource computation to humans. These human computation processes hold tremendous potential to solve a variety of problems in novel and interesting ways. However, we are only just beginning to understand how to design such processes. This paper explores two basic approaches: one where workers work alone in parallel and one where workers iteratively build on each other's work. We present a series of experiments exploring tradeoffs between each approach in several problem domains: writing, brainstorming, and transcription. In each of our experiments, iteration increases the average quality of responses. The increase is statistically significant in writing and brainstorming. However, in brainstorming and transcription, it is not clear that iteration is the best overall approach, in part because both of these tasks benefit from a high variability of responses, which is more prevalent in the parallel process. Also, poor guesses in the transcription task can lead subsequent workers astray.
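As a concrete illustration of the two designs, the toy sketch below runs a parallel and an iterative pipeline with a simulated `do_task` worker whose quality ticks up when shown prior work, mirroring the abstract's average-quality finding; nothing here reproduces the actual experiments.

```python
# Parallel workers answer independently; iterative workers see and build on
# the previous answer. `do_task` is a hypothetical stand-in for a worker.
import random

def do_task(prompt, prior=None):
    # Simulated worker: quality improves slightly when shown prior work.
    base = random.uniform(0.3, 0.9)
    return min(1.0, base + 0.1) if prior is not None else base

def parallel(prompt, n=6):
    return [do_task(prompt) for _ in range(n)]

def iterative(prompt, n=6):
    out, prev = [], None
    for _ in range(n):
        prev = do_task(prompt, prior=prev)
        out.append(prev)
    return out

random.seed(1)
for name, run in (("parallel", parallel), ("iterative", iterative)):
    scores = run("transcribe this text")
    print(name, f"mean={sum(scores)/len(scores):.2f}",
          f"spread={max(scores)-min(scores):.2f}")
```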