Results 1 - 10 of 11
Answering visual questions with conversational crowd assistants
In Proc. of ASSETS 2013, to appear
"... This Conference Proceeding is brought to you for free and open access by the School of Computer Science at Research Showcase @ CMU. It has been ..."
Abstract
-
Cited by 9 (8 self)
- Add to MetaCart
(Show Context)
This Conference Proceeding is brought to you for free and open access by the School of Computer Science at Research Showcase @ CMU. It has been
Glance: Rapidly Coding Behavioral Video with the Crowd
"... Behavioral researchers spend considerable amount of time coding video data to systematically extract meaning from subtle human actions and emotions. In this paper, we present Glance, a tool that allows researchers to rapidly query, sam-ple, and analyze large video datasets for behavioral events that ..."
Abstract
-
Cited by 6 (4 self)
- Add to MetaCart
(Show Context)
Behavioral researchers spend a considerable amount of time coding video data to systematically extract meaning from subtle human actions and emotions. In this paper, we present Glance, a tool that allows researchers to rapidly query, sample, and analyze large video datasets for behavioral events that are hard to detect automatically. Glance takes advantage of the parallelism available in paid online crowds to interpret natural language queries and then aggregates responses in a summary view of the video data. Glance provides analysts with rapid responses when initially exploring a dataset, and reliable codings when refining an analysis. Our experiments show that Glance can code nearly 50 minutes of video in 5 minutes by recruiting over 60 workers simultaneously, and can get initial feedback to analysts in under 10 seconds for most clips. We present and compare new methods for accurately aggregating the input of multiple workers marking the spans of events in video data, and for measuring the quality of their coding in real time, before a baseline is established, by measuring the variance between workers. Glance's rapid responses to natural language queries, feedback regarding question ambiguity and anomalies in the data, and ability to build on prior context in followup queries allow users to have a conversation-like interaction with their data, opening up new possibilities for naturally exploring video data.
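The paper's own aggregation and real-time quality measures are not reproduced in this listing; as a rough illustration of the underlying idea only, the Python sketch below (all names and numbers hypothetical) merges several workers' marked event spans by a simple per-moment majority vote.

from typing import List, Tuple

Span = Tuple[float, float]  # one worker's marked (start_sec, end_sec)

def aggregate_spans(worker_spans: List[List[Span]],
                    clip_len: float,
                    threshold: float = 0.5,
                    step: float = 1.0) -> List[Span]:
    # A moment counts as part of an event if at least `threshold` of the
    # workers marked it; adjacent agreeing moments are merged into one span.
    n_workers = len(worker_spans)
    consensus: List[Span] = []
    current_start = None
    t = 0.0
    while t < clip_len:
        votes = sum(any(s <= t < e for s, e in spans) for spans in worker_spans)
        agreed = n_workers > 0 and votes / n_workers >= threshold
        if agreed and current_start is None:
            current_start = t
        elif not agreed and current_start is not None:
            consensus.append((current_start, t))
            current_start = None
        t += step
    if current_start is not None:
        consensus.append((current_start, clip_len))
    return consensus

# Three workers marking a "smile" event in a hypothetical 30-second clip:
workers = [[(4.0, 9.0)], [(5.0, 10.0)], [(6.0, 8.0), (20.0, 22.0)]]
print(aggregate_spans(workers, clip_len=30.0))  # [(5.0, 9.0)]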
Using Microtask Continuity to Improve Crowdsourcing
, 2014
"... A rich body of cognitive science literature suggests that workers who focus on a single task in a large workflow leverage task specialization to improve the overall performance of the workflow, such as in an assembly line. However, crowdsourcing workflows often ignore worker growth over time, instea ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
A rich body of cognitive science literature suggests that workers who focus on a single task in a large workflow leverage task specialization to improve the overall performance of the workflow, such as in an assembly line. However, crowdsourcing workflows often ignore worker growth over time, instead treating workers as homogeneous computational units that can effortlessly move between small microtasks of different types. In this paper, we validate via a survey that workers often mix different task types, and then study the effects of such task type mixing. We collect empirical evidence from 338 crowd workers that suggests task interruptions significantly decrease worker performance. Specifically, we show that temporal interruptions, where there is a large delay between two tasks, can cause up to a 102% slowdown in task completion time, and contextual interruptions, where workers are asked to perform different tasks in sequence, can slow down completion time by 57%. Our results demonstrate the importance of considering continuity in workflow design for both individual worker efficiency and overall throughput.
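A slowdown percentage here is presumably the relative increase in completion time over the uninterrupted baseline; a minimal sketch of that computation, with hypothetical timing values:

def slowdown_pct(baseline_secs, interrupted_secs):
    # Relative increase in mean completion time, as a percentage of the
    # uninterrupted baseline.
    base = sum(baseline_secs) / len(baseline_secs)
    after = sum(interrupted_secs) / len(interrupted_secs)
    return 100.0 * (after - base) / base

# e.g. a 20 s uninterrupted mean vs. a 40.4 s mean after a long delay:
print(round(slowdown_pct([20.0], [40.4]), 1))  # 102.0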
Context Trees: Crowdsourcing Global Understanding from Local Views
"... Crowdsourcing struggles when workers must see all of the pieces of input to make an accurate judgment. For exam-ple, to find the most important scenes in a novel or movie, each worker must spend hours consuming the entire plot to acquire a global understanding and then apply that under-standing to e ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Crowdsourcing struggles when workers must see all of the pieces of input to make an accurate judgment. For example, to find the most important scenes in a novel or movie, each worker must spend hours consuming the entire plot to acquire a global understanding and then apply that understanding to each local scene. To enable the crowdsourcing of large-scale goals with only local views, we introduce context trees, a crowdsourcing workflow for creating global summaries of a large input. Context trees recursively combine elements through written summaries to form a tree. Workers can then ground their local decisions by applying those summaries back down to the leaf nodes. In the case of scale ratings such as scene importance, we introduce a weighting process that percolates ratings downwards through the tree so that important nodes in unimportant branches are not over-weighted. When using context trees to rate the importance of scenes in a 4000-word story and a 100-minute movie, workers' ratings are nearly as accurate as those of workers who saw the entire input, and much improved over the traditional approach of splitting the input into independent segments. To explore whether context trees enable crowdsourcing to undertake new classes of goals, we also crowdsource the solution to a large hierarchical puzzle of 462,000 interlocking pieces.
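The paper's exact weighting scheme is not given in this listing; the sketch below (all names, structure, and numbers hypothetical) shows one simple way ratings could percolate down a context tree so that a highly rated scene inside a lowly rated branch is discounted rather than over-weighted.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    summary: str                       # crowd-written summary of this subtree
    rating: float                      # crowd importance rating in [0, 1]
    children: List["Node"] = field(default_factory=list)

def percolate(node: Node, inherited: float = 1.0) -> List[tuple]:
    # A leaf's final score is its own rating scaled by the ratings of the
    # branches above it, so local importance is tempered by branch importance.
    score = node.rating * inherited
    if not node.children:
        return [(node.summary, score)]
    results = []
    for child in node.children:
        results.extend(percolate(child, inherited=score))
    return results

# Toy tree: two acts, each summarizing two scenes.
root = Node("whole story", 1.0, [
    Node("act 1: setup", 0.4, [Node("scene 1", 0.9), Node("scene 2", 0.3)]),
    Node("act 2: climax", 0.9, [Node("scene 3", 0.8), Node("scene 4", 0.5)]),
])
print(percolate(root))
# scene 1 ends up near 0.36 despite its high local rating; scene 3 near 0.72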
Expert Crowdsourcing with Flash Teams
"... We introduce flash teams, a framework for dynamically as-sembling and managing paid experts from the crowd. Flash teams advance a vision of expert crowd work that accom-plishes complex, interdependent goals such as engineering and design. These teams consist of sequences of linked mod-ular tasks and ..."
Abstract
- Add to MetaCart
(Show Context)
We introduce flash teams, a framework for dynamically assembling and managing paid experts from the crowd. Flash teams advance a vision of expert crowd work that accomplishes complex, interdependent goals such as engineering and design. These teams consist of sequences of linked modular tasks and handoffs that can be computationally managed. Interactive systems reason about and manipulate these teams' structures: for example, flash teams can be recombined to form larger organizations and authored automatically in response to a user's request. Flash teams can also hire more people elastically in reaction to task needs, and pipeline intermediate output to accelerate completion times. To enable flash teams, we present Foundry, an end-user authoring platform and runtime manager. Foundry allows users to author modular tasks, then manages teams through handoffs of intermediate work. We demonstrate that Foundry and flash teams enable crowdsourcing of a broad class of goals including design prototyping, course development, and film animation, in half the work time of traditional self-managed teams.
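Foundry's actual authoring model is not reproduced in this listing; the toy sketch below (hypothetical roles and stand-in functions) only illustrates the core idea of a flash team as a computationally managed sequence of modular tasks with handoffs of intermediate work.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    role: str                    # expert role to hire, e.g. "UI designer"
    work: Callable[[str], str]   # stand-in for that expert's contribution

def run_flash_team(tasks: List[Task], brief: str) -> str:
    # Run a linear sequence of modular tasks, handing each task's output to
    # the next role. A real manager would also hire elastically and pipeline
    # partial output; this models only the basic handoff.
    artifact = brief
    for task in tasks:
        print(f"handing off to {task.role}")
        artifact = task.work(artifact)
    return artifact

team = [
    Task("UX designer", lambda brief: brief + " -> wireframes"),
    Task("developer",   lambda spec: spec + " -> working prototype"),
    Task("tester",      lambda build: build + " -> test report"),
]
print(run_flash_team(team, "client brief"))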
Ask the Crowd: Scaffolding Coordination and Knowledge Sharing in Microtask Programming
"... Abstract—Programming work is inherently interdependent, requiring developers to share and coordinate decisions that crosscut the structure of code. This is particularly challenging for programming in a microtasking context, in which developers are assumed to be transient and thus cannot rely on trad ..."
Abstract
- Add to MetaCart
(Show Context)
Programming work is inherently interdependent, requiring developers to share and coordinate decisions that crosscut the structure of code. This is particularly challenging for programming in a microtasking context, in which developers are assumed to be transient and thus cannot rely on traditional learning and coordination mechanisms such as an extended onboarding process and code ownership. In this paper, we explore scaffolding coordination and knowledge sharing through a question and answer system, structuring project knowledge and coordination into questions and answers. To investigate its potential for enabling coordination in a microtask setting, we implemented a Q&A system for microtask programming work and conducted a user study where a crowd used it to coordinate their work on a software project over a 30-hour period. The results reveal both the potential of Q&A systems for within-project coordination and the challenges that this approach brings. Keywords: programming environments; crowdsourcing; microtasking; knowledge sharing; question answering
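The paper's system is not reproduced here; as a loose illustration of structuring project knowledge as questions and answers attached to code artifacts, one might model it roughly as follows (all names hypothetical):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Answer:
    author: str
    text: str

@dataclass
class Question:
    artifact: str                # e.g. the function or module the question is about
    text: str
    answers: List[Answer] = field(default_factory=list)

# A transient worker picking up a microtask can read the thread attached to
# the artifact instead of relying on onboarding or code ownership.
q = Question("parse_order()", "Should currency be stored in cents or dollars?")
q.answers.append(Answer("worker_17", "Cents, to avoid floating-point rounding."))
for a in q.answers:
    print(f"{q.artifact}: {q.text} -> {a.text}")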
Guardian: A Crowd-Powered Spoken Dialog System for Web APIs
"... Natural language dialog is an important and intuitive way for people to access information and services. However, current dialog systems are limited in scope, brittle to the richness of natural language, and expensive to produce. This paper introduces Guardian, a crowd-powered framework that wraps e ..."
Abstract
- Add to MetaCart
Natural language dialog is an important and intuitive way for people to access information and services. However, current dialog systems are limited in scope, brittle to the richness of natural language, and expensive to produce. This paper introduces Guardian, a crowd-powered framework that wraps existing Web APIs into immediately usable spoken dialog systems. Guardian takes as input the Web API and the desired task; the crowd determines the parameters necessary to complete the task, decides how to ask for them, and interprets the responses from the API. The system is structured so that, over time, it can learn to take over for the crowd. This hybrid systems approach will help make dialog systems both more general and more robust going forward.
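Guardian's architecture is not detailed in this listing; the sketch below (hypothetical endpoint and parameters, with console input standing in for the crowd) only illustrates the general shape of eliciting API parameters through dialog and then issuing the call.

import urllib.parse
import urllib.request

def crowd_prompt(param: str) -> str:
    # Stand-in for the crowd: workers would decide how to ask for a missing
    # parameter; here we simply ask on the console.
    return input(f"What {param} would you like? ")

def run_dialog(endpoint: str, required_params: list[str]) -> str:
    # Collect each required API parameter through dialog, then call the API.
    values = {p: crowd_prompt(p) for p in required_params}
    url = endpoint + "?" + urllib.parse.urlencode(values)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

# Hypothetical weather API wrapped as a dialog task:
# print(run_dialog("https://api.example.com/weather", ["city", "date"]))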
Tuning the Diversity of Open-Ended Responses From the Crowd
, 2014
"... Crowdsourcing can solve problems that current fully automated systems can-not. Its effectiveness depends on the reliability, accuracy, and speed of the crowd workers that drive it. These objectives are frequently at odds with one another. For instance, how much time should workers be given to discov ..."
Abstract
- Add to MetaCart
Crowdsourcing can solve problems that current fully automated systems cannot. Its effectiveness depends on the reliability, accuracy, and speed of the crowd workers that drive it. These objectives are frequently at odds with one another. For instance, how much time should workers be given to discover and propose new solutions versus deliberate over those currently proposed? How do we determine if discovering a new answer is appropriate at all? And how do we manage workers who lack the expertise or attention needed to provide useful input to a given task? We present a mechanism that uses distinct payoffs for three possible worker actions—propose, vote, or abstain—to provide workers with the necessary incentives to guarantee an effective (or even optimal) balance between searching for new answers, assessing those currently available, and, when they have insufficient expertise or insight for the task at hand, abstaining. We provide a novel game-theoretic analysis of this mechanism, test it experimentally on an image-labeling problem, and show that it allows a system to reliably control the balance between discovering new answers and converging to existing ones.
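The paper's payoff values and game-theoretic analysis are not reproduced here; the toy rule below (hypothetical reward amounts) merely illustrates what distinct payoffs for propose, vote, or abstain could look like when scoring a worker after the winning answer is known.

def score_worker(action: str, answer_won: bool,
                 propose_reward=5.0, vote_reward=1.0, abstain_reward=0.2) -> float:
    # Proposing pays most, but only if the proposed answer ends up winning;
    # voting pays a smaller amount for backing the winner; abstaining pays a
    # small flat amount so unsure workers are not pushed to guess.
    if action == "propose":
        return propose_reward if answer_won else 0.0
    if action == "vote":
        return vote_reward if answer_won else 0.0
    if action == "abstain":
        return abstain_reward
    raise ValueError(f"unknown action: {action}")

print(score_worker("propose", answer_won=True))   # 5.0
print(score_worker("vote", answer_won=False))     # 0.0
print(score_worker("abstain", answer_won=False))  # 0.2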
Experimental Studies of Human Behavior in Social Computing Systems
, 2015
"... (Article begins on next page) The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. ..."
Abstract
- Add to MetaCart
(Show Context)
(Article begins on next page) The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters.
Conversations in the Crowd: Collecting Data for Task-Oriented Dialog Learning
"... A major challenge in developing dialog systems is ob-taining realistic data to train the systems for specific do-mains. We study the opportunity for using crowdsourc-ing methods to collect dialog datasets. Specifically, we introduce ChatCollect, a system that allows researchers to collect conversati ..."
Abstract
- Add to MetaCart
A major challenge in developing dialog systems is obtaining realistic data to train the systems for specific domains. We study the opportunity for using crowdsourcing methods to collect dialog datasets. Specifically, we introduce ChatCollect, a system that allows researchers to collect conversations focused around definable tasks from pairs of workers in the crowd. We demonstrate that varied and in-depth dialogs can be collected using this system, then discuss ongoing work on creating a crowd-powered system for parsing semantic frames. We then discuss research opportunities in using this approach to train and improve automated dialog systems in the future.
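ChatCollect itself is not described further in this listing; as a loose illustration only, pairing arriving workers into task-focused conversation sessions might look like this (all names and the task text hypothetical):

def pair_workers(worker_ids: list[str]) -> list[tuple[str, str]]:
    # Pair workers in arrival order so each pair holds one task-focused
    # conversation; an unmatched trailing worker simply waits (here: dropped).
    return list(zip(worker_ids[0::2], worker_ids[1::2]))

task = "Plan a weekend trip together and agree on a destination."
for a, b in pair_workers(["w1", "w2", "w3", "w4"]):
    print(f"session: {a} <-> {b}, task: {task}")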