Crowdsourcing step-by-step information extraction to enhance existing how-to videos. (2014)

by J Kim, P Nguyen, S Weir, P J Guo, R C Miller, K Z Gajos
Results 1 - 10 of 10

Understanding in-video dropouts and interaction peaks in online lecture videos

by Juho Kim, Philip J. Guo, Daniel T. Seaton, Piotr Mitros, Krzysztof Z. Gajos, Robert C. Miller - In L@S ’14, 2014
"... With thousands of learners watching the same online lec-ture videos, analyzing video watching patterns provides a unique opportunity to understand how students learn with videos. This paper reports a large-scale analysis of in-video dropout and peaks in viewership and student activity, using second- ..."
Abstract - Cited by 10 (2 self) - Add to MetaCart
With thousands of learners watching the same online lecture videos, analyzing video watching patterns provides a unique opportunity to understand how students learn with videos. This paper reports a large-scale analysis of in-video dropout and peaks in viewership and student activity, using second-by-second user interaction data from 862 videos in four Massive Open Online Courses (MOOCs) on edX. We find higher dropout rates in longer videos, re-watching sessions (vs first-time), and tutorials (vs lectures). Peaks in re-watching sessions and play events indicate points of interest and confusion. Results show that tutorials (vs lectures) and re-watching sessions (vs first-time) lead to more frequent and sharper peaks. In attempting to reason why peaks occur by sampling 80 videos, we observe that 61% of the peaks accompany visual transitions in the video, e.g., a slide view to a classroom view. Based on this observation, we identify five student activity patterns that can explain peaks: starting from the beginning of new material, returning to missed content, following a tutorial step, replaying a brief segment, and repeating a non-visual explanation. Our analysis has design implications for video authoring, editing, and interface design, providing a richer understanding of video learning on MOOCs.

Author Keywords: Video analysis; in-video dropout; interaction peaks; online ...

Citation Context

... to automatically mark steps in a video, making it easy for students to non-sequentially access these points without having to rely on imprecise scrubbing. Tutorial video interfaces such as ToolScape [14] add an interactive timeline below a video to allow step-by-step navigation. [interface] Provide interactive links and screenshots for highlights. Type 2 peaks suggest that missing content forces stu...

Learnersourcing Subgoal Labels for How-to Videos

by Sarah A. Weir
"... Websites like YouTube provide an easy way to watch the billions of how-to videos on the web, but the interfaces are not optimized for learning. Previous research suggests that users learn more from how-to videos when the information from the video is presented in outline form, with individual steps ..."
Abstract - Cited by 8 (2 self) - Add to MetaCart
Websites like YouTube provide an easy way to watch the billions of how-to videos on the web, but the interfaces are not optimized for learning. Previous research suggests that users learn more from how-to videos when the information from the video is presented in outline form, with individual steps and labels for groups of steps (subgoals) shown. We intend to create an alternative video viewer where the steps and subgoals are displayed alongside the video. In order to generate this information we propose a learnersourcing approach where we gather useful information from people trying to actively learn from a video. We believe learnersourcing is a sustainable and constructive method for enhancing educational material. To demonstrate this method, we created a workflow that encourages users to contribute and refine subgoals for a given how-to video. Users in our pilot study of three videos were able to generate subgoals comparable to those created by the author, which suggests that learnersourcing may be a viable approach.

Author Keywords: Video tutorials; how-to videos; subgoals; video annotation.

Citation Context

... generate can be dragged around, the steps are fixed. This workflow is agnostic to the method used to generate the steps, but crowdsourced methods such as in the work of Kim et al. and Nguyen et al. [8, 11] have already proven effective. Every thirty seconds, the video stops and learners are asked, “What was the overall goal of the video section you just watched?” (Figure 3(a)). The answer to this quest...

Data-Driven Interaction Techniques for Improving Navigation of Educational Videos

by Juho Kim, Philip J. Guo, Krzysztof Z. Gajos, Robert C. Miller
"... With an unprecedented scale of learners watching educational videos on online platforms such as MOOCs and YouTube, there is an opportunity to incorporate data generated from their interactions into the design of novel video interaction techniques. Interaction data has the potential to help not only ..."
Abstract - Cited by 5 (3 self) - Add to MetaCart
With an unprecedented scale of learners watching educational videos on online platforms such as MOOCs and YouTube, there is an opportunity to incorporate data generated from their interactions into the design of novel video interaction techniques. Interaction data has the potential not only to help instructors improve their videos, but also to enrich the learning experience of educational video watchers. This paper explores the design space of data-driven interaction techniques for educational video navigation. We introduce a set of techniques that augment existing video interface widgets, including: a 2D video timeline with an embedded visualization of collective navigation traces; dynamic and non-linear timeline scrubbing; data-enhanced transcript search and keyword summary; automatic display of relevant still frames next to the video; and a visual summary representing points with high learner activity. To evaluate the feasibility of the techniques, we ran a laboratory user study with simulated learning tasks. Participants rated watching lecture videos with interaction data to be efficient and useful in completing the tasks. However, no significant differences were found in task performance, suggesting that interaction data may not always align with moment-by-moment information needs during the tasks.

Citation Context

...ns about completing a specific task. Existing systems reveal step-by-step structure by adding rich signals to the video timeline, such as tool usage and intermediate results in graphical applications [8, 21]. Classroom lecture videos tend to be less structured than how-to videos, which makes capturing clear structural signals harder. We instead turn to interaction data that is automatically logged for le...

InterTwine: Creating Interapplication Information Scent to Support Coordinated Use of Software

by Adam Fourney, Ben Lafreniere, Parmit Chilana, Michael Terry - Proc. UIST, 2014
"... Users often make continued and sustained use of online re-sources to complement use of a desktop application. For ex-ample, users may reference online tutorials to recall how to perform a particular task. While often used in a coordinated fashion, the browser and desktop application provide sepa-rat ..."
Abstract - Cited by 2 (2 self) - Add to MetaCart
Users often make continued and sustained use of online resources to complement use of a desktop application. For example, users may reference online tutorials to recall how to perform a particular task. While often used in a coordinated fashion, the browser and desktop application provide separate, independent mechanisms for helping users find and re-find task-relevant information. In this paper, we describe InterTwine, a system that links information in the web browser with relevant elements in the desktop application to create interapplication information scent. This explicit link produces a shared interapplication history to assist in re-finding information in both applications. As an example, InterTwine marks all menu items in the desktop application that are currently mentioned in the front-most web page. This paper introduces the notion of interapplication information scent, demonstrates the concept in InterTwine, and describes results from a formative study suggesting the utility of the concept.

Citation Context

...owser, video timestamps could be extracted and synchronized with application command invocations, allowing search engines to index into videos, and enabling capabilities similar to those described in [13, 15, 16]. Beyond indexing web documents and videos, aggregate data could also be applied to command recommendation systems (e.g., [19]), and related projects (e.g., [7, 18]). Interapplication Information Scent...

Video Lens: Rapid playback and exploration of large video collections and associated metadata

by Justin Matejka, Tovi Grossman, George Fitzmaurice - In UIST ’14, 2014
"... ABSTRACT We present Video Lens, a framework which allows users to visualize and interactively explore large collections of videos and associated metadata. The primary goal of the framework is to let users quickly find relevant sections within the videos and play them back in rapid succession. The i ..."
Abstract - Cited by 1 (0 self) - Add to MetaCart
We present Video Lens, a framework which allows users to visualize and interactively explore large collections of videos and associated metadata. The primary goal of the framework is to let users quickly find relevant sections within the videos and play them back in rapid succession. The individual UI elements are linked and highly interactive, supporting a faceted search paradigm and encouraging exploration of the data set. We demonstrate the capabilities and specific scenarios of Video Lens within the domain of professional baseball videos. A user study with 12 participants indicates that Video Lens efficiently supports a diverse range of powerful yet desirable video query tasks, while a series of interviews with professionals in the field demonstrates the framework's benefits and future potential.

Citation Context

... over which video clips are presented. Some work has looked at generating metadata for video files by automatically analyzing audio and video streams [1, 13, 16, 22], as well as through crowdsourcing [9]. The Video Lens system assumes that videos already have a set of rich timeline metadata, but a system to automatically extract timestamped metadata would be a useful companion. Faceted Search The id...

unknown title

by unknown authors
"... ..."
Abstract - Add to MetaCart
Abstract not found

Citation Context

...MES supports written and verbal self-explanations. Previous research has also shown that interactive videos supporting random access and self-paced browsing improve learning [35] and task performance [17]. Tutored Videotape Instruction [15] reported improved learning when pausing videos every few minutes for discussion. RIMES supports such interactive activities for online videos. VITAL [26] allows st...


Learnersourcing Subgoal Labels for How-to Videos

by Sarah Weir, Juho Kim, Krzysztof Z. Gajos, Robert C. Miller
"... Figure 1. We present a three-stage workflow to generate labels for groups of steps (subgoal labels) for how-to videos. This workflow is designed to engage people actively trying to learn from the video to contribute the information. Websites like YouTube host millions of how-to videos, but their int ..."
Abstract - Add to MetaCart
Figure 1. We present a three-stage workflow to generate labels for groups of steps (subgoal labels) for how-to videos. This workflow is designed to engage people actively trying to learn from the video to contribute the information.

Websites like YouTube host millions of how-to videos, but their interfaces are not optimized for learning. Previous research suggests that people learn more from how-to videos when the videos are accompanied by outlines showing individual steps and labels for groups of steps (subgoals). We envision an alternative video player where the steps and subgoals are displayed alongside the video. To generate this information for existing videos, we introduce learnersourcing, an approach in which intrinsically motivated learners contribute to a human computation workflow as they naturally go about learning from the videos. To demonstrate this method, we deployed a live website with a workflow for constructing subgoal labels implemented on a set of introductory web programming videos. For the four videos with the highest participation, we found that a majority of learner-generated subgoals were comparable in quality to expert-generated ones. Learners commented that the system helped them grasp the material, suggesting that our workflow did not detract from the learning experience.

Citation Context

...se interfaces focus on ways to visualize and discover the most suitable tutorials [17, 23], while other systems enhance tutorial content by building on the metadata available in software applications [6, 9, 16, 24]. The success of these systems in helping people learn to use software suggests that using existing information (such as the transcript to a tutorial video) ...

Explaining Visual Changes in Web Interfaces

by Brian Burg, Andrew J. Ko, Michael D. Ernst
"... Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is chal-lenging because developers must find and connect all of the non-local interactions between event-based JavaScript c ..."
Abstract - Add to MetaCart
Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior. The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine’s inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior’s implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.

Citation Context

...wses internal states by selecting the corresponding screenshots that each internal state produces. Figure 3(b) shows how Scry visualizes an element’s output history using a familiar timeline metaphor [14, 16]. This output-based, example-first design is in contrast to the traditional tooling emphasis on static, textual program representations. During feature location tasks, browsing program states via outp...

Video Digests: A Browsable, Skimmable Format for Informational Lecture Videos

by Amy Pavel, Colorado Reed, Maneesh Agrawala
"... Increasingly, authors are publishing long informational talks, lectures, and distance-learning videos online. However, it is difficult to browse and skim the content of such videos us-ing current timeline-based video players. Video digests are a new format for informational videos that afford browsi ..."
Abstract - Add to MetaCart
Increasingly, authors are publishing long informational talks, lectures, and distance-learning videos online. However, it is difficult to browse and skim the content of such videos using current timeline-based video players. Video digests are a new format for informational videos that affords browsing and skimming by segmenting videos into a chapter/section structure and providing short text summaries and thumbnails for each section. Viewers can navigate by reading the summaries and clicking on sections to access the corresponding point in the video. We present a set of tools to help authors create such digests using transcript-based interactions. With our tools, authors can manually create a video digest from scratch, or they can automatically generate a digest by applying a combination of algorithmic and crowdsourcing techniques and then manually refine it as needed. Feedback from first-time users suggests that our transcript-based authoring tools and automated techniques greatly facilitate video digest creation. In an evaluative crowdsourced study we find that given a short viewing time, video digests support browsing and skimming better than timeline-based or transcript-based video players.

Citation Context

...at encourages dividing informative presentations into short, topically-coherent video segments which, as indicated by prior work, aids knowledge transfer and decreases dropouts for educational videos [32, 20, 26]. However, creating a video digest is a time-consuming process: authors must segment the videos at multiple granularities (chapter/section), compose section summaries, select representative keyframes,...
