Results 1 - 10 of 10
Understanding in-video dropouts and interaction peaks in online lecture videos
In L@S ’14, 2014
Cited by 10 (2 self)
With thousands of learners watching the same online lecture videos, analyzing video watching patterns provides a unique opportunity to understand how students learn with videos. This paper reports a large-scale analysis of in-video dropout and peaks in viewership and student activity, using second-by-second user interaction data from 862 videos in four Massive Open Online Courses (MOOCs) on edX. We find higher dropout rates in longer videos, re-watching sessions (vs. first-time), and tutorials (vs. lectures). Peaks in re-watching sessions and play events indicate points of interest and confusion. Results show that tutorials (vs. lectures) and re-watching sessions (vs. first-time) lead to more frequent and sharper peaks. In attempting to reason why peaks occur by sampling 80 videos, we observe that 61% of the peaks accompany visual transitions in the video, e.g., a slide view to a classroom view. Based on this observation, we identify five student activity patterns that can explain peaks: starting from the beginning of new material, returning to missed content, following a tutorial step, replaying a brief segment, and repeating a non-visual explanation. Our analysis has design implications for video authoring, editing, and interface design, providing a richer understanding of video learning on MOOCs.
Author Keywords: Video analysis; in-video dropout; interaction peaks; online
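The peak-detection idea in this abstract can be illustrated with a minimal sketch. This is not the paper's actual pipeline; it assumes a simple threshold rule (a peak is any second whose event count exceeds the mean by a chosen number of standard deviations) over hypothetical per-second play-event counts:

```python
# Illustrative sketch only: flag "interaction peaks" in per-second
# play-event counts using a mean + z*stdev threshold (assumed rule,
# not the method used in the paper).
from statistics import mean, stdev

def find_peaks(counts, z=2.0):
    """Return the seconds whose event count exceeds mean + z * stdev."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [i for i, c in enumerate(counts) if c > mu + z * sigma]

# Hypothetical per-second play-event counts for a short clip.
events = [3, 4, 3, 5, 4, 30, 4, 3, 5, 4, 3, 4]
print(find_peaks(events))  # → [5]
```

Real analyses would smooth the signal and normalize for audience size before thresholding; this sketch only shows the shape of the computation.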
Learnersourcing Subgoal Labels for How-to Videos
Cited by 8 (2 self)
Websites like YouTube provide an easy way to watch the billions of how-to videos on the web, but the interfaces are not optimized for learning. Previous research suggests that users learn more from how-to videos when the information from the video is presented in outline form, with individual steps and labels for groups of steps (subgoals) shown. We intend to create an alternative video viewer where the steps and subgoals are displayed alongside the video. In order to generate this information we propose a learnersourcing approach where we gather useful information from people trying to actively learn from a video. We believe learnersourcing is a sustainable and constructive method for enhancing educational material. To demonstrate this method, we created a workflow that encourages users to contribute and refine subgoals for a given how-to video. Users in our pilot study of three videos were able to generate subgoals comparable to those created by the author, which suggests that learnersourcing may be a viable approach.
Author Keywords: Video tutorials; how-to videos; subgoals; video annotation.
Data-Driven Interaction Techniques for Improving Navigation of Educational Videos
Cited by 5 (3 self)
With an unprecedented scale of learners watching educational videos on online platforms such as MOOCs and YouTube, there is an opportunity to incorporate data generated from their interactions into the design of novel video interaction techniques. Interaction data has the potential not only to help instructors improve their videos, but also to enrich the learning experience of learners watching educational videos. This paper explores the design space of data-driven interaction techniques for educational video navigation. We introduce a set of techniques that augment existing video interface widgets, including: a 2D video timeline with an embedded visualization of collective navigation traces; dynamic and non-linear timeline scrubbing; data-enhanced transcript search and keyword summary; automatic display of relevant still frames next to the video; and a visual summary representing points with high learner activity. To evaluate the feasibility of the techniques, we ran a laboratory user study with simulated learning tasks. Participants rated watching lecture videos with interaction data to be efficient and useful in completing the tasks. However, no significant differences were found in task performance, suggesting that interaction data may not always align with moment-by-moment information needs during the tasks.
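The "collective navigation traces" visualization described above rests on a simple aggregation step. A minimal sketch, under an assumed data model of watched intervals per session (not the paper's actual implementation), could look like:

```python
# Sketch: aggregate watched intervals from many sessions into a
# per-second "heat" profile that a 2D timeline could render.
# The (start, end) interval model is an assumption for illustration.
def watch_heat(sessions, duration):
    """sessions: list of (start_sec, end_sec) watched intervals.
    Returns a list where index t counts the sessions covering second t."""
    heat = [0] * duration
    for start, end in sessions:
        for t in range(max(0, start), min(duration, end)):
            heat[t] += 1
    return heat

sessions = [(0, 10), (5, 15), (5, 10)]
print(watch_heat(sessions, 15))
```

The resulting array can be color-mapped onto the timeline, with hot spots corresponding to the high-activity points the abstract mentions.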
InterTwine: Creating Interapplication Information Scent to Support Coordinated Use of Software
In Proc. UIST, 2014
Cited by 2 (2 self)
Users often make continued and sustained use of online resources to complement use of a desktop application. For example, users may reference online tutorials to recall how to perform a particular task. While often used in a coordinated fashion, the browser and desktop application provide separate, independent mechanisms for helping users find and re-find task-relevant information. In this paper, we describe InterTwine, a system that links information in the web browser with relevant elements in the desktop application to create interapplication information scent. This explicit link produces a shared interapplication history to assist in re-finding information in both applications. As an example, InterTwine marks all menu items in the desktop application that are currently mentioned in the front-most web page. This paper introduces the notion of interapplication information scent, demonstrates the concept in InterTwine, and describes results from a formative study suggesting the utility of the concept.
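The menu-marking example above implies a matching step between page text and menu labels. A hedged sketch of that step (the matching rule and all names here are assumptions, not InterTwine's implementation):

```python
# Sketch: which desktop menu items are mentioned in the front-most
# web page? A simple case-insensitive containment check stands in for
# whatever matching InterTwine actually performs.
def mentioned_menu_items(page_text, menu_items):
    """Return the menu labels that appear verbatim in the page text."""
    text = page_text.lower()
    return [item for item in menu_items if item.lower() in text]

page = "To sharpen the photo, choose Filter > Unsharp Mask and adjust the radius."
menus = ["Unsharp Mask", "Gaussian Blur", "Crop"]
print(mentioned_menu_items(page, menus))  # → ['Unsharp Mask']
```

A production system would need tokenization and disambiguation (e.g., "Crop" as a verb vs. a menu item); the sketch shows only the core idea of creating scent from page-to-widget matches.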
Video lens: rapid playback and exploration of large video collections and associated metadata
In UIST ’14, 2014
Cited by 1 (0 self)
We present Video Lens, a framework that allows users to visualize and interactively explore large collections of videos and associated metadata. The primary goal of the framework is to let users quickly find relevant sections within the videos and play them back in rapid succession. The individual UI elements are linked and highly interactive, supporting a faceted search paradigm and encouraging exploration of the data set. We demonstrate the capabilities and specific scenarios of Video Lens within the domain of professional baseball videos. A user study with 12 participants indicates that Video Lens efficiently supports a diverse range of powerful yet desirable video query tasks, while a series of interviews with professionals in the field demonstrates the framework's benefits and future potential.
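The faceted search paradigm mentioned above reduces to conjunctive filtering over segment metadata. A minimal sketch with hypothetical baseball-video fields (the schema and field names are illustrative, not Video Lens's actual data model):

```python
# Sketch: faceted filtering over video-segment metadata. Every selected
# facet must match for a segment to be kept (conjunctive semantics).
def faceted_filter(segments, **facets):
    """Keep segments whose metadata matches every selected facet value."""
    return [s for s in segments
            if all(s.get(k) == v for k, v in facets.items())]

segments = [
    {"pitcher": "A", "pitch": "fastball", "outcome": "strike"},
    {"pitcher": "A", "pitch": "curveball", "outcome": "ball"},
    {"pitcher": "B", "pitch": "fastball", "outcome": "strike"},
]
print(faceted_filter(segments, pitcher="A", outcome="strike"))
```

Linking such filters to playback (queueing the matching segments in rapid succession) is what turns this filtering into the exploration workflow the abstract describes.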
Input Video + Steps Stage 1: Subgoal Generation Stage 2: Subgoal Evaluation
Figure 1. We present a three-stage workflow to generate labels for groups of steps (subgoal labels) for how-to videos. This workflow is designed to engage people actively trying to learn from the video to contribute the information.
Websites like YouTube host millions of how-to videos, but their interfaces are not optimized for learning. Previous research suggests that people learn more from how-to videos when the videos are accompanied by outlines showing individual steps and labels for groups of steps (subgoals). We envision an alternative video player where the steps and subgoals are displayed alongside the video. To generate this information for existing videos, we introduce learnersourcing, an approach in which intrinsically motivated learners contribute to a human computation workflow as they naturally go about learning from the videos. To demonstrate this method, we deployed a live website with a workflow for constructing subgoal labels implemented on a set of introductory web programming videos. For the four videos with the highest participation, we found that a majority of learner-generated subgoals were comparable in quality to expert-generated ones. Learners commented that the system helped them grasp the material, suggesting that our workflow did not detract from the learning experience.
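Two of the stages named in the entry title, subgoal generation and subgoal evaluation, can be sketched as propose-then-vote aggregation. This is a sketch in spirit only; the actual workflow, data structures, and tie-breaking rules are assumptions:

```python
# Sketch: stage 1 collects candidate subgoal labels from learners;
# stage 2 later learners vote among them; the top-voted label wins
# (ties broken by proposal order). Labels here are hypothetical.
from collections import Counter

def pick_subgoal(proposals, votes):
    """proposals: candidate labels in the order proposed.
    votes: labels chosen by evaluating learners.
    Returns the most-voted proposal."""
    counts = Counter(votes)
    return max(proposals, key=lambda p: counts[p])

proposals = ["Set up the HTML file", "Start coding", "Create the page skeleton"]
votes = ["Set up the HTML file", "Create the page skeleton",
         "Set up the HTML file"]
print(pick_subgoal(proposals, votes))  # → 'Set up the HTML file'
```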
General Terms
Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior. The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine’s inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior’s implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.
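The state-comparison step at the heart of this approach can be modeled with a small sketch. Captured element states are represented here as plain dicts of computed CSS properties; the property names, values, and the diff rule are all assumptions for illustration, not Scry's actual capture format:

```python
# Sketch: given two captured element states (modeled as dicts of
# computed CSS properties), report which properties changed and how.
def diff_states(before, after):
    """Return {property: (old_value, new_value)} for every difference."""
    keys = set(before) | set(after)
    return {k: (before.get(k), after.get(k))
            for k in sorted(keys) if before.get(k) != after.get(k)}

state_a = {"display": "none", "color": "#000", "width": "100px"}
state_b = {"display": "block", "color": "#000", "width": "120px"}
print(diff_states(state_a, state_b))
```

Scry goes further by attributing each difference to the JavaScript lines that caused it; the sketch covers only the what-changed half of that pipeline.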
Video Digests: A Browsable, Skimmable Format for Informational Lecture Videos
Increasingly, authors are publishing long informational talks, lectures, and distance-learning videos online. However, it is difficult to browse and skim the content of such videos using current timeline-based video players. Video digests are a new format for informational videos that afford browsing and skimming by segmenting videos into a chapter/section structure and providing short text summaries and thumbnails for each section. Viewers can navigate by reading the summaries and clicking on sections to access the corresponding point in the video. We present a set of tools to help authors create such digests using transcript-based interactions. With our tools, authors can manually create a video digest from scratch, or they can automatically generate a digest by applying a combination of algorithmic and crowdsourcing techniques and then manually refine it as needed. Feedback from first-time users suggests that our transcript-based authoring tools and automated techniques greatly facilitate video digest creation. In an evaluative crowdsourced study we find that given a short viewing time, video digests support browsing and skimming better than timeline-based or transcript-based video players.
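The chapter/section structure described above can be sketched as a grouping of timestamped transcript segments at section boundaries. The data model and the placeholder "summary" (just each section's first segment) are assumptions; the paper's actual summaries come from algorithmic and crowdsourced techniques:

```python
# Sketch: group (time_sec, text) transcript segments into sections at
# given boundary times, producing a digest-like structure. The stub
# summary (first segment of each section) is a placeholder, not the
# paper's summarization method.
def build_digest(transcript, boundaries):
    """transcript: list of (time_sec, text); boundaries: sorted start times."""
    sections = {b: [] for b in boundaries}
    for t, text in transcript:
        # Assign each segment to the latest boundary at or before it.
        start = max(b for b in boundaries if b <= t)
        sections[start].append(text)
    return [{"start": b, "summary": segs[0], "n_segments": len(segs)}
            for b, segs in sorted(sections.items())]

transcript = [(0, "Welcome."), (40, "First topic."),
              (95, "Second topic."), (130, "Details.")]
print(build_digest(transcript, [0, 90]))
```

Rendering each entry as a thumbnail plus summary, with a click seeking to `start`, yields the browsable format the abstract describes.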