DemoCut: Generating Concise Instructional Videos for Physical Demonstrations
Cited by 5 (0 self)
Abstract:
Amateur instructional videos often show a single uninterrupted take of a recorded demonstration without any edits. While easy to produce, such videos are often too long as they include unnecessary or repetitive actions as well as mistakes. We introduce DemoCut, a semi-automatic video editing system that improves the quality of amateur instructional videos for physical tasks. DemoCut asks users to mark key moments in a recorded demonstration using a set of marker types derived from our formative study. Based on these markers, the system uses audio and video analysis to automatically organize the video into meaningful segments and apply appropriate video editing effects. To understand the effectiveness of DemoCut, we report a technical evaluation of seven video tutorials created with DemoCut. In a separate user evaluation, all eight participants successfully created a complete tutorial with a variety of video editing effects using our system.
Automated video looping with progressive dynamism
ACM Transactions on Graphics, 2013
Cited by 4 (2 self)
Abstract:
Given a short video we create a representation that captures a spectrum of looping videos with varying levels of dynamism, ranging from a static image to a highly animated loop. In such a progressively dynamic video, scene liveliness can be adjusted interactively using a slider control. Applications include background images and slideshows, where the desired level of activity may depend on personal taste or mood. The representation also provides a segmentation of the scene into independently looping regions, enabling interactive local adjustment over dynamism. For a landscape scene, this control might correspond to selective animation and deanimation of grass motion, water ripples, and swaying trees. Converting arbitrary video to looping content is a challenging research problem. Unlike prior work, we explore an optimization in which each pixel automatically determines its own looping period. The resulting nested segmentation of static and dynamic scene regions forms an extremely compact representation.
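The per-pixel looping idea above can be illustrated with a toy sketch. The function below picks a loop start and period for a single pixel's intensity time-series by minimizing the mismatch at the loop seam. This is a deliberately simplified stand-in, not the paper's method: the paper jointly optimizes all pixels with spatial-consistency terms, which this sketch omits, and the function name and cost are illustrative assumptions.

```python
def best_loop(pixel_ts, min_period=4):
    """Toy per-pixel loop search: choose (start, period) minimizing the
    squared intensity mismatch at the loop seam. A simplified stand-in for
    per-pixel looping-period optimization; ignores spatial consistency."""
    T = len(pixel_ts)
    best_start, best_period, best_cost = 0, min_period, float("inf")
    for period in range(min_period, T):
        for start in range(T - period):
            # Seam cost: how different this pixel looks at the loop end
            # versus the loop start.
            cost = (pixel_ts[start] - pixel_ts[start + period]) ** 2
            if cost < best_cost:
                best_start, best_period, best_cost = start, period, cost
    return best_start, best_period
```

For an exactly periodic series the seam cost reaches zero at the true period; for example, `best_loop([0, 3, 1, 4, 2] * 4)` returns `(0, 5)`.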
Draco: Bringing Life to Illustrations with Kinetic Textures
Cited by 4 (1 self)
Abstract:
Figure 1 caption: A dynamic illustration authored with Draco, capturing the living qualities of a moment with continuous dynamic phenomena, yet exhibiting the unique timeless nature of a still picture. The animated figure is best viewed in Adobe Reader.

We present Draco, a sketch-based interface that allows artists and casual users alike to add a rich set of animation effects to their drawings, seemingly bringing illustrations to life. While previous systems have introduced sketch-based animations for individual objects, our contribution is a unified framework of motion controls that allows users to seamlessly add coordinated motions to object collections. We propose a framework built around kinetic textures, which provide continuous animation effects while preserving the unique timeless nature of still illustrations. This enables many dynamic effects that are difficult or impossible with previous sketch-based tools, such as a school of fish swimming, tree leaves blowing in the wind, or water rippling in a pond. We describe our implementation and illustrate the repertoire of animation effects it supports. A user study with professional animators and casual users demonstrates the variety of animations, applications, and creative possibilities our tool provides.

Keywords: sketching; animation; kinetic textures; direct manipulation.
A tool for automatic cinemagraphs
In Proceedings of the 20th ACM International Conference on Multimedia (MM '12), 2012
Cited by 1 (0 self)
Smooth Loops from Unconstrained Video
Cited by 1 (0 self)
Abstract:
Converting unconstrained video sequences into videos that loop seamlessly is an extremely challenging problem. In this work, we take the first steps towards automating this process by focusing on an important subclass of videos containing a single dominant foreground object. Our technique makes two novel contributions over previous work. First, we propose a correspondence-based similarity metric to automatically identify a good transition point in the video where the appearance and dynamics of the foreground are most consistent. Second, we develop a technique that aligns both the foreground and background about this transition point using a combination of global camera path planning and patch-based video morphing. We demonstrate that this allows us to create natural, compelling, loopy videos from a wide range of videos collected from the internet.
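The transition-point search can be sketched as a scan over frame pairs, picking the pair whose appearance difference is smallest so that jumping from the later frame back to the earlier one is least jarring. The sum-of-squared-differences distance below is a crude stand-in for the paper's correspondence-based similarity metric, and the function name and `min_gap` parameter are illustrative assumptions.

```python
def find_transition(frames, min_gap=10):
    """Pick the frame pair (i, j), with j - i >= min_gap, that has the
    smallest appearance difference; the loop then plays frames i..j and
    jumps back to i. SSD is a crude stand-in for a correspondence-based
    similarity metric. Each frame is a flat sequence of pixel values."""
    best_pair, best_cost = None, float("inf")
    for i in range(len(frames)):
        for j in range(i + min_gap, len(frames)):
            cost = sum((a - b) ** 2 for a, b in zip(frames[i], frames[j]))
            if cost < best_cost:
                best_pair, best_cost = (i, j), cost
    return best_pair
```

On a toy sequence that repeats every 12 frames, `find_transition([[t % 12] for t in range(30)])` returns `(0, 12)`: the first pair of identical-looking frames far enough apart to form a loop.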
Real-Time Hyperlapse Creation via Optimal Frame Selection
Figure 1 caption: Hand-held videos often exhibit significant semi-regular high-frequency camera motion due to, for example, running. A naive 8x hyperlapse (i.e., keeping 1 out of every 8 frames) results in frames with little overlap that are hard to align. By allowing small violations of the target skip rate, we create hyperlapse videos that are smooth even when there is significant camera motion. Optimizing an energy function that balances matching the target rate while minimizing frame-to-frame motion yields a set of frames that are then stabilized. The mean and standard deviation of three successive stabilized frames show that the selected frames align much better than those from naive selection.

Abstract:
Long videos can be played much faster than real-time by recording only one frame per second or by dropping all but one frame each second, i.e., by creating a timelapse. Unstable hand-held moving videos can be stabilized with a number of recently described methods. Unfortunately, creating a stabilized timelapse, or hyperlapse, cannot be achieved through a simple combination of these two methods. Two hyperlapse methods have been previously demonstrated: one with high computational complexity and one requiring special sensors. We present an algorithm for creating hyperlapse videos that can handle significant high-frequency camera motion and runs in real-time on HD video. Our approach does not require sensor data, and thus can be run on videos captured with any camera. We optimally select frames from the input video that best match a desired target speed-up while also resulting in the smoothest possible camera motion. We evaluate our approach using several input videos from a range of cameras and compare these results to existing methods.
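The frame-selection step described above can be sketched as a shortest-path dynamic program: from each kept frame, the next frame pays an alignment cost plus a penalty for deviating from the target skip rate. The window, quadratic penalty, weight, and `motion_cost` callable below are illustrative assumptions, not the paper's actual energy function.

```python
def select_frames(motion_cost, n, target=8, window=(4, 12), w_speed=1.0):
    """Hyperlapse frame selection as a DP over frame indices 0..n-1.
    motion_cost(i, j) is an assumed callable giving the cost of aligning
    frame j to frame i (e.g. derived from feature matches). The frame
    after i must lie in [i + window[0], i + window[1]], and deviating
    from the target skip rate is penalized quadratically."""
    INF = float("inf")
    lo, hi = window
    cost = [INF] * n
    prev = [-1] * n
    cost[0] = 0.0
    for i in range(n):
        if cost[i] == INF:
            continue
        for j in range(i + lo, min(i + hi, n - 1) + 1):
            c = cost[i] + motion_cost(i, j) + w_speed * (j - i - target) ** 2
            if c < cost[j]:
                cost[j], prev[j] = c, i
    # Trace back from the farthest reachable frame.
    end = max(k for k in range(n) if cost[k] < INF)
    path = []
    while end != -1:
        path.append(end)
        end = prev[end]
    return path[::-1]
```

With a uniform (zero) motion cost the optimum simply hits the target rate: `select_frames(lambda i, j: 0.0, 33)` returns `[0, 8, 16, 24, 32]`. A real motion cost pulls the path toward easier-to-align frame pairs at the expense of rate deviation.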
Automatic Cinemagraph Portraits
Abstract:
Cinemagraphs are a popular new type of visual media that lie in-between photos and video; some parts of the frame are animated and loop seamlessly, while other parts of the frame remain completely still. Cinemagraphs are especially effective for portraits because they capture the nuances of our dynamic facial expressions. We present a completely automatic algorithm for generating portrait cinemagraphs from a short video captured with a hand-held camera. Our algorithm uses a combination of face tracking and point tracking to segment face motions into two classes: gross, large-scale motions that should be removed from the video, and dynamic facial expressions that should be preserved. This segmentation informs a spatially-varying warp that removes the large-scale motion, and a graph-cut segmentation of the frame into dynamic and still regions that preserves the finer-scale facial expression motions. We demonstrate the success of our method with a variety of results and a comparison to previous work.
Fast Computation of Seamless Video Loops
Abstract:
Short looping videos concisely capture the dynamism of natural scenes. Creating seamless loops usually involves maximizing spatiotemporal consistency and applying Poisson blending. We take an end-to-end view of the problem and present new techniques that jointly improve loop quality while also significantly reducing processing time. A key idea is to relax the consistency constraints to anticipate the subsequent blending, thereby enabling looping of low-frequency content like moving clouds and changing illumination. We also analyze the input video to remove an undesired bias toward short loops. The quality gains are demonstrated visually and confirmed quantitatively using a new gradient-domain consistency metric. We improve system performance by classifying potentially loopable pixels, masking the 2D graph cut, pruning graph-cut labels based on dominant periods, and optimizing on a coarse grid while retaining finer detail. Together these techniques reduce computation times from tens of minutes to nearly real-time.
Juxtapoze: Supporting Serendipity and Creative Expression in Clipart Compositions
Abstract:
Figure 1 caption: The Juxtapoze workflow (from left to right): (1) explore the shape database using scribbled input; (2) edit a selected shape using standard drawing operations; (3) drag the shape into position on the canvas; (4) compose it with other shapes; and (5) repeat for a full drawing.

Juxtapoze is a clipart composition workflow that supports creative expression and serendipitous discoveries in the shape domain. We achieve creative expression by supporting a workflow of searching, editing, and composing: the user queries the shape database using strokes, selects the desired search result, and finally modifies the selected image before composing it into the overall drawing. Serendipitous discovery of shapes is facilitated by allowing multiple exploration channels, such as doodles, shape filtering, and relaxed search. Results from a qualitative evaluation show that Juxtapoze makes the process of creating image compositions enjoyable and supports creative expression and serendipity.
LACES: Live Authoring through Compositing and Editing of Streaming Video
2014