Results 1 - 10 of 21
RepFinder: Finding Approximately Repeated Scene Elements for Image Editing
Abstract - Cited by 35 (20 self)
Figure 1: Repeated element detection and manipulation. (Left to right) Original image with user scribbles indicating an object template (red) and background (green); repeated instances detected, completed, dense correspondence established, and ordered in layers; fish in the original image replaced by a different kind of fish from a reference image (top-right inset); rearranged fish.
Repeated elements are ubiquitous and abundant in both man-made and natural scenes. Editing such images while preserving the repetitions and their relations is nontrivial due to overlap, missing parts, deformation across instances, illumination variation, etc. Manually enforcing such relations is laborious and error-prone. We propose a novel framework in which user scribbles guide the detection and extraction of such repeated elements. Our detection process, based on a novel boundary-band method, robustly extracts the repetitions along with their deformations. The algorithm considers only the shape of the elements, ignoring similarity based on color, texture, etc. We then use topological sorting to establish a partial depth ordering of overlapping repeated instances. Missing parts of occluded instances are completed using information from other instances. The extracted repeated instances can then be seamlessly edited and manipulated for a variety of high-level tasks that are otherwise difficult to perform. We demonstrate the versatility of our framework on a large set of inputs of varying complexity, showing applications to image rearrangement, edit transfer, deformation propagation, and instance replacement.
Keywords: image editing, shape-aware manipulation, edit propagation
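The partial depth ordering via topological sorting mentioned in this abstract can be sketched as follows. The pairwise "a occludes b" evidence and the use of Kahn's algorithm are illustrative assumptions, not the paper's exact procedure:

```python
from collections import defaultdict, deque

def layer_order(occludes):
    """Kahn's topological sort: given pairs (a, b) meaning
    'instance a occludes instance b', return a front-to-back
    layer ordering consistent with all pairwise evidence."""
    succ = defaultdict(set)
    indeg = defaultdict(int)
    nodes = set()
    for a, b in occludes:
        nodes.update((a, b))
        if b not in succ[a]:
            succ[a].add(b)
            indeg[b] += 1
    # start from instances nothing occludes
    queue = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in sorted(succ[n]):
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("occlusion cycle: no consistent depth order")
    return order
```

Because the occlusion evidence is only pairwise, the result is a partial order made total by the sort; cyclic occlusions (which do occur in real scenes) are reported rather than silently resolved.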
Video inpainting under constrained camera motion
- IEEE Trans. Image Process., 2007
Abstract - Cited by 33 (1 self)
A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground; it may occlude one object and be occluded by another. The algorithm consists of a simple pre-processing stage and two video-inpainting steps. In the pre-processing stage we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help produce time-consistent results and also improve the performance of the algorithm by reducing the search space. In the first video-inpainting step we reconstruct moving objects in the foreground that are 'occluded' by the region to be inpainted. To this end we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible; the remaining pixels are filled in by extending spatial texture-synthesis techniques to the spatio-temporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement, is fast, does not require statistical models of the background or foreground, and works well in the presence of rich and cluttered background.
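The priority-based copying scheme in the first step can be illustrated with a simplified fill-ordering sketch. The 3x3 confidence neighborhood and the `fill_order`/`conf` names are assumptions for illustration, not this paper's exact priority function:

```python
import numpy as np

def fill_order(mask, conf):
    """Greedy priority-based fill ordering for a binary hole mask:
    repeatedly pick the hole pixel whose known 3x3 neighborhood
    carries the highest total confidence (a simplified stand-in
    for the paper's priority scheme)."""
    mask = mask.copy()
    conf = conf.astype(float).copy()
    order = []
    while mask.any():
        best, best_p = -1.0, None
        for i, j in zip(*np.nonzero(mask)):
            i0, i1 = max(i - 1, 0), min(i + 2, mask.shape[0])
            j0, j1 = max(j - 1, 0), min(j + 2, mask.shape[1])
            # priority = confidence mass of the known neighbors
            p = conf[i0:i1, j0:j1][~mask[i0:i1, j0:j1]].sum()
            if p > best:
                best, best_p = p, (i, j)
        i, j = best_p
        order.append((i, j))
        mask[i, j] = False
        conf[i, j] = best / 9.0  # filled pixels get reduced confidence
    return order
```

Pixels surrounded by more (and more confident) known data are filled first, so information flows inward from the hole boundary, which is the essential behavior of priority-driven schemes.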
A Rank Minimization Approach to Video Inpainting
Abstract - Cited by 16 (3 self)
This paper addresses the problem of video inpainting, that is, seamlessly reconstructing missing portions in a set of video frames. We propose to solve this problem as follows: (i) find a set of descriptors that encapsulate the information necessary to reconstruct a frame, (ii) find an optimal estimate of the value of these descriptors for the missing or corrupted frames, and (iii) use the estimated values to reconstruct the frames. The main result of the paper shows that the optimal descriptor estimates can be efficiently obtained by minimizing the rank of a matrix constructed directly from the available data, leading to a simple, computationally attractive, dynamic inpainting algorithm that optimizes the use of spatio-temporal information. Moreover, contrary to most currently available techniques, the method can handle non-periodic target motions, non-stationary backgrounds, and moving cameras. These results are illustrated with several examples, including reconstructing dynamic textures and object disocclusion in cases involving both moving targets and a moving camera.
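Rank minimization is NP-hard in general; a standard convex surrogate is nuclear-norm minimization via iterative singular-value thresholding. The sketch below illustrates that idea for filling missing matrix entries; it is not the paper's exact solver, and `tau`/`iters` are illustrative parameters:

```python
import numpy as np

def complete_low_rank(M, mask, tau=1.0, iters=300):
    """Fill missing entries of M (mask==True where observed) by
    iterative singular-value thresholding, a common convex
    surrogate for rank minimization (not this paper's solver)."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)  # shrink singular values
        X = (U * s) @ Vt              # low-rank estimate
        X[mask] = M[mask]             # keep observed entries exact
    return X
```

Stacking per-frame descriptors as columns of `M`, missing or corrupted frames become missing columns, and a low-rank completion of the matrix yields descriptor estimates consistent with the observed dynamics.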
Video completion for perspective camera under constrained motion
- In: Proc. ICIP
Abstract - Cited by 12 (0 self)
This paper presents a novel technique to fill in the missing background and moving foreground of a video captured by a static or moving camera. Unlike previous efforts, which are typically based on processing in the 3D data volume, we slice the volume along the motion manifold of the moving object, thereby reducing the search space from 3D to 2D while still preserving spatial and temporal coherence. In addition to its computational efficiency, the proposed approach, based on geometric video analysis, is also able to handle real videos under perspective distortion, as well as common camera motions such as panning, tilting, and zooming. The experimental results demonstrate that our algorithm performs comparably to 3D-search-based methods while extending current state-of-the-art repairing techniques to videos with projective effects, as well as illumination changes.
Gradient based image completion by solving the Poisson equation
- Comput. Graph., 2007
Abstract - Cited by 11 (2 self)
This paper presents a novel gradient-based image completion algorithm for removing significant objects from natural images or photographs. Our method reconstructs the region of removal in two phases. First, the gradient maps in the removed area are completed through a patch-based filling algorithm. Then, the image is reconstructed from the gradient maps by solving a Poisson equation. A new patch-matching criterion is developed in our approach to govern the completion of the gradient maps. Both gradient and color information are incorporated in this new criterion, yielding a better image-completion result. Several examples and comparisons are given at the end of the paper to demonstrate the performance of our gradient-based image-completion approach.
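The second phase, reconstructing the image from the completed gradient maps by solving a Poisson equation, can be sketched with a plain Jacobi iteration on the discrete Laplacian. A real implementation would use a fast Poisson or sparse direct solver; this is an illustrative minimal version:

```python
import numpy as np

def poisson_reconstruct(gx, gy, boundary, mask, iters=2000):
    """Jacobi iteration for the discrete Poisson equation
    lap(I) = div(g) inside `mask`, with Dirichlet values taken
    from `boundary` outside the mask. gx/gy are forward
    differences of the target gradient field."""
    I = boundary.astype(float).copy()
    # divergence of the target gradient field (backward differences)
    div = np.zeros_like(I)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    for _ in range(iters):
        nb = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
              np.roll(I, 1, 1) + np.roll(I, -1, 1))
        # discrete Poisson update: I = (sum of 4 neighbors - div) / 4
        I = np.where(mask, (nb - div) / 4.0, boundary)
    return I
```

Solving in the gradient domain is what lets the completed region blend seamlessly: the Poisson solve distributes any mismatch between the filled gradients and the boundary values smoothly over the hole.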
Virtual contour-guided video object inpainting using posture mapping and retrieval
- IEEE Trans. Multimedia, 2011
Abstract - Cited by 7 (1 self)
This paper presents a novel framework for object completion in a video. To complete an occluded object, our method first samples the 3-D volume of the video into directional spatio-temporal slices and performs patch-based image inpainting to complete the partially damaged object trajectories in the 2-D slices. The completed slices are then combined to obtain a sequence of virtual contours of the damaged object. Next, a posture-sequence retrieval technique is applied to the virtual contours to retrieve the most similar sequence of object postures among the available non-occluded postures. Key-posture selection and indexing are used to reduce the complexity of posture-sequence retrieval. We also propose a synthetic posture generation scheme that enriches the collection of postures so as to reduce the effect of insufficient postures. Our experimental results demonstrate that the proposed method can maintain the spatial consistency and temporal motion continuity of an object simultaneously.
Index Terms: object completion, posture mapping, posture sequence retrieval, synthetic posture, video inpainting
Video falsifying by motion interpolation and inpainting
- In: Proc. IEEE CVPR (2008) 1–8
Abstract - Cited by 6 (2 self)
We change the behavior of actors in a video. For instance, the outcome of a 100-meter race in the Olympic Games can be falsified. We track objects and segment motions using a modified mean-shift mechanism. The resulting video layers can be played at different speeds and from different reference points with respect to the original video. To obtain smooth movement of target objects, a motion interpolation mechanism is proposed based on continuous stick figures (i.e., a video of a human skeleton) and video inpainting. The video inpainting mechanism is performed in a quasi-3D space via guided 3D patch matching for filling. Interpolated target objects and background layers are fused using graph cut. It is hard to tell whether a falsified video is the original. We demonstrate the original and falsified videos on our website.
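The motion-interpolation step over continuous stick figures can be illustrated, at its simplest, by linear interpolation of joint positions between two key poses. The paper's mechanism is more elaborate, so treat this as a minimal sketch with an assumed pose representation (a list of 2-D joint coordinates):

```python
def interpolate_pose(p0, p1, t):
    """Linearly interpolate between two stick-figure poses,
    each a list of (x, y) joint coordinates, for 0 <= t <= 1.
    A minimal stand-in for the paper's motion interpolation."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(p0, p1)]
```

Sampling `t` at the desired playback rate yields the in-between skeleton frames that the inpainting stage then fleshes out with appearance data.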
Efficient object based video inpainting
- ICIP, 2006
Abstract - Cited by 5 (1 self)
Video inpainting is the process of repairing missing regions (holes) in videos. Most automatic techniques are computationally intensive and unable to repair large holes. To tackle these challenges, a computationally efficient algorithm that separately inpaints foreground objects and the background is proposed. Using dynamic programming, foreground objects are holistically inpainted with object templates that minimize a sliding-window dissimilarity cost function. The static background is inpainted by adaptive background replacement and image inpainting.
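The sliding-window dynamic program for choosing object templates can be sketched as a Viterbi-style recursion. The per-frame cost matrix and the transition-penalty matrix below are illustrative stand-ins for the paper's dissimilarity function:

```python
import numpy as np

def choose_templates(costs, trans):
    """Viterbi-style DP: pick one template per missing frame.
    costs[t, k] - dissimilarity of template k at frame t
    trans[k, l] - penalty for template l following template k
    Returns the minimum-total-cost template sequence."""
    T, K = costs.shape
    dp = costs[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        step = dp[:, None] + trans            # K x K: from k to l
        back[t] = np.argmin(step, axis=0)     # best predecessor per l
        dp = step[back[t], np.arange(K)] + costs[t]
    seq = [int(np.argmin(dp))]
    for t in range(T - 1, 0, -1):             # backtrack
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]
```

The transition term is what makes the completion "holistic": it penalizes template sequences that would look temporally inconsistent, rather than matching each frame in isolation.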
How Not to Be Seen – Object Removal from Videos of Crowded Scenes
Abstract - Cited by 4 (1 self)
Figure 1: To remove the foremost person from this video, both the dynamic scene elements and the background behind it need to be restored. In this sample from our Museum sequence, the right-hand side of each frame pair shows the inpainted result.
Removing dynamic objects from videos is an extremely challenging problem that even visual-effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new approach to video completion that can deal with complex scenes containing dynamic background and non-periodically moving objects. We build upon the idea that the spatio-temporal hole left by a removed object can be filled with data available in other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches for an optimal pattern of pixel offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters and that has the desirable convergence properties for graph-cut-based optimization. We provide an interface to guide the completion process that both reduces computation time and allows efficient correction of small errors in the result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are more difficult than what existing methods have shown.
EXCOL: An EXtract-and-COmplete Layering Approach to Cartoon Animation Reusing
Abstract - Cited by 1 (0 self)
We introduce the EXCOL method (EXtract-and-COmplete Layering), a novel cartoon animation processing technique to convert a traditional animated cartoon video into multiple semantically meaningful layers. Our technique is inspired by vision-based layering techniques but focuses on shape cues in both the extraction and completion steps to reflect the unique characteristics of cartoon animation. For layer extraction, we define a novel similarity measure incorporating both the shape and color of automatically segmented regions within individual frames and propagate a small set of user-specified layer labels among similar regions across frames. By clustering regions with the same labels, each frame is appropriately partitioned into different layers, with each layer containing semantically meaningful content. A warping-based approach is then used to fill missing parts caused by occlusion within the extracted layers to achieve a complete representation. EXCOL provides a flexible way to effectively reuse traditional cartoon animations with only a small amount of user interaction. We demonstrate that our EXCOL method is effective and robust, and that the layered representation benefits a variety of applications in cartoon animation processing.
Index Terms: cartoon animation, layer extraction, layer completion, label propagation
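The propagation of user-specified layer labels among similar regions can be sketched as a greedy pass over a region-similarity graph. The similarity values and the simple "adopt the most similar labeled neighbor" rule below are illustrative assumptions, not the paper's shape-and-color measure:

```python
def propagate_labels(sims, seeds):
    """Greedy label propagation over a region-similarity graph.
    sims  - dict {(i, j): similarity} for region pairs (symmetric)
    seeds - dict {region: label} of user-specified layer labels
    Unlabeled regions repeatedly adopt the label of their most
    similar already-labeled neighbor until nothing changes."""
    labels = dict(seeds)
    edges = {}
    for (i, j), s in sims.items():
        edges.setdefault(i, []).append((s, j))
        edges.setdefault(j, []).append((s, i))
    changed = True
    while changed:
        changed = False
        for r in edges:
            if r in labels:
                continue
            cand = [(s, n) for s, n in edges[r] if n in labels]
            if cand:
                _, n = max(cand)        # most similar labeled neighbor
                labels[r] = labels[n]
                changed = True
    return labels
```

Labels thus spread outward from the few user-marked regions along the strongest similarity links, which is the behavior the abstract describes, with clustering of same-labeled regions forming the final layers.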