Results 1 - 10 of 561
Image Quilting for Texture Synthesis and Transfer, 2001
"... We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces s ..."
Cited by 699 (18 self)
We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer -- rendering an object with a texture taken from a different object. More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information.
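For intuition only, here is a minimal quilting-style sketch in Python/NumPy: blocks are placed in raster order and each new block is the sample patch whose overlap region best matches what has already been synthesized. The block size, overlap width, tolerance, and the plain paste-over (the paper additionally computes a minimum-error boundary cut through the overlap) are illustrative assumptions, not the authors' exact implementation.

import numpy as np

def quilt(sample, out_size, block=32, overlap=8, rng=None):
    """Synthesize an out_size x out_size grayscale texture from `sample`."""
    rng = rng or np.random.default_rng(0)
    sample = sample.astype(float)
    step = block - overlap
    out = np.zeros((out_size, out_size))
    # Every block-sized patch of the input sample is a candidate.
    candidates = [sample[y:y + block, x:x + block]
                  for y in range(sample.shape[0] - block + 1)
                  for x in range(sample.shape[1] - block + 1)]
    for i in range(0, out_size - block + 1, step):
        for j in range(0, out_size - block + 1, step):
            if i == 0 and j == 0:                       # seed block: pick at random
                out[:block, :block] = candidates[rng.integers(len(candidates))]
                continue
            region = out[i:i + block, j:j + block]
            def overlap_error(c):                       # SSD over the already-filled overlap strips
                err = 0.0
                if j > 0:
                    err += np.sum((c[:, :overlap] - region[:, :overlap]) ** 2)
                if i > 0:
                    err += np.sum((c[:overlap, :] - region[:overlap, :]) ** 2)
                return err
            errs = np.array([overlap_error(c) for c in candidates])
            good = np.flatnonzero(errs <= errs.min() * 1.1 + 1e-9)  # near-best candidates
            out[i:i + block, j:j + block] = candidates[rng.choice(good)]
    return out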
Image analogies, 2001
"... Figure 1 An image analogy. Our problem is to compute a new “analogous ” image B ′ that relates to B in “the same way ” as A ′ relates to A. Here, A, A ′ , and B are inputs to our algorithm, and B ′ is the output. The full-size images are shown in Figures 10 and 11. This paper describes a new framewo ..."
Cited by 455 (8 self)
Figure 1 An image analogy. Our problem is to compute a new “analogous” image B′ that relates to B in “the same way” as A′ relates to A. Here, A, A′, and B are inputs to our algorithm, and B′ is the output. The full-size images are shown in Figures 10 and 11. This paper describes a new framework for processing images by example, called “image analogies.” The framework involves two stages: a design phase, in which a pair of images, with one image purported to be a “filtered” version of the other, is presented as “training data”; and an application phase, in which the learned filter is applied to some new target image in order to create an “analogous” filtered result. Image analogies are based on a simple multiscale autoregression, inspired primarily by recent results in texture synthesis. By choosing different types of source image pairs as input, the framework supports a wide variety of “image filter” effects, including traditional image filters, such as blurring or embossing; improved texture synthesis, in which some textures are synthesized with higher quality than by previous approaches; super-resolution, in which a higher-resolution image is inferred from a low-resolution source; texture transfer, in which images are “texturized” with some arbitrary source texture; artistic filters, in which various drawing and painting styles are synthesized based on scanned real-world examples; and texture-by-numbers, in which realistic scenes, composed of a variety of textures, are created using a simple painting interface.
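As a heavily simplified, single-scale illustration of the lookup at the heart of the framework: for each pixel of B, find the pixel of A with the most similar neighborhood and copy the corresponding pixel of A′ into B′. The paper's multiscale pyramid, feature vectors, causal neighborhoods over the partially synthesized B′, and coherence search are all omitted here; the neighborhood radius and brute-force search below are illustrative assumptions.

import numpy as np

def analogy(A, Ap, B, radius=2):
    """A, Ap, B: (H, W) arrays; returns a crude Bp such that Bp:B ~ Ap:A."""
    A, Ap, B = (m.astype(float) for m in (A, Ap, B))
    H, W = A.shape
    h, w = B.shape
    r = radius
    # Stack every (2r+1)^2 neighborhood of A as a feature vector.
    feats, coords = [], []
    for y in range(r, H - r):
        for x in range(r, W - r):
            feats.append(A[y - r:y + r + 1, x - r:x + r + 1].ravel())
            coords.append((y, x))
    feats = np.asarray(feats)
    Bp = np.zeros_like(B)
    for y in range(r, h - r):
        for x in range(r, w - r):
            q = B[y - r:y + r + 1, x - r:x + r + 1].ravel()
            best = int(np.argmin(np.sum((feats - q) ** 2, axis=1)))
            Bp[y, x] = Ap[coords[best]]      # copy the "filtered" pixel from A'
    return Bp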
Dynamic Textures, 2002
"... Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include sea-waves, smoke, foliage, whirlwind etc. We present a novel characterization of dynamic textures that poses the problems of modeling, learning, recognizing and synthesizing ..."
Cited by 377 (18 self)
Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include sea waves, smoke, foliage, whirlwinds, etc. We present a novel characterization of dynamic textures that poses the problems of modeling, learning, recognizing and synthesizing dynamic textures on a firm analytical footing. We borrow tools from system identification to capture the "essence" of dynamic textures; we do so by learning (i.e. identifying) models that are optimal in the sense of maximum likelihood or minimum prediction error variance. For the special case of second-order stationary processes, we identify the model sub-optimally in closed form. Once learned, a model has predictive power and can be used for extrapolating synthetic sequences to infinite length with negligible computational cost. We present experimental evidence that, within our framework, even low-dimensional models can capture very complex visual phenomena.
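A rough Python/NumPy sketch of the kind of closed-form identification the abstract alludes to (PCA of the frames for the observation map, least squares for the state transition, Gaussian driving noise from the residuals); the state dimension, noise model, and regularization are illustrative assumptions rather than the paper's exact estimator.

import numpy as np

def learn_dynamic_texture(frames, n_states=20):
    """frames: array of shape (T, H, W); returns (A, C, mean, Q_chol, x0)."""
    T = frames.shape[0]
    Y = frames.reshape(T, -1).T.astype(float)            # pixels x time
    mean = Y.mean(axis=1, keepdims=True)
    Y = Y - mean
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                                   # observation matrix
    X = np.diag(s[:n_states]) @ Vt[:n_states, :]          # state sequence
    # Transition matrix by least squares: X[:, 1:] ~= A @ X[:, :-1].
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    # Driving-noise covariance from the one-step prediction residuals.
    E = X[:, 1:] - A @ X[:, :-1]
    Q = E @ E.T / (T - 1)
    Q_chol = np.linalg.cholesky(Q + 1e-8 * np.eye(n_states))
    return A, C, mean, Q_chol, X[:, 0]

def synthesize(A, C, mean, Q_chol, x0, shape, n_frames, rng=None):
    """Extrapolate new frames of the learned model to arbitrary length."""
    rng = rng or np.random.default_rng(0)
    x, frames = x0.copy(), []
    for _ in range(n_frames):
        x = A @ x + Q_chol @ rng.standard_normal(x.shape)
        frames.append((C @ x + mean.ravel()).reshape(shape))
    return np.stack(frames)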
Region Filling and Object Removal by Exemplar-Based Image Inpainting, 2004
"... A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. In the past, this problem has been addressed by two classes of algorithms: 1) “texture synthesis” algorithms for generating large image re ..."
Cited by 365 (1 self)
A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. In the past, this problem has been addressed by two classes of algorithms: 1) “texture synthesis” algorithms for generating large image regions from sample textures and 2) “inpainting” techniques for filling in small image gaps. The former has been demonstrated for “textures”—repeating two-dimensional patterns with some stochasticity; the latter focus on linear “structures,” which can be thought of as one-dimensional patterns, such as lines and object contours. This paper presents a novel and efficient algorithm that combines the advantages of these two approaches. We first note that exemplar-based texture synthesis contains the essential process required to replicate both texture and structure; the success of structure propagation, however, is highly dependent on the order in which the filling proceeds. We propose a best-first algorithm in which the confidence in the synthesized pixel values is propagated in a manner similar to the propagation of information in inpainting. The actual color values are computed using exemplar-based synthesis. In this paper, the simultaneous propagation of texture and structure information is achieved by a single, efficient algorithm. Computational efficiency is achieved by a block-based sampling process. A number of examples on real and synthetic images demonstrate the effectiveness of our algorithm in removing large occluding objects, as well as thin scratches. Robustness with respect to the shape of the manually selected target region is also demonstrated. Our results compare favorably to those obtained by existing techniques.
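To make the fill-order idea concrete, here is a small Python/NumPy sketch of confidence-driven, best-first exemplar filling. The patch size, the coarse stride of the brute-force source search, and the omission of the isophote-driven data term from the priority are simplifying assumptions, so this illustrates the scheduling idea rather than the authors' algorithm.

import numpy as np

def inpaint(img, mask, patch=9):
    """img: (H, W) float array; mask: True where pixels are missing."""
    img = img.astype(float).copy()
    known = ~mask
    conf = known.astype(float)                      # confidence: 1 known, 0 missing
    half = patch // 2
    H, W = img.shape

    def sl(p):                                      # patch slices around a center pixel
        y, x = p
        return slice(y - half, y + half + 1), slice(x - half, x + half + 1)

    while (~known).any():
        # Fill front: missing pixels with at least one known 4-neighbour.
        nb = (np.roll(known, 1, 0) | np.roll(known, -1, 0) |
              np.roll(known, 1, 1) | np.roll(known, -1, 1))
        front = [tuple(p) for p in np.argwhere(~known & nb)
                 if half <= p[0] < H - half and half <= p[1] < W - half]
        if not front:
            break
        # Priority = mean confidence of the surrounding patch (no data term here).
        target = front[int(np.argmax([conf[sl(p)].mean() for p in front]))]
        tys, txs = sl(target)
        tgt, tgt_known = img[tys, txs].copy(), known[tys, txs].copy()
        # Brute-force search (coarse stride) for the best fully-known source patch.
        best, best_err = None, np.inf
        for y in range(half, H - half, 2):
            for x in range(half, W - half, 2):
                sys_, sxs = sl((y, x))
                if not known[sys_, sxs].all():
                    continue
                err = np.sum(((img[sys_, sxs] - tgt) ** 2)[tgt_known])
                if err < best_err:
                    best, best_err = (sys_, sxs), err
        if best is None:
            break
        hole = ~tgt_known
        img[tys, txs][hole] = img[best][hole]                     # copy exemplar colours
        conf[tys, txs][hole] = conf[tys, txs][tgt_known].mean()   # propagate confidence
        known[tys, txs] = True
    return img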
Synthesizing Natural Textures
- In ACM Symposium on Interactive 3D Graphics
"... We present a simple texture synthesis algorithm that is well-suited for a specific class of naturally occurring textures. This class includes quasi-repeating patterns consisting of small objects of familiar but irregular size, such as flower fields, pebbles, forest undergrowth, bushes and tree branc ..."
Cited by 312 (1 self)
We present a simple texture synthesis algorithm that is well-suited for a specific class of naturally occurring textures. This class includes quasi-repeating patterns consisting of small objects of familiar but irregular size, such as flower fields, pebbles, forest undergrowth, bushes and tree branches. The algorithm starts from a sample image and generates a new image of arbitrary size whose appearance is similar to that of the original image. This new image does not change the basic spatial frequencies of the original image; instead it creates an image that is visually similar and is of a size set by the user. This method is fast and its implementation is straightforward. We extend the algorithm to allow direct user input for interactive control over the texture synthesis process. This allows the user to indicate large-scale properties of the texture appearance using a standard painting-style interface, and to choose among various candidate textures the algorithm can create by performing different numbers of iterations.
PatchMatch: A Randomized Correspondence Algorithm for . . ., 2009
"... This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image ..."
Cited by 243 (9 self)
This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools – image retargeting, completion and reshuffling – that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.
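The two insights (random sampling plus coherent propagation) fit in a short single-scale sketch of the nearest-neighbour-field computation. The patch size, iteration count, grayscale setting, and halving random-search radius below are illustrative assumptions, and the paper's editing tools are built on top of this field rather than shown here.

import numpy as np

def patch_dist(A, B, ay, ax, by, bx, p):
    return float(np.sum((A[ay:ay + p, ax:ax + p] - B[by:by + p, bx:bx + p]) ** 2))

def patchmatch(A, B, p=7, iters=5, rng=None):
    """nnf[y, x] = (by, bx): top-left of the best-matching patch in B for each patch of A."""
    rng = rng or np.random.default_rng(0)
    A, B = A.astype(float), B.astype(float)
    ah, aw = A.shape[0] - p + 1, A.shape[1] - p + 1
    bh, bw = B.shape[0] - p + 1, B.shape[1] - p + 1
    # Random initialization of the nearest-neighbour field and its cost.
    nnf = np.stack([rng.integers(0, bh, (ah, aw)), rng.integers(0, bw, (ah, aw))], axis=-1)
    cost = np.array([[patch_dist(A, B, y, x, *nnf[y, x], p) for x in range(aw)]
                     for y in range(ah)])

    def try_match(y, x, by, bx):
        by, bx = int(np.clip(by, 0, bh - 1)), int(np.clip(bx, 0, bw - 1))
        d = patch_dist(A, B, y, x, by, bx, p)
        if d < cost[y, x]:
            cost[y, x], nnf[y, x] = d, (by, bx)

    for it in range(iters):
        step = 1 if it % 2 == 0 else -1                   # alternate the scan order
        ys = range(ah) if step == 1 else range(ah - 1, -1, -1)
        xs = range(aw) if step == 1 else range(aw - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: shift the already-updated neighbour's match by one pixel.
                if 0 <= y - step < ah:
                    try_match(y, x, nnf[y - step, x][0] + step, nnf[y - step, x][1])
                if 0 <= x - step < aw:
                    try_match(y, x, nnf[y, x - step][0], nnf[y, x - step][1] + step)
                # Random search: sample around the current best with a halving radius.
                r = max(bh, bw)
                while r >= 1:
                    try_match(y, x,
                              nnf[y, x][0] + rng.integers(-r, r + 1),
                              nnf[y, x][1] + rng.integers(-r, r + 1))
                    r //= 2
    return nnf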
Simultaneous Structure and Texture Image Inpainting, 2003
"... An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented in this paper. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these function ..."
Cited by 222 (13 self)
An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented in this paper. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these functions separately with structure and texture filling-in algorithms. The first function used in the decomposition is of bounded variation, representing the underlying image structure, while the second function captures the texture and possible noise. The region of missing information in the bounded variation image is reconstructed using image inpainting algorithms, while the same region in the texture image is filled in with texture synthesis techniques. The original image is then reconstructed by adding back these two sub-images. The novel contribution of this paper is the combination of these three previously developed components (image decomposition, inpainting, and texture synthesis), which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics. Examples on real images show the advantages of this proposed approach.
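As a toy Python/NumPy illustration of the decompose, fill, recombine pipeline: a Gaussian blur stands in for the bounded-variation (structure) component and the residual for the texture component, the structure hole is filled by plain diffusion, and the texture hole by resampling known residual values. All three stand-ins are deliberate simplifications of the operators the paper actually uses (BV-texture decomposition, PDE inpainting, exemplar-based synthesis).

import numpy as np
from scipy.ndimage import gaussian_filter

def fill_structure_and_texture(img, mask, sigma=3.0, iters=500, rng=None):
    """img: (H, W) float array; mask: True where pixels are missing."""
    rng = rng or np.random.default_rng(0)
    img = img.astype(float)
    structure = gaussian_filter(img, sigma)      # crude stand-in for the BV part
    texture = img - structure                    # oscillatory residual (texture + noise)
    # 1) Fill the structure hole by diffusion (a toy stand-in for PDE inpainting).
    s = structure.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                      np.roll(s, 1, 1) + np.roll(s, -1, 1))
        s[mask] = avg[mask]
    # 2) Fill the texture hole by resampling known residual values
    #    (a real system would use exemplar-based texture synthesis here).
    t = texture.copy()
    t[mask] = rng.choice(texture[~mask], size=int(mask.sum()))
    return s + t                                 # recombine the two sub-images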
Implicit Probabilistic Models of Human Motion for Synthesis and Tracking - Hedvig Sidenbladh
- In European Conference on Computer Vision, 2002
"... This paper addresses the problem of probabilistically modeling 3D human motion for synthesis and tracking. Given the high dimensional nature of human motion, learning an explicit probabilistic model from available training data is currently impractical. Instead we exploit methods from texture synthe ..."
Cited by 201 (4 self)
This paper addresses the problem of probabilistically modeling 3D human motion for synthesis and tracking. Given the high dimensional nature of human motion, learning an explicit probabilistic model from available training data is currently impractical. Instead we exploit methods from texture synthesis that treat images as representing an implicit empirical distribution. These methods replace the problem of representing the probability of a texture pattern with that of searching the training data for similar instances of that pattern. We extend this idea to temporal data representing 3D human motion with a large database of example motions. To make the method useful in practice, we must address the problem of efficient search in a large training set.
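A tiny Python/NumPy sketch of the search-based idea: treat the motion database itself as the model, match the last few synthesized poses against every same-length window of training data, and append the pose that follows the best match. The window length, Euclidean distance, and greedy selection are illustrative assumptions; the paper is concerned precisely with making this kind of search efficient and probabilistic.

import numpy as np

def synthesize_motion(database, seed, n_frames, k=5):
    """database: (T, D) array of poses; seed: (k, D) initial poses."""
    T = database.shape[0]
    # Precompute every length-k window of the database as a flat feature vector.
    windows = np.stack([database[i:i + k].ravel() for i in range(T - k)])
    out = [p for p in seed]
    for _ in range(n_frames):
        query = np.concatenate(out[-k:], axis=None)       # the last k synthesized poses
        d = np.sum((windows - query) ** 2, axis=1)
        j = int(np.argmin(d))
        out.append(database[j + k].copy())                # pose that follows the best match
    return np.stack(out)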
Wang Tiles for Image and Texture Generation, 2003
"... We present a simple stochastic system for non-periodically tiling the plane with a small set of Wang Tiles. The tiles may be filled with texture, patterns, or geometry that when assembled create a continuous representation. The primary advantage of using Wang Tiles is that once the tiles are filled, ..."
Cited by 187 (4 self)
We present a simple stochastic system for non-periodically tiling the plane with a small set of Wang Tiles. The tiles may be filled with texture, patterns, or geometry that when assembled create a continuous representation. The primary advantage of using Wang Tiles is that once the tiles are filled, large expanses of non-periodic texture (or patterns or geometry) can be created as needed very efficiently at runtime.
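A minimal Python/NumPy sketch of the stochastic tiling step: each tile is a (north, east, south, west) tuple of edge colors, and the grid is filled in scan-line order by choosing uniformly among tiles whose west and north edges match the already-placed neighbors. The complete 16-tile set over two edge colors used in the example is an illustrative assumption that guarantees a match always exists; it is not the tile set from the paper.

import numpy as np

def wang_tiling(tiles, rows, cols, rng=None):
    """tiles: list of (n, e, s, w) edge colours; returns a rows x cols grid of tile indices."""
    rng = rng or np.random.default_rng(0)
    grid = np.full((rows, cols), -1, dtype=int)
    for r in range(rows):
        for c in range(cols):
            need_w = tiles[grid[r, c - 1]][1] if c > 0 else None   # east colour of left neighbour
            need_n = tiles[grid[r - 1, c]][2] if r > 0 else None   # south colour of upper neighbour
            ok = [i for i, (n, e, s, w) in enumerate(tiles)
                  if (need_w is None or w == need_w) and (need_n is None or n == need_n)]
            grid[r, c] = int(rng.choice(ok))    # uniform choice among matching tiles
    return grid

# Example: the complete 16-tile set over two edge colours (a match always exists).
tiles = [(n, e, s, w) for n in (0, 1) for e in (0, 1) for s in (0, 1) for w in (0, 1)]
print(wang_tiling(tiles, 4, 6))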