Fast texture synthesis using tree-structured vector quantization
2000
"... Figure 1: Our texture generation process takes an example texture patch (left) and a random noise (middle) as input, and modifies this random noise to make it look like the given example texture. The synthesized texture (right) can be of arbitrary size, and is perceived as very similar to the given ..."
Abstract
-
Cited by 561 (12 self)
- Add to MetaCart
(Show Context)
Figure 1: Our texture generation process takes an example texture patch (left) and a random noise (middle) as input, and modifies this random noise to make it look like the given example texture. The synthesized texture (right) can be of arbitrary size, and is perceived as very similar to the given example. Using our algorithm, textures can be generated within seconds, and the synthesized results are always tileable.

Texture synthesis is important for many applications in computer graphics, vision, and image processing. However, it remains difficult to design an algorithm that is both efficient and capable of generating high quality results. In this paper, we present an efficient algorithm for realistic texture synthesis. The algorithm is easy to use and requires only a sample texture as input. It generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. This permits us to apply texture synthesis to problems where it has traditionally been considered impractical. In particular, we have applied it to constrained synthesis for image editing and temporal texture generation. Our algorithm is derived from Markov Random Field texture models and generates textures through a deterministic searching process. We accelerate this synthesis process using tree-structured vector quantization.
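To make the synthesis process concrete, here is a minimal Python sketch of per-pixel neighborhood matching in the spirit of this abstract: each output pixel receives the sample pixel whose causal neighborhood best matches the partially synthesized output. The brute-force scan is exactly the step the paper accelerates with tree-structured vector quantization (TSVQ); all function and parameter names are illustrative, not the authors' code.

import numpy as np

def synthesize(sample, out_shape, half=2, seed=0):
    # Fill the output in raster order: each pixel gets the sample pixel whose
    # causal (L-shaped) neighborhood best matches the output built so far.
    rng = np.random.default_rng(seed)
    h, w = sample.shape
    out = rng.choice(sample.ravel(), size=out_shape)  # initialize with noise

    # Causal neighborhood offsets: pixels above, plus those to the left.
    offsets = [(dy, dx) for dy in range(-half, 1)
                        for dx in range(-half, half + 1) if (dy, dx) < (0, 0)]
    coords = [(y, x) for y in range(half, h) for x in range(half, w - half)]
    neigh = np.array([[sample[y + dy, x + dx] for dy, dx in offsets]
                      for y, x in coords], dtype=float)
    centers = np.array([sample[y, x] for y, x in coords])

    for y in range(half, out_shape[0]):
        for x in range(half, out_shape[1] - half):
            q = np.array([out[y + dy, x + dx] for dy, dx in offsets], dtype=float)
            best = np.argmin(((neigh - q) ** 2).sum(axis=1))  # O(n) scan; the
            out[y, x] = centers[best]                         # paper's TSVQ makes
    return out                                                # this O(log n)

# Toy usage: synthesize a 32x32 texture from a 16x16 striped sample.
sample = ((np.indices((16, 16)).sum(axis=0) % 4) * 64).astype(float)
result = synthesize(sample, (32, 32))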
Searching in metric spaces
2001
"... The problem of searching the elements of a set that are close to a given query element under some similarity criterion has a vast number of applications in many branches of computer science, from pattern recognition to textual and multimedia information retrieval. We are interested in the rather gen ..."
Abstract
-
Cited by 436 (38 self)
- Add to MetaCart
The problem of searching the elements of a set that are close to a given query element under some similarity criterion has a vast number of applications in many branches of computer science, from pattern recognition to textual and multimedia information retrieval. We are interested in the rather general case where the similarity criterion defines a metric space, instead of the more restricted case of a vector space. Many solutions have been proposed in different areas, in many cases without cross-knowledge. Because of this, the same ideas have been reconceived several times, and very different presentations have been given for the same approaches. We present some basic results that explain the intrinsic difficulty of the search problem. This includes a quantitative definition of the elusive concept of “intrinsic dimensionality.” We also present a unified ...
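For reference, the quantitative definition alluded to here is the survey's intrinsic dimensionality rho = mu^2 / (2 sigma^2), where mu and sigma^2 are the mean and variance of the distribution of pairwise distances. A rough sampling estimator of it, sketched in Python (the estimator and all names are mine):

import numpy as np

def intrinsic_dimensionality(points, metric, n_pairs=10_000, seed=0):
    # Estimate rho = mu^2 / (2 * sigma^2) from randomly sampled pairs.
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    keep = i != j
    d = np.array([metric(points[a], points[b]) for a, b in zip(i[keep], j[keep])])
    return d.mean() ** 2 / (2 * d.var())

# Uniform vectors under the Euclidean metric: rho grows with the dimension,
# matching the intuition that higher-dimensional spaces are harder to search.
euclid = lambda p, q: np.linalg.norm(p - q)
for dim in (2, 8, 32):
    pts = np.random.default_rng(1).random((2000, dim))
    print(dim, round(intrinsic_dimensionality(pts, euclid), 1))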
When Is "Nearest Neighbor" Meaningful?
In Int. Conf. on Database Theory, 1999
"... . We explore the effect of dimensionality on the "nearest neighbor " problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance ..."
Abstract
-
Cited by 408 (2 self)
- Add to MetaCart
We explore the effect of dimensionality on the "nearest neighbor" problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple...
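The concentration effect is easy to reproduce. The following sketch (a demonstration of the phenomenon, not the paper's formal construction) measures the relative contrast (Dmax - Dmin) / Dmin between a query's farthest and nearest neighbors for i.i.d. uniform data:

import numpy as np

# As the dimension grows, the nearest and farthest distances converge,
# so the relative contrast shrinks toward zero.
rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    data = rng.random((5000, dim))       # 5000 uniform points in [0, 1]^dim
    query = rng.random(dim)
    d = np.linalg.norm(data - query, axis=1)
    print(f"dim={dim:5d}  relative contrast={(d.max() - d.min()) / d.min():.3f}")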
Example-based super-resolution
IEEE Computer Graphics and Applications, 2001
"... The Problem: Pixel representations for images do not have resolution independence. When we zoom into a bitmapped image, we get a blurred image. Figure 1 shows the problem for a teapot image, rich with real-world detail. We know the teapot’s features should remain sharp as we zoom in on them, yet sta ..."
Abstract
-
Cited by 349 (5 self)
- Add to MetaCart
(Show Context)
The Problem: Pixel representations for images do not have resolution independence. When we zoom into a bitmapped image, we get a blurred image. Figure 1 shows the problem for a teapot image, rich with real-world detail. We know the teapot’s features should remain sharp as we zoom in on them, yet standard pixel interpolation methods, such as pixel replication (b, c) and cubic spline interpolation (d, e), introduce artifacts or blurring of edges. For images zoomed 3 octaves, such as these, sharpening the interpolated result has little useful effect (f, g). Many applications in graphics or image processing could benefit from such pixel resolution independence, such as texture mapping, enlarging consumer photographs, and converting NTSC video content to HDTV. We don’t expect perfect resolution independence—even the polygon representation doesn’t have that—but increasing the resolution independence of pixel-based representations is an important task for image-based rendering. Our example-based super-resolution algorithm yields Fig. 1 (h, i).

Previous Work: Researchers have long studied image interpolation, although only recently using machine learning or sampling approaches, which offer much power. Cubic spline interpolation [5] is a very common image interpolation function, but suffers from blurring of edges and image details. Recent attempts to improve on cubic spline interpolation [6, 8, 2] have met with limited success. Schreiber and collaborators [6] proposed a sharpened Gaussian interpolator function to minimize information ...
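A bare-bones sketch of the example-based idea: learn a dictionary of (low-resolution patch, missing high-frequency detail) pairs from training images, then paste the best-matching detail onto an interpolated input. Freeman et al. additionally enforce compatibility between neighboring patches (the part that makes the method work well), which this sketch omits; all names are mine.

import numpy as np

def extract_pairs(hi_img, factor=2, p=4):
    # Build training pairs by downsampling a high-resolution image and
    # recording, per patch, the detail lost by pixel-replication zooming.
    lo = hi_img[::factor, ::factor]
    up = np.kron(lo, np.ones((factor, factor)))   # pixel-replication zoom
    lo_patches, detail = [], []
    for y in range(0, hi_img.shape[0] - p, p):
        for x in range(0, hi_img.shape[1] - p, p):
            lo_patches.append(up[y:y+p, x:x+p].ravel())
            detail.append((hi_img[y:y+p, x:x+p] - up[y:y+p, x:x+p]).ravel())
    return np.array(lo_patches), np.array(detail)

def super_resolve(lo_img, lo_patches, detail, factor=2, p=4):
    # Zoom the input, then add the learned detail of the nearest training
    # patch to each block of the zoomed image.
    up = np.kron(lo_img, np.ones((factor, factor)))
    out = up.copy()
    for y in range(0, up.shape[0] - p, p):
        for x in range(0, up.shape[1] - p, p):
            q = up[y:y+p, x:x+p].ravel()
            best = np.argmin(((lo_patches - q) ** 2).sum(axis=1))
            out[y:y+p, x:x+p] += detail[best].reshape(p, p)
    return out

# Toy usage: train on one image, enlarge another.
rng = np.random.default_rng(0)
lo_patches, detail = extract_pairs(rng.random((64, 64)))
enlarged = super_resolve(rng.random((16, 16)), lo_patches, detail)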
Implicit Probabilistic Models of Human Motion for Synthesis and Tracking
In European Conference on Computer Vision, 2002
"... This paper addresses the problem of probabilistically modeling 3D human motion for synthesis and tracking. Given the high dimensional nature of human motion, learning an explicit probabilistic model from available training data is currently impractical. Instead we exploit methods from texture synthe ..."
Abstract
-
Cited by 201 (4 self)
- Add to MetaCart
(Show Context)
This paper addresses the problem of probabilistically modeling 3D human motion for synthesis and tracking. Given the high dimensional nature of human motion, learning an explicit probabilistic model from available training data is currently impractical. Instead we exploit methods from texture synthesis that treat images as representing an implicit empirical distribution. These methods replace the problem of representing the probability of a texture pattern with that of searching the training data for similar instances of that pattern. We extend this idea to temporal data representing 3D human motion with a large database of example motions. To make the method useful in practice, we must address the problem of efficient search in a large training set.
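The implicit-model idea can be caricatured in a few lines: keep the motion database itself, and predict the next pose by finding windows of stored motion similar to the recent history and sampling a successor frame. The brute-force scan below is precisely the cost that the paper's efficient-search machinery targets; the sampling scheme and all names are my own simplifications.

import numpy as np

def predict_next_pose(history, database, k=5, seed=0):
    # history: (w, d) array of recent poses; database: (n, d) pose sequence.
    rng = np.random.default_rng(seed)
    w = len(history)
    # Distance from the query window to every length-w window that has a
    # successor frame in the database.
    costs = np.array([np.linalg.norm(database[t:t + w] - history)
                      for t in range(len(database) - w)])
    candidates = np.argsort(costs)[:k]   # the k most similar example windows
    t = rng.choice(candidates)           # sample one: a crude draw from the
    return database[t + w]               # implicit empirical distribution

# Toy usage with a synthetic one-joint "motion" signal.
db = np.sin(np.linspace(0, 20, 500))[:, None]   # (500, 1) pose database
next_pose = predict_next_pose(db[100:105], db)  # continue from 5 observed poses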
A survey of free-form object representation and recognition techniques
Computer Vision and Image Understanding, 2001
"... Advances in computer speed, memory capacity, and hardware graphics acceleration have made the interactive manipulation and visualization of complex, detailed (and therefore large) three-dimensional models feasible. These models are either painstakingly designed through an elaborate CAD process or re ..."
Abstract
-
Cited by 200 (1 self)
- Add to MetaCart
(Show Context)
Advances in computer speed, memory capacity, and hardware graphics acceleration have made the interactive manipulation and visualization of complex, detailed (and therefore large) three-dimensional models feasible. These models are either painstakingly designed through an elaborate CAD process or reverse engineered from sculpted prototypes using modern scanning technologies and integration methods. The availability of detailed data describing the shape of an object offers the computer vision practitioner new ways to recognize and localize free-form objects. This survey reviews recent literature on both the 3D model building process and techniques used to match and identify free-form objects from imagery.
Index-driven similarity search in metric spaces
ACM Transactions on Database Systems, 2003
"... Similarity search is a very important operation in multimedia databases and other database applications involving complex objects, and involves finding objects in a data set S similar to a query object q, based on some similarity measure. In this article, we focus on methods for similarity search th ..."
Abstract
-
Cited by 192 (8 self)
- Add to MetaCart
Similarity search is a very important operation in multimedia databases and other database applications involving complex objects, and involves finding objects in a data set S similar to a query object q, based on some similarity measure. In this article, we focus on methods for similarity search that make the general assumption that similarity is represented with a distance metric d. Existing methods for handling similarity search in this setting typically fall into one of two classes. The first directly indexes the objects based on distances (distance-based indexing), while the second is based on mapping to a vector space (mapping-based approach). The main part of this article is dedicated to a survey of distance-based indexing methods, but we also briefly outline how search occurs in mapping-based methods. We also present a general framework for performing search based on distances, and present algorithms for common types of queries that operate on an arbitrary “search hierarchy.” These algorithms can be applied on each of the methods presented, provided a suitable search hierarchy is defined.
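The “search hierarchy” framework admits a compact best-first formulation: a priority queue ordered by a lower bound on distance holds index nodes and data objects alike, and objects can be reported incrementally as they reach the front of the queue. The sketch below is my own illustration of that pattern (any index that supplies children and a distance lower bound fits), not the article's exact formulation.

import heapq
import itertools

def incremental_nn(root, query, children, lower_bound, is_object):
    # Yield (distance, object) pairs in order of increasing distance.
    # children(e): child elements of a node; lower_bound(e, q): a lower bound
    # on the distance from q to anything under e, exact when e is an object.
    tie = itertools.count()  # tie-breaker so heapq never compares elements
    heap = [(lower_bound(root, query), next(tie), root)]
    while heap:
        d, _, e = heapq.heappop(heap)
        if is_object(e):
            yield d, e       # safe to report now: every queued bound is >= d
        else:
            for c in children(e):
                heapq.heappush(heap, (lower_bound(c, query), next(tie), c))

# Toy usage: points on a line grouped into interval buckets.
root = ("node", 0.0, 20.0, [("node", 0.0, 10.0, [1.0, 7.0, 9.0]),
                            ("node", 10.0, 20.0, [12.0, 18.0])])
is_obj = lambda e: isinstance(e, float)
kids = lambda e: e[3]
bound = lambda e, q: (abs(e - q) if isinstance(e, float)
                      else max(e[1] - q, q - e[2], 0.0))
for d, obj in incremental_nn(root, 8.0, kids, bound, is_obj):
    print(obj, d)   # 7.0, 9.0, 12.0, 1.0, 18.0 -- in order of distance from 8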
Real-time texture synthesis by patch-based sampling
ACM Transactions on Graphics, 2001
"... We present a patch-based sampling algorithm for synthesizing textures from an input sample texture. The patch-based sampling algorithm is fast. Using patches of the sample texture as building blocks for texture synthesis, this algorithm makes high-quality texture synthesis a real-time process. For g ..."
Abstract
-
Cited by 173 (12 self)
- Add to MetaCart
We present a patch-based sampling algorithm for synthesizing textures from an input sample texture. The patch-based sampling algorithm is fast. Using patches of the sample texture as building blocks for texture synthesis, this algorithm makes high-quality texture synthesis a real-time process. For generating textures of the same size and comparable (or better) quality, patch-based sampling is orders of magnitude faster than existing texture synthesis algorithms. The patch-based sampling algorithm synthesizes high-quality textures for a wide variety of textures ranging from regular to stochastic. By sampling patches according to a non-parametric estimation of the local conditional MRF density, we avoid mismatching features across patch boundaries. Moreover, the patch-based sampling algorithm remains effective when pixel-based non-parametric sampling algorithms fail to produce good results. For natural textures, the results of the patch-based sampling look subjectively better.
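The sampling step itself is compact enough to sketch: grow the output patch by patch, and at each step sample uniformly among the candidate patches whose boundary zone matches the already-synthesized edge within a tolerance, rather than always taking the single best match. Seam blending and the paper's acceleration structures are omitted; the one-row layout and all names are mine.

import numpy as np

def patch_row(sample, n_patches, p=16, overlap=4, tol=0.1, seed=0):
    # Synthesize a horizontal strip of patches taken from `sample`, matching
    # each new patch's left boundary zone against the previous patch's right.
    rng = np.random.default_rng(seed)
    w = sample.shape[1]
    xs = np.arange(0, w - p)                 # candidate patch origins
    x0 = rng.integers(0, w - p)
    out = [sample[:p, x0:x0 + p]]
    for _ in range(n_patches - 1):
        edge = out[-1][:, -overlap:]         # already-synthesized boundary zone
        costs = np.array([np.linalg.norm(sample[:p, x:x + overlap] - edge)
                          for x in xs])
        ok = np.flatnonzero(costs <= (1 + tol) * costs.min())
        x = int(rng.choice(ok))              # sample within the tolerance band
        out.append(sample[:p, x:x + p])
    # Naive seam handling: keep the earlier patch's pixels in each overlap.
    return np.hstack([out[0]] + [q[:, overlap:] for q in out[1:]])

# Toy usage: a striped sample, synthesized into a wider strip.
tex = (np.indices((32, 256)).sum(axis=0) % 8).astype(float)
strip = patch_row(tex, n_patches=8)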
Mean Shift Based Clustering in High Dimensions: A Texture Classification Example
2003
"... Feature space analysis is the main module in many computer vision tasks. The most popular technique, k-means clustering, however, has two inherent limitations: the clusters are constrained to be spherically symmetric and their number has to be known a priori. In nonparametric clustering methods, lik ..."
Abstract
-
Cited by 137 (3 self)
- Add to MetaCart
(Show Context)
Feature space analysis is the main module in many computer vision tasks. The most popular technique, k-means clustering, however, has two inherent limitations: the clusters are constrained to be spherically symmetric and their number has to be known a priori. In nonparametric clustering methods, like the one based on mean shift, these limitations are eliminated but the amount of computation becomes prohibitively large as the dimension of the space increases. We exploit a recently proposed approximation technique, locality-sensitive hashing (LSH), to reduce the computational complexity of adaptive mean shift. In our implementation of LSH the optimal parameters of the data structure are determined by a pilot learning procedure, and the partitions are data driven. As an application, the performance of mode and k-means based textons are compared in a texture classification study.
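A toy rendition of the combination makes its structure clear: mean shift in which each iteration's neighbor query is answered approximately by random-projection LSH rather than a linear scan. The paper additionally tunes the LSH parameters with a pilot learning procedure and uses data-driven partitions, both omitted here; all names are mine.

import numpy as np

def build_lsh(data, n_tables=8, n_bits=6, seed=0):
    # Hash every point into `n_tables` tables keyed by its sign pattern
    # against random hyperplanes.
    rng = np.random.default_rng(seed)
    tables = []
    for _ in range(n_tables):
        planes = rng.normal(size=(n_bits, data.shape[1]))
        keys = (data @ planes.T > 0) @ (1 << np.arange(n_bits))
        buckets = {}
        for i, k in enumerate(keys):
            buckets.setdefault(int(k), []).append(i)
        tables.append((planes, buckets))
    return tables

def approx_neighbors(x, data, tables):
    # Union of the buckets that x falls into, one per table.
    cand = set()
    for planes, buckets in tables:
        key = int((x @ planes.T > 0) @ (1 << np.arange(planes.shape[0])))
        cand.update(buckets.get(key, []))
    return data[sorted(cand)] if cand else data  # rare fallback: full scan

def mean_shift_mode(x, data, tables, bandwidth=0.5, iters=30):
    # Iterate the mean-shift update using only the hashed candidates.
    for _ in range(iters):
        nbrs = approx_neighbors(x, data, tables)
        w = np.exp(-((nbrs - x) ** 2).sum(axis=1) / (2 * bandwidth ** 2))
        x = (w[:, None] * nbrs).sum(axis=0) / w.sum()
    return x

# Toy usage: two well-separated Gaussian blobs; starting from a point in the
# first blob, mean shift converges to (approximately) that blob's mode.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (500, 5)), rng.normal(3, 0.3, (500, 5))])
mode = mean_shift_mode(data[0], data, build_lsh(data))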