Results 1–10 of 23
Real-time texture synthesis by patch-based sampling
ACM Transactions on Graphics, 2001
Abstract

Cited by 170 (12 self)
We present a patch-based sampling algorithm for synthesizing textures from an input sample texture. The patch-based sampling algorithm is fast. Using patches of the sample texture as building blocks for texture synthesis, this algorithm makes high-quality texture synthesis a real-time process. For generating textures of the same size and comparable (or better) quality, patch-based sampling is orders of magnitude faster than existing texture synthesis algorithms. The patch-based sampling algorithm synthesizes high-quality textures for a wide variety of textures ranging from regular to stochastic. By sampling patches according to a nonparametric estimation of the local conditional MRF density, we avoid mismatching features across patch boundaries. Moreover, the patch-based sampling algorithm remains effective when pixel-based nonparametric sampling algorithms fail to produce good results. For natural textures, the results of patch-based sampling look subjectively better.
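The patch-pasting idea can be illustrated with a minimal grayscale sketch (not the paper's exact algorithm: the `synthesize` helper, patch size, overlap width, and tolerance below are all illustrative assumptions). Candidate patches whose overlap region matches the already-synthesized output within a tolerance of the best score are collected, and one is sampled at random; this sampling step is what avoids mismatched features across patch boundaries:

```python
import numpy as np

def synthesize(sample, out_size, patch=8, overlap=2, tol=0.1, rng=None):
    """Toy patch-based synthesis: grow the output top-to-bottom, left-to-right,
    pasting sample patches whose overlap with the already-synthesized region
    scores within `tol` of the best candidate (chosen at random among those)."""
    rng = np.random.default_rng(rng)
    H, W = sample.shape
    step = patch - overlap
    out = np.zeros((out_size, out_size))
    coords = [(y, x) for y in range(H - patch + 1) for x in range(W - patch + 1)]
    for oy in range(0, out_size - patch + 1, step):
        for ox in range(0, out_size - patch + 1, step):
            if oy == 0 and ox == 0:
                y, x = coords[rng.integers(len(coords))]   # free choice for the seed patch
            else:
                errs = []
                for y, x in coords:                        # SSD over the overlap region
                    cand = sample[y:y + patch, x:x + patch]
                    e = 0.0
                    if ox > 0:
                        e += ((cand[:, :overlap] - out[oy:oy + patch, ox:ox + overlap]) ** 2).sum()
                    if oy > 0:
                        e += ((cand[:overlap, :] - out[oy:oy + overlap, ox:ox + patch]) ** 2).sum()
                    errs.append(e)
                errs = np.asarray(errs)
                near_best = np.flatnonzero(errs <= errs.min() * (1 + tol) + 1e-12)
                y, x = coords[near_best[rng.integers(len(near_best))]]
            out[oy:oy + patch, ox:ox + patch] = sample[y:y + patch, x:x + patch]
    return out
```

Sampling uniformly from the near-best set, rather than always taking the single best match, is what keeps the output from degenerating into a verbatim copy of the sample.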
Texture Classification Using Spectral Histograms
2000
Abstract

Cited by 39 (4 self)
Based on a local spatial/frequency representation, we propose a spectral histogram as a feature statistic for characterizing texture appearance. The spectral histogram consists of marginal distributions of responses of a bank of filters and implicitly encodes the structure of images. The distance between two spectral histograms is measured using the χ² statistic. The spectral histogram with the associated distance measure exhibits several properties that are necessary for texture discrimination and classification. The spectral histogram provides a generic feature for texture as well as non-texture images, where the uniform image is a special case with a unique pattern. The spectral histogram is a nonlinear operator, consistent with the nonlinearity in human perception. Our classification experiments reveal that it generalizes well even with a small number of training samples and that the classification result does not depend on a particular form of distance measure.
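As a minimal sketch of the feature (the three-filter bank, bin count, and value range below are illustrative assumptions, not the paper's filter set): concatenate the marginal histograms of a few filter responses, and compare two such feature vectors with the χ² statistic.

```python
import numpy as np

# a tiny filter bank: intensity plus horizontal/vertical differences
FILTERS = {
    "intensity": lambda im: im,
    "grad_x":    lambda im: im - np.roll(im, 1, axis=1),
    "grad_y":    lambda im: im - np.roll(im, 1, axis=0),
}

def spectral_histogram(im, bins=16, lo=-1.0, hi=1.0):
    """Concatenate the normalized marginal histograms of each filter response."""
    hs = []
    for f in FILTERS.values():
        h, _ = np.histogram(f(im), bins=bins, range=(lo, hi))
        hs.append(h / h.sum())
    return np.concatenate(hs)

def chi2(h1, h2, eps=1e-12):
    """Chi-square distance between two spectral histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

Identical images give distance exactly zero; images with different filter statistics give strictly positive distances.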
Binary Partitioning, Perceptual Grouping, and Restoration with Semidefinite Programming
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
Abstract

Cited by 35 (6 self)
We introduce a novel optimization method based on semidefinite programming relaxations to the field of computer vision and apply it to the combinatorial problem of minimizing quadratic functionals in binary decision variables subject to linear constraints.
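The class of problems described, quadratic functionals in binary decision variables, and the standard lifting behind such semidefinite relaxations can be sketched as follows (illustrative notation, not necessarily the paper's exact formulation; linear constraints can be lifted to linear constraints on the matrix variable in the same way):

```latex
% combinatorial problem: quadratic functional in binary decision variables
\min_{x \in \{-1,+1\}^n} \; x^\top Q x
% lift X = x x^\top: the objective becomes linear in X, and dropping the
% non-convex rank-one constraint on X yields the semidefinite relaxation
\min_{X \succeq 0} \; \operatorname{Tr}(Q X)
\quad \text{s.t.} \quad X_{ii} = 1, \; i = 1, \dots, n
```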
Natural Image Statistics for Natural Image Segmentation
International Journal of Computer Vision, 2003
Abstract

Cited by 32 (1 self)
Building on recent progress in modeling filter response statistics of natural images, we integrate a statistical model into a variational framework for image segmentation. Incorporated in a sound probabilistic distance measure, the model drives level sets toward meaningful segmentations of complex textures and natural scenes. Since each region comprises only two model parameters, the approach is computationally efficient and enables the application of variational segmentation to a considerably larger class of real-world images. We validate the statistical basis of our approach on thousands of natural images and demonstrate that our model outperforms recent variational segmentation methods based on second-order statistics.
From information scaling of natural images to regimes of statistical models
Quarterly of Applied Math, 2008
Abstract

Cited by 23 (9 self)
Computer vision can be considered a highly specialized data collection and data analysis problem. We need to understand the special properties of image data in order to construct statistical models for representing the wide variety of image patterns. One special property of vision that distinguishes it from other sensory data, such as speech, is that distance or scale plays a profound role in image data. More specifically, visual objects and patterns can appear at a wide range of distances or scales, and the same visual pattern appearing at different distances or scales produces image data with different statistical properties, and thus entails different regimes of statistical models. In particular, we show that the entropy rate of the image data changes with the viewing distance (as well as the camera resolution). Moreover, the inferential uncertainty changes with viewing distance too. We call these changes information scaling. From this perspective, we examine both empirically and theoretically two prominent and yet largely isolated research themes in the image modeling literature, namely wavelet sparse coding and Markov random fields. Our results indicate that the two models are appropriate in two different entropy regimes: sparse coding targets the ...
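The entropy change under scaling is easy to see empirically. The toy sketch below is illustrative only: an i.i.d.-noise "texture", block averaging as a stand-in for increased viewing distance, and a marginal histogram entropy rather than a true entropy rate. Averaging concentrates the intensity histogram, so the measured entropy drops as the block size grows:

```python
import numpy as np

def entropy(im, bins=32):
    """Empirical entropy (bits) of the pixel-intensity histogram."""
    h, _ = np.histogram(im, bins=bins, range=(0.0, 1.0))
    p = h / h.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def downscale(im, k):
    """Block-average by factor k, mimicking a k-times larger viewing distance."""
    H, W = im.shape
    return im[:H - H % k, :W - W % k].reshape(H // k, k, W // k, k).mean(axis=(1, 3))

# for i.i.d. noise, averaging shrinks the variance, so entropy falls with distance
rng = np.random.default_rng(0)
im = rng.random((256, 256))
print([round(entropy(downscale(im, k)), 2) for k in (1, 2, 4, 8)])
```

For structured or sparse images the behavior differs, which is exactly the regime change the abstract describes.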
Primal sketch: Integrating structure and texture
Computer Vision and Image Understanding, 2007
Abstract

Cited by 16 (7 self)
This article proposes a generative image model, which we call the “primal sketch,” following Marr’s insight and terminology. This model combines two prominent classes of generative models, namely the sparse coding model and the Markov random field model, for representing geometric structures and stochastic textures respectively. Specifically, the image lattice is divided into a structure domain and a texture domain. The sparse coding model is used to represent image intensities on the structure domain, where edge and ridge segments are modeled by image coding functions with explicit geometric and photometric parameters. The edge and ridge segments form a sketch graph, which is governed by a simple spatial prior model. The Markov random field model is used to summarize image intensities on the texture domain, where the texture patterns are characterized by feature statistics in the form of marginal histograms of responses from a set of linear filters. The Markov random fields inpaint the texture domain while interpolating the structure domain seamlessly. We propose a sketch pursuit algorithm for model fitting. We show a number of experiments on real images to demonstrate the model and the algorithm.
Texture Synthesis and Nonparametric Resampling of Random Fields
Abstract

Cited by 14 (0 self)
This paper introduces a nonparametric algorithm for bootstrapping a stationary random field and proves certain consistency properties of the algorithm for the case of mixing random fields. The motivation for this paper comes from relating a heuristic texture synthesis algorithm popular in computer vision to the general nonparametric bootstrap of stationary random fields. We give a formal resampling scheme for the heuristic texture algorithm and prove that it produces a consistent estimate of the joint distribution of pixels in a window of a certain size under mixing and regularity conditions on the random field. The joint distribution of pixels is the quantity of interest here because theories of human perception of texture suggest that two textures with the same joint distribution of pixel values in a suitably chosen window will appear similar to a human. Thus we provide theoretical justification for an algorithm that has already been very successful in practice, and suggest an explanation for its perceptually good results.
AMS 2000 subject classifications: Primary 62M40; Secondary 62G09.
Key words and phrases: Bootstrap, Markov random fields, Markov mesh models, ...
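The resampling scheme is easiest to see in one dimension. The sketch below is an illustrative exact-match variant (the `resample_sequence` name, window length, and fallback rule are assumptions; real texture algorithms match neighborhoods approximately, in 2D): it extends a sequence by sampling source values conditioned on the last w outputs, which is exactly a nonparametric bootstrap of the conditional distribution.

```python
import numpy as np

def resample_sequence(x, n, w=2, rng=None):
    """Nonparametric bootstrap of a stationary sequence: repeatedly sample a
    source value whose length-w left context matches the current output suffix
    exactly, falling back to an unconditional draw when no context matches."""
    rng = np.random.default_rng(rng)
    x = list(x)
    out = list(x[:w])                        # seed with the first w source values
    while len(out) < n:
        ctx = out[-w:]
        cands = [x[i + w] for i in range(len(x) - w) if x[i:i + w] == ctx]
        out.append(cands[rng.integers(len(cands))] if cands
                   else x[rng.integers(len(x))])
    return out

# a periodic "texture": the resampler reproduces its transition structure
src = [0, 1, 2, 0, 1, 2, 0, 1, 2]
print(resample_sequence(src, 12, w=2, rng=0))
```

Because the window-w joint statistics of the output are estimated from the source, the resampled sequence stays perceptually faithful to it, mirroring the consistency result the paper proves.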
Asymptotically Admissible Texture Synthesis
In International Workshop on Statistical and Computational Theories of Vision, 2001
Abstract

Cited by 14 (1 self)
Recently there has been a resurgence of interest in example-based texture analysis and synthesis in both computer vision and computer graphics. While work in computer vision is concerned with learning accurate texture models, research in graphics is aimed at effective algorithms for texture synthesis without necessarily obtaining an explicit texture model. This paper makes three contributions to this recent excitement. First, we introduce a theoretical framework for designing and analyzing texture sampling algorithms. This framework, built upon the mathematical definition of textures, measures a texture sampling algorithm by its admissibility, effectiveness, and sampling speed. Second, we compare and analyze texture sampling algorithms based on admissibility and effectiveness. In particular, we propose different design criteria for texture analysis algorithms in computer vision and texture synthesis algorithms in computer graphics. Finally, we develop a novel texture synthesis algorithm which samples from a subset of the Julesz ensemble by pasting texture patches from the sample texture. A key feature of our algorithm is that it can synthesize high-quality textures extremely fast. On a mid-level PC we can synthesize a 512 × 512 texture from a 64 × 64 sample in just 0.03 seconds. This algorithm has been tested through extensive experiments and we report sample results from our experiments.
Content-Based Image Categorization and Retrieval Using Neural Networks
In IEEE International Conference on Multimedia and Expo, Beijing, 2007
Abstract

Cited by 3 (0 self)
We propose a neural-network-based method for organizing images for content-based image retrieval. We use spectral histogram features, i.e., the histograms of filtered images, to capture the spatial relationships among pixels as well as the global appearance of images. We then find the optimal combination of spectral histogram features using optimal factor analysis to reduce the dimension of the features and maximize the discrimination. The reduced features are then used as input to a multilayer perceptron, which is trained with backpropagation to categorize images based on content. For a query image, images are retrieved from different classes based on the categorization probability of the query image. Experimental results on a subset of the Corel dataset demonstrate the effectiveness of the proposed method, and comparisons show that it gives a significant improvement over other methods.
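The categorization stage can be sketched with a minimal one-hidden-layer perceptron trained by backpropagation. This is illustrative only, not the paper's architecture: sigmoid units, the cross-entropy output gradient, a toy AND problem standing in for reduced spectral-histogram features, and all names and hyperparameters (`train_mlp`, hidden width, learning rate) are assumptions.

```python
import numpy as np

def train_mlp(X, y, hidden=4, lr=0.5, epochs=3000, seed=0):
    """Minimal one-hidden-layer MLP (sigmoid units) trained with
    backpropagation; returns a predictor mapping inputs to probabilities."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1));          b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)               # forward pass
        p = sig(h @ W2 + b2)
        g2 = p - y                         # output gradient (cross-entropy + sigmoid)
        g1 = (g2 @ W2.T) * h * (1.0 - h)   # backpropagated hidden-layer gradient
        W2 -= lr * h.T @ g2 / len(X); b2 -= lr * g2.mean(0)
        W1 -= lr * X.T @ g1 / len(X); b1 -= lr * g1.mean(0)
    return lambda Xn: sig(sig(Xn @ W1 + b1) @ W2 + b2)

# toy stand-in for reduced features: learn the AND of two binary inputs
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [0.], [0.], [1.]])
predict = train_mlp(X, y)
```

The class probabilities returned by `predict` play the role of the categorization probabilities used to rank retrieved images.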
A Survey of Manifold-Based Learning Methods (book chapter)
Abstract

Cited by 1 (0 self)
We review the ideas, algorithms, and numerical performance of manifold-based machine learning and dimension reduction methods. The representative methods include locally linear embedding (LLE), ISOMAP, Laplacian eigenmaps, Hessian eigenmaps, local tangent space alignment (LTSA), and charting. We describe the insights from these developments, as well as new opportunities for both researchers and practitioners. Potential applications in image and sensor data are illustrated. This chapter is based on an invited survey presentation delivered by Huo at the 2004 INFORMS Annual Meeting, which ...
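One of the listed methods, Laplacian eigenmaps, is compact enough to sketch directly. The version below is a simplification (an assumption on my part: binary k-NN weights instead of heat-kernel weights, and a dense eigensolver rather than a sparse one):

```python
import numpy as np

def laplacian_eigenmaps(X, k=5, dim=2):
    """Minimal Laplacian eigenmaps: build a symmetric k-NN graph with binary
    weights, form the graph Laplacian L = D - W, and embed each point with the
    eigenvectors of the `dim` smallest non-trivial eigenvalues of L."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(D2[i])[1:k + 1]] = 1.0            # k nearest neighbors, skip self
    W = np.maximum(W, W.T)                                # symmetrize the graph
    L = np.diag(W.sum(1)) - W
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]                             # drop the constant eigenvector

# noisy circle: the embedding should reflect the points' angular ordering
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 60))
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.normal(size=(60, 2))
Y = laplacian_eigenmaps(X, k=4, dim=2)
print(Y.shape)
```

For a connected graph the smallest eigenvalue is zero with a constant eigenvector, which is why the embedding starts at the second eigenvector.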