Results 1–10 of 15
Fast texture synthesis using tree-structured vector quantization
, 2000
"... Figure 1: Our texture generation process takes an example texture patch (left) and a random noise (middle) as input, and modifies this random noise to make it look like the given example texture. The synthesized texture (right) can be of arbitrary size, and is perceived as very similar to the given ..."
Abstract

Cited by 562 (12 self)
Figure 1: Our texture generation process takes an example texture patch (left) and a random noise (middle) as input, and modifies this random noise to make it look like the given example texture. The synthesized texture (right) can be of arbitrary size, and is perceived as very similar to the given example. Using our algorithm, textures can be generated within seconds, and the synthesized results are always tileable. Texture synthesis is important for many applications in computer graphics, vision, and image processing. However, it remains difficult to design an algorithm that is both efficient and capable of generating high quality results. In this paper, we present an efficient algorithm for realistic texture synthesis. The algorithm is easy to use and requires only a sample texture as input. It generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. This permits us to apply texture synthesis to problems where it has traditionally been considered impractical. In particular, we have applied it to constrained synthesis for image editing and temporal texture generation. Our algorithm is derived from Markov Random Field texture models and generates textures through a deterministic searching process. We accelerate this synthesis process using tree-structured vector quantization.
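The deterministic search at the heart of this approach can be illustrated with a minimal 1-D sketch: grow the output by matching the most recent neighborhood against every position in the example and copying the sample that followed the best match. Note this is only an illustration; the paper works on 2-D pixel neighborhoods and accelerates the search with tree-structured vector quantization, whereas the brute-force search and all names below are our own.

```python
# Minimal 1-D sketch of neighborhood-matching texture synthesis.
# The paper uses 2-D neighborhoods and a tree-structured VQ to speed
# up the search; here a brute-force scan stands in for clarity.

def synthesize(example, out_len, k=3):
    """Grow a sequence whose local k-neighborhoods match the example."""
    out = list(example[:k])                # seed with the first k samples
    while len(out) < out_len:
        window = out[-k:]
        # deterministic search: best-matching neighborhood in the example
        best_i, best_d = None, float("inf")
        for i in range(len(example) - k):
            d = sum((a - b) ** 2 for a, b in zip(example[i:i + k], window))
            if d < best_d:
                best_i, best_d = i, d
        out.append(example[best_i + k])    # copy the sample that followed
    return out

example = [0, 1, 2, 3, 0, 1, 2, 3]         # a tileable "texture"
print(synthesize(example, 12))             # continues the periodic pattern
```

Because the output is built only from samples that actually occur in the example, local statistics of the result match the input, which is the Markov Random Field intuition the paper builds on.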
Maximum Conditional Likelihood via Bound Maximization and the CEM Algorithm
 IN ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 11
, 1998
"... We present the CEM (Conditional Expectation Maximization) algorithm as an extension of the EM (Expectation Maximization) algorithm to conditional density estimation under missing data. A bounding and maximization process is given to specifically optimize conditional likelihood instead of the usual j ..."
Abstract

Cited by 61 (7 self)
We present the CEM (Conditional Expectation Maximization) algorithm as an extension of the EM (Expectation Maximization) algorithm to conditional density estimation under missing data. A bounding and maximization process is given to specifically optimize conditional likelihood instead of the usual joint likelihood. We apply the method to conditioned mixture models and use bounding techniques to derive the model's update rules. Monotonic convergence, computational efficiency and regression results superior to EM are demonstrated.
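The quantity CEM optimizes, the conditional likelihood p(y|x) = p(x, y) / p(x) of a joint mixture, can be evaluated directly. The toy below only contrasts conditional with joint density for a 2-component Gaussian mixture; the parameters are illustrative and no bounding or update step of the actual algorithm is shown.

```python
import math

# Toy evaluation of conditional vs. joint likelihood for a
# 2-component Gaussian mixture over (x, y). All parameters are
# illustrative, not taken from the paper.

def gauss(v, mu, var):
    return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def joint(x, y, comps):
    # comps: list of (weight, mu_x, mu_y, shared variance)
    return sum(w * gauss(x, mx, s) * gauss(y, my, s) for w, mx, my, s in comps)

def conditional(x, y, comps):
    # p(y | x) = p(x, y) / p(x), with p(x) the mixture marginal
    marg_x = sum(w * gauss(x, mx, s) for w, mx, my, s in comps)
    return joint(x, y, comps) / marg_x

comps = [(0.5, -1.0, -1.0, 0.5), (0.5, 1.0, 1.0, 0.5)]
print(conditional(1.0, 1.0, comps))   # high: y agrees with the component near x
print(conditional(1.0, -1.0, comps))  # low: y belongs to the other component
```

Maximizing the conditional term directly, rather than the joint, is what distinguishes CEM from standard EM for regression-style tasks.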
Action-Reaction Learning: Automatic Visual Analysis and Synthesis of Interactive Behaviour
 in Proc. International Conference on Vision Systems
, 1999
"... We propose ActionReaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between an action and its reaction by observing time sequences. We apply this method to analyze human interaction and to subs ..."
Abstract

Cited by 44 (3 self)
We propose Action-Reaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between an action and its reaction by observing time sequences. We apply this method to analyze human interaction and to subsequently synthesize human behaviour. Using a time series of perceptual measurements, a system automatically uncovers correlations between past gestures from one human participant (an action) and a subsequent gesture (a reaction) from another participant. A probabilistic model is trained from data of the human interaction using a novel estimation technique, Conditional Expectation Maximization (CEM). The estimation uses general bounding and maximization to monotonically find the maximum conditional likelihood solution. The learning system drives a graphical interactive character which probabilistically predicts a likely response to a user's behaviour and performs it interactively. Thus, after analyzing human interaction in a pair of participants, the system is able to replace one of them and interact with a single remaining user.
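The action-to-reaction mapping can be caricatured with a discrete counting model: observe (action, reaction) gesture pairs and, for each action, respond with the most frequent reaction. This is a drastic simplification; the paper learns a conditional mixture over continuous perceptual features with CEM, and all labels below are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy action -> reaction learner: count observed reactions per action
# and reply with the most frequent one. A crude discrete stand-in for
# the paper's CEM-trained conditional mixture model.

def train(pairs):
    counts = defaultdict(Counter)
    for action, reaction in pairs:
        counts[action][reaction] += 1
    # keep only the most likely reaction for each action
    return {a: c.most_common(1)[0][0] for a, c in counts.items()}

def react(model, action):
    return model.get(action)

observed = [("wave", "wave"), ("wave", "wave"), ("bow", "nod"),
            ("wave", "nod"), ("bow", "nod")]
model = train(observed)
print(react(model, "wave"))  # prints "wave", the most frequent reaction
```

Replacing one participant then amounts to feeding the remaining user's gestures through `react` in a loop, which is the interactive-character idea described above.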
Texture Modeling and Synthesis using Joint Statistics of Complex Wavelet Coefficients
 IN IEEE WORKSHOP ON STATISTICAL AND COMPUTATIONAL THEORIES OF VISION, FORT COLLINS
, 1999
"... We present a statistical characterization of texture images in the context of an overcomplete complex wavelet transform. The characterization is based on empirical observations of statistical regularities in such images, and parameterized by (1) the local autocorrelation of the coefficients in each ..."
Abstract

Cited by 32 (3 self)
We present a statistical characterization of texture images in the context of an overcomplete complex wavelet transform. The characterization is based on empirical observations of statistical regularities in such images, and parameterized by (1) the local autocorrelation of the coefficients in each subband; (2) both the local autocorrelation and cross-correlation of coefficient magnitudes at other orientations and spatial scales; and (3) the first few moments of the image pixel histogram. We develop an efficient algorithm for synthesizing random images subject to these constraints using alternated projections, and demonstrate its effectiveness on a wide range of synthetic and natural textures. In particular, we show that many important structural elements in textures (e.g., edges, repeated patterns or alternated patches of simpler texture), can be captured through joint second-order statistics of the coefficient magnitudes. We also show the flexibility of the representation, by applying it to a variety...
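Synthesis by alternated projections can be sketched in miniature: start from noise and repeatedly project onto each statistical constraint set in turn. Here the "statistics" are just a target mean and variance of a 1-D signal, standing in for the paper's wavelet-domain correlations; the sketch only shows the projection mechanism, not the actual constraints.

```python
import random

# POCS-style sketch: alternately project a noise signal onto two
# constraint sets (target mean, target variance). Illustrative only;
# the paper's constraints are correlations of complex wavelet
# coefficient magnitudes.

def project_mean(x, target):
    shift = target - sum(x) / len(x)
    return [v + shift for v in x]

def project_var(x, target):
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    scale = (target / var) ** 0.5 if var > 0 else 1.0
    return [mean + (v - mean) * scale for v in x]  # preserves the mean

random.seed(0)
x = [random.gauss(0, 1) for _ in range(256)]
for _ in range(10):                      # alternate the projections
    x = project_var(project_mean(x, 0.5), 2.0)

mean = sum(x) / len(x)
var = sum((v - mean) ** 2 for v in x) / len(x)
# both constraints are satisfied after the iterations
```

With many interacting constraints (as in the paper) the projections do not commute, which is why the iteration is repeated until the statistics converge.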
Texture Representation and Synthesis Using Correlation of Complex Wavelet Coefficient Magnitudes
 Tech. Rep. 54, Consejo Superior de Investigaciones Científicas (CSIC)
, 1999
"... We present a statistical characterization of texture images in the context of an overcomplete complex wavelet transform. The characterization is based on empirical observations of statistical regularities in such images, and parameterized by (1) the local autocorrelation of the coefficients in each ..."
Abstract

Cited by 15 (3 self)
We present a statistical characterization of texture images in the context of an overcomplete complex wavelet transform. The characterization is based on empirical observations of statistical regularities in such images, and parameterized by (1) the local autocorrelation of the coefficients in each subband; (2) both the local autocorrelation and cross-correlation of coefficient magnitudes at other orientations and spatial scales; and (3) the first few moments of the image pixel histogram. We develop an efficient algorithm for synthesizing random images subject to these constraints using alternated projections, and demonstrate its effectiveness on a wide range of synthetic and natural textures. We also show the flexibility of the representation, by applying it to a variety of tasks which can be viewed as constrained image synthesis problems. Vision is arguably our most important sensory system, judging from both the ubiquity of visual forms of communication, and the large proportion of ...
Action-Reaction Learning: Analysis and Synthesis of Human Behaviour
 IEEE WORKSHOP ON THE INTERPRETATION OF VISUAL MOTION
, 1998
"... We propose ActionReaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between an action and its reaction by observing time sequences. We apply this methodto analyze human interaction and to subsequ ..."
Abstract

Cited by 15 (1 self)
We propose Action-Reaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between an action and its reaction by observing time sequences. We apply this method to analyze human interaction and to subsequently synthesize human behaviour. Using a time series of perceptual measurements, a system automatically uncovers a mapping between gestures from one human participant (an action) and a subsequent gesture (a reaction) from another participant. A probabilistic model is trained from data of the human interaction using a novel estimation technique, Conditional Expectation Maximization (CEM). The system drives a graphical interactive character which probabilistically predicts the most likely response to the user's behaviour and performs it interactively. Thus, after analyzing human interaction in a pair of participants, the system is able to replace one of them and interact with a single remaining user.
Lossy Compression of Grayscale Document Images by Adaptive-Offset Quantization
, 2001
"... This paper describes an adaptiveoffset quantization scheme and considers its application to the lossy compression of grayscale document images. The technique involves scalarquantizing and entropycoding pixels sequentially, such that the quantizer's offset is always chosen to minimize the exp ..."
Abstract

Cited by 1 (0 self)
This paper describes an adaptive-offset quantization scheme and considers its application to the lossy compression of grayscale document images. The technique involves scalar-quantizing and entropy-coding pixels sequentially, such that the quantizer's offset is always chosen to minimize the expected number of bits emitted for each pixel, where the expectation is based on the predictive distribution used for entropy coding. To accomplish this, information is fed back from the entropy coder's statistical modeling unit to the quantizer. This feedback path is absent in traditional compression schemes. Encouraging but preliminary experimental results are presented comparing the technique with JPEG and with fixed-offset quantization on a scanned grayscale text image. Keywords: document image compression, quantization, entropy coding, arithmetic coding [To appear in Proceedings of IS&T/SPIE Electronic Imaging 2001: Document Recognition and Retrieval VIII, January 2001.] 1. INTRODUCTION Gra...
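The offset selection can be sketched directly: for each candidate offset of a uniform scalar quantizer, compute the expected code length E[-log2 p(bin)] under the coder's predictive distribution and keep the offset that minimizes it. The step size and the toy distribution below are illustrative, not taken from the paper.

```python
import math

# Sketch of adaptive-offset scalar quantization: slide the quantizer's
# offset so the expected code length under the coder's predictive
# distribution is minimized. Step size and pmf are illustrative.

def expected_bits(step, offset, pmf):
    # pmf maps integer gray levels to probabilities
    bins = {}
    for v, p in pmf.items():
        b = (v - offset) // step              # bin index for value v
        bins[b] = bins.get(b, 0.0) + p
    # expected bits = sum over values of p(v) * -log2 p(bin(v))
    return sum(p * -math.log2(bins[(v - offset) // step])
               for v, p in pmf.items())

def best_offset(step, pmf):
    return min(range(step), key=lambda off: expected_bits(step, off, pmf))

# predictive model concentrated around gray level 200 (e.g. background)
pmf = {198: 0.2, 199: 0.3, 200: 0.3, 201: 0.2}
off = best_offset(8, pmf)
# the chosen offset puts 198..201 into a single bin, so the coder
# expects to spend ~0 bits for such pixels
```

This is the feedback path the abstract describes: the predictive distribution from the statistical modeling unit drives the quantizer's offset choice, rather than the quantizer running open-loop.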
Two-Stage Lossy/Lossless Compression of Grayscale Document Images
 Proceedings of the Fifth International Symposium on Mathematical Morphology
, 2000
"... . This paper describes a twostage method of document image compression wherein a grayscale document image is rst processed to improve its compressibility, then losslessly compressed. The initial processing involves hierarchical, coarsetone morphological operations designed to combat the noiselike ..."
Abstract

Cited by 1 (1 self)
This paper describes a two-stage method of document image compression wherein a grayscale document image is first processed to improve its compressibility, then losslessly compressed. The initial processing involves hierarchical, coarse-to-fine morphological operations designed to combat the noise-like variability of the low-order bits while attempting to preserve or even improve intelligibility. The result of this stage is losslessly compressed by an arithmetic coder that uses a mixture model to derive context-conditional gray-level probabilities. The lossless stage is compared experimentally with several reference methods, and is found to be competitive at all rates. The overall system is found to be comparable with JPEG in terms of mean-square error performance, but appears to outperform JPEG in terms of subjectively judged document image intelligibility. Key words: document image compression, image morphology, arithmetic coding, multiresolution, Gaussian mixtures [Appears in Mathem...
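The two-stage idea, reduce noise-like variability first, then compress losslessly, can be demonstrated with a crude stand-in: mask the low-order bits (in place of the paper's morphological processing) and measure the drop in zeroth-order entropy (in place of the actual arithmetic coder). Everything below is illustrative.

```python
import math
from collections import Counter

# Sketch of the two-stage method: stage 1 removes low-order-bit noise
# (here by masking, a crude stand-in for morphological processing);
# stage 2 compresses losslessly (here approximated by the zeroth-order
# entropy of the pixel stream). Illustrative only.

def entropy(values):
    """Zeroth-order entropy in bits per symbol."""
    n = len(values)
    return sum(-c / n * math.log2(c / n) for c in Counter(values).values())

def mask_low_bits(pixels, bits=2):
    return [p & ~((1 << bits) - 1) for p in pixels]

# flat background around gray 200 plus uniform noise in the low 2 bits
noisy = [200 + (i * 7) % 4 for i in range(64)]
print(entropy(noisy))                  # 2.0: the noise costs 2 bits/pixel
print(entropy(mask_low_bits(noisy)))   # 0.0: the variability is removed
```

The paper's morphological stage is far more careful than bit masking, since it must preserve intelligibility of text, but the compressibility payoff it targets is the same.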