Results 1–10 of 1,385,235
Fast texture synthesis using tree-structured vector quantization
2000
"... Figure 1: Our texture generation process takes an example texture patch (left) and a random noise (middle) as input, and modifies this random noise to make it look like the given example texture. The synthesized texture (right) can be of arbitrary size, and is perceived as very similar to the given ..."
Cited by 559 (12 self)
Field texture models and generates textures through a deterministic searching process. We accelerate this synthesis process using tree-structured vector quantization.
Visual categorization with bags of keypoints
In Workshop on Statistical Learning in Computer Vision, ECCV, 2004
"... Abstract. We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of im ..."
Cited by 1004 (14 self)
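The bag-of-keypoints method quoted above reduces to two steps: quantize local descriptors against a learned codebook, then histogram the codeword assignments into a fixed-length image feature. A minimal NumPy sketch, assuming a plain k-means codebook (the codebook size and random descriptors are illustrative, not from the paper):

```python
import numpy as np

def build_codebook(descriptors, k=8, iters=20, seed=0):
    """Plain k-means over descriptor vectors: the vector quantization step."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bag_of_keypoints(descriptors, centers):
    """Normalized histogram of codeword assignments for one image."""
    dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The resulting histogram can be fed to any standard classifier, which is what gives the method its generalization across within-class variation.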
Video Google: A text retrieval approach to object matching in videos
In ICCV, 2003
"... We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, ill ..."
Cited by 1636 (42 self)
computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching on two full-length feature films.
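The inverted-file step mentioned above is straightforward to sketch: map each visual word to the frames containing it, then rank frames by how many of the query's words they share. A minimal stdlib sketch (the frame names and word IDs are made-up examples, and the ranking here is a plain vote count rather than the paper's tf-idf weighting):

```python
from collections import Counter, defaultdict

def build_inverted_file(frame_words):
    """Map each visual word -> set of frames in which it occurs."""
    index = defaultdict(set)
    for frame, words in frame_words.items():
        for w in words:
            index[w].add(frame)
    return index

def rank_frames(index, query_words):
    """Rank frames by the number of query words they contain."""
    votes = Counter()
    for w in query_words:
        for frame in index.get(w, ()):
            votes[frame] += 1
    return [frame for frame, _ in votes.most_common()]
```

Because each query word touches only its own posting list, retrieval time depends on the query size rather than the number of frames, which is what makes the lookup effectively immediate.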
Similarity of Color Images
1995
"... We describe two new color indexing techniques. The first one is a more robust version of the commonly used color histogram indexing. In the index we store the cumulative color histograms. The L 1 , L 2 , or L1 distance between two cumulative color histograms can be used to define a similarity mea ..."
Cited by 497 (2 self)
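The cumulative-histogram index described in the snippet can be sketched directly: bin a color channel, normalize, cumulate, and compare indexes with an Lp distance. A minimal NumPy sketch (the bin count and value range are illustrative assumptions):

```python
import numpy as np

def cumulative_histogram(channel, bins=16):
    """Cumulative color histogram of one image channel with values in [0, 1]."""
    hist, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    hist = hist / hist.sum()   # normalize so images of any size compare
    return np.cumsum(hist)     # store the *cumulative* histogram

def distance(a, b, order=1):
    """L1 (order=1), L2 (order=2), or L-infinity (order=np.inf) distance."""
    if order == np.inf:
        return np.abs(a - b).max()
    return (np.abs(a - b) ** order).sum() ** (1.0 / order)
```

Cumulating spreads each pixel's contribution across all bins above its value, which is what makes the index more robust to small color shifts than a raw histogram.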
Linear spatial pyramid matching using sparse coding for image classification
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009
"... Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n 2 ∼ n 3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algo ..."
Cited by 496 (20 self)
the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multiscale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably
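The recipe in this snippet, vector quantization generalized to sparse coding followed by spatial max pooling, can be sketched in a few lines. The one-step soft-threshold coder below is an illustrative stand-in for a real LASSO solver, and the dictionary is a random placeholder rather than a learned one:

```python
import numpy as np

def sparse_codes(descriptors, dictionary, lam=0.1):
    """One-step sparse coding: project onto the dictionary atoms,
    then soft-threshold. A stand-in for an iterative LASSO solver."""
    z = descriptors @ dictionary.T
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def max_pool(codes):
    """Max pooling over a spatial region: keep each code dimension's
    largest response, yielding one vector per region."""
    return np.abs(codes).max(axis=0)
```

Pooling the codes over the cells of a spatial pyramid and concatenating the results gives the feature that the paper pairs with a linear, rather than kernel, SVM.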
Loopy belief propagation for approximate inference: An empirical study
In Proceedings of Uncertainty in AI, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" the use of Pearl's polytree algorithm in a Bayesian network with loops can perform well in the context of errorcorrecting codes. The most dramatic instance of this is the near Shannonlimit performanc ..."
Cited by 674 (15 self)
. That is, we replaced the reference to λ(t), and similarly for π(t) in Equation 3, where 0 ≤ μ ≤ 1 is the momentum term. It is easy to show that if the modified system of equations converges to a fixed point F, then F is also a fixed point of the original system (since if λ(t) = ...
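The momentum trick in this snippet, and its fixed-point-preservation argument, can be illustrated on any fixed-point iteration x = f(x): damping the update with 0 ≤ μ ≤ 1 changes the dynamics but not the fixed points. The map f below is an illustrative stand-in for the belief propagation message updates:

```python
import math

def damped_iterate(f, x0, mu=0.5, iters=200):
    """Iterate x <- mu * x + (1 - mu) * f(x).
    Any fixed point of this damped map satisfies x = f(x), so it is
    also a fixed point of the undamped system, as the snippet argues."""
    x = x0
    for _ in range(iters):
        x = mu * x + (1.0 - mu) * f(x)
    return x

# Stand-in map with a known fixed point: x* = cos(x*)
x_star = damped_iterate(math.cos, 0.0, mu=0.5)
```

Larger μ slows the iteration but damps the oscillations that can prevent loopy message passing from converging at all.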
Policy gradient methods for reinforcement learning with function approximation
In NIPS, 1999
"... Abstract Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly repres ..."
Cited by 437 (20 self)
at each time is characterized by a policy, π(s, a, θ) = Pr{a_t = a | s_t = s, θ}, ∀s ∈ S, a ∈ A, where θ ∈ ℝ^l, for l ≪ |S|, is a parameter vector. We assume that π is differentiable with respect to its parameter, i.e., that ∂π(s, a)/∂θ exists. We also usually write just π(s, a) for π(s, a, θ
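The parameterized policy π(s, a, θ) in this snippet is commonly realized as a softmax over linear scores θ·φ(s, a); in that form the differentiability assumption holds and ∂ log π/∂θ has a closed form. A minimal sketch, with a made-up feature matrix standing in for φ (the softmax choice is a standard example, not something this snippet specifies):

```python
import numpy as np

def softmax_policy(theta, features):
    """pi(s, a, theta): softmax over scores theta . phi(s, a).
    `features` has shape (num_actions, l) for the current state s."""
    scores = features @ theta
    scores -= scores.max()          # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def grad_log_pi(theta, features, a):
    """d/d_theta log pi(s, a, theta) = phi(s, a) - E_pi[phi]."""
    p = softmax_policy(theta, features)
    return features[a] - p @ features
```

This score-function gradient is exactly the quantity that policy gradient methods estimate from sampled trajectories.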
Efficient Implementation of Weighted ENO Schemes
1995
"... In this paper, we further analyze, test, modify and improve the high order WENO (weighted essentially nonoscillatory) finite difference schemes of Liu, Osher and Chan [9]. It was shown by Liu et al. that WENO schemes constructed from the r th order (in L¹ norm) ENO schemes are (r +1) th order accur ..."
Cited by 415 (38 self)
A Growing Neural Gas Network Learns Topologies
In Advances in Neural Information Processing Systems 7, 1995
"... An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebblike learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this m ..."
Cited by 402 (5 self)
), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation. 1 Introduction: In unsupervised learning settings only input
SVMTorch: Support Vector Machines for Large-Scale Regression Problems
In Journal of Machine Learning Research, 2001
"... Support Vector Machines (SVMs) for regression problems are trained by solving a quadratic optimization problem which needs on the order of l 2 memory and time resources to solve, where l is the number of training examples. In this paper, we propose a decomposition algorithm, SVMTorch 1 , whic ..."
Cited by 314 (10 self)