Results 1 – 10 of 4,115,050
Hierarchical clustering procedures that minimize a hybrid L1/L2 loss function
"... deviations. Several methods have been proposed (e.g., Carroll & Pruzansky, 1980; Hartigan, 1967, 1985; Hubert & Arabie, 1995) that fit ultrametric constraints to a two-way, one-mode proximity matrix with the goal of minimizing a least-squares (L2) loss function, L = Σ (p − d)², ..."
Beyond L2-Loss Functions for Learning Sparse Models
"... Incorporating sparsity priors in learning tasks can give rise to simple and interpretable models for complex high dimensional data. Sparse models have found widespread use in structure discovery, recovering data from corruptions, and a variety of large scale unsupervised and supervised learning pr ..."
developed for both batch and online learning cases. However, new application domains motivate looking beyond conventional loss functions. For example, robust loss functions such as ℓ1 and Huber are useful in learning outlier-resilient models, and the quantile loss is beneficial in discovering structures
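The loss functions this snippet contrasts can be sketched in a few lines; the function names and the δ/τ defaults below are illustrative choices for this sketch, not taken from the paper:

```python
import numpy as np

def l2_loss(r):
    """Squared (L2) loss: smooth, but sensitive to outliers."""
    return r ** 2

def l1_loss(r):
    """Absolute (L1) loss: robust to outliers, non-smooth at 0."""
    return np.abs(r)

def huber_loss(r, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r ** 2, delta * (np.abs(r) - 0.5 * delta))

def quantile_loss(r, tau=0.9):
    """Pinball loss: asymmetric, targets the tau-th quantile."""
    return np.where(r >= 0, tau * r, (tau - 1) * r)

residuals = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(l2_loss(residuals))     # large residuals dominate
print(huber_loss(residuals))  # large residuals grow only linearly
```

Evaluated on a large residual such as 3.0, the L2 loss charges 9.0 while Huber charges only 2.5, which is the outlier-resilience the snippet refers to.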
The rate-distortion function for source coding with side information at the decoder
 IEEE Trans. Inform. Theory
, 1976
"... Abstract—Let {(X_k, Y_k)}_{k=1}^∞ be a sequence of independent drawings of a pair of dependent random variables X, Y. Let us say that X takes values in the finite set 𝒳. It is desired to encode the sequence {X_k} in blocks of length n into a binary stream of rate R, which can in turn be decoded as a seque ..."
Cited by 1045 (1 self)
sequence {X̂_k}, where X̂_k ∈ X̂, the reproduction alphabet. The average distortion level is (1/n) Σ_{k=1}^n E[D(X_k, X̂_k)], where D(x, x̂) ≥ 0, x ∈ 𝒳, x̂ ∈ X̂, is a preassigned distortion measure. The special assumption made here is that the decoder has access to the side information {Y_k}. In this paper we determine
Lucas-Kanade 20 Years On: A Unifying Framework: Part 3
 International Journal of Computer Vision
, 2002
"... Since the Lucas-Kanade algorithm was proposed in 1981, image alignment has become one of the most widely used techniques in computer vision. Applications range from optical flow, tracking, and layered motion, to mosaic construction, medical image registration, and face coding. Numerous algorithms hav ..."
Cited by 695 (30 self)
first consider linear appearance variation when the error function is the Euclidean L2 norm. We describe three different algorithms, the simultaneous, project out, and normalization inverse compositional algorithms, and empirically compare them. Afterwards we consider the combination of linear
A theory for multiresolution signal decomposition: the wavelet representation
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1989
"... Abstract—Multiresolution representations are very effective for analyzing the information content of images. We study the properties of the operator which approximates a signal at a given resolution. We show that the difference of information between the approximation of a signal at the resolutions ..."
Cited by 3462 (12 self)
2^{j+1} and 2^j can be extracted by decomposing this signal on a wavelet orthonormal basis of L²(Rⁿ). In L²(R), a wavelet orthonormal basis is a family of functions (√(2^j) ψ(2^j x − n))_{(n,j)∈Z²}, which is built by dilating and translating a unique function ψ(x). This decomposition defines an orthogonal
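The decomposition this abstract describes, splitting a signal into a coarser approximation plus the detail lost between two resolutions, can be illustrated with the simplest orthonormal wavelet, the Haar basis; this is a sketch of the general idea, not the construction from the paper:

```python
import numpy as np

def haar_step(signal):
    """One level of the orthonormal Haar transform: returns the coarser
    approximation and the detail (difference of information) coefficients."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the finer-resolution signal exactly (orthonormality)."""
    s = np.empty(2 * len(approx))
    s[0::2] = (approx + detail) / np.sqrt(2.0)
    s[1::2] = (approx - detail) / np.sqrt(2.0)
    return s

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(x)
assert np.allclose(haar_inverse(a, d), x)  # no information is lost
```

Because the basis is orthonormal, the approximation and detail coefficients together carry exactly the energy of the original signal, which is why the reconstruction is exact.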
Grounding in communication
 In
, 1991
"... We give a general analysis of a class of pairs of positive self-adjoint operators A and B for which A + λB has a limit (in strong resolvent sense) as λ ↓ 0 which is an operator A_∞ ≠ A! Recently, Klauder [4] has discussed the following example: Let A be the operator −(d²/dx²) + x² on L²(R, dx) and let ..."
Cited by 1087 (19 self)
B = |x|⁻ˢ. The eigenvectors and eigenvalues of A are, of course, well known to be the Hermite functions, H_n(x), n = 0, 1, ..., and E_n = 2n + 1. Klauder then considers the eigenvectors of A + λB (λ > 0) by manipulations with the ordinary differential equation (we consider the domain questions
Error and attack tolerance of complex networks
, 2000
"... Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network [1]. C ..."
Cited by 974 (6 self)
]. Complex communication networks [2] display a surprising degree of robustness: while key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant
The Concept of a Linguistic Variable and its Application to Approximate Reasoning
 Journal of Information Science
, 1975
"... By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, et ..."
Cited by 1350 (9 self)
rule which generates the terms in T(X); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0, 1], which associates with each u in U
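A compatibility function c: U → [0, 1] of the kind this snippet describes can be sketched for the linguistic value "young" over an age universe U; the breakpoints 25 and 40, the piecewise-linear shape, and the helper names are illustrative assumptions, not taken from the paper:

```python
def young(u, a=25.0, b=40.0):
    """Compatibility function c: U -> [0, 1] for the linguistic value
    'young'; fully compatible up to age a, incompatible beyond age b."""
    if u <= a:
        return 1.0
    if u >= b:
        return 0.0
    return (b - u) / (b - a)  # linear ramp between a and b

def very(c):
    """The hedge 'very' modeled as concentration: squaring the membership."""
    return lambda u: c(u) ** 2

print(young(20))        # fully 'young'
print(young(30))        # partially 'young' (about 0.667)
print(very(young)(30))  # 'very young' is more restrictive (about 0.444)
```

Squaring is one common way to model the hedge "very": it lowers intermediate compatibility grades while leaving 0 and 1 fixed, so "very young" is a sharper fuzzy subset of U than "young".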
A review of image denoising algorithms, with a new one
 Multiscale Modeling & Simulation
, 2005
"... The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. All show an outstanding perf ..."
Cited by 500 (6 self)
and their explanation as a violation of the image model; quantitative experimental: by tables of L² distances of the denoised version to the original image. The most powerful evaluation method seems, however, to be the visualization of the method noise on natural images. The more this method noise looks like a real
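The "method noise" evaluation this snippet mentions, the difference between an image and its denoised version, can be sketched as follows; the box filter stands in for a real denoising algorithm and, like the synthetic image, is an illustrative assumption:

```python
import numpy as np

def box_denoise(img, k=3):
    """A deliberately simple denoiser: k x k box filter (illustrative only)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))          # stand-in for a natural image
denoised = box_denoise(img)

# What the denoiser removed; for a good method this should look like noise.
method_noise = img - denoised

# The quantitative criterion: L2 distance between image and denoised version.
l2_distance = np.sqrt(np.sum((img - denoised) ** 2))
```

If `method_noise` shows visible structure (edges, texture), the denoiser has removed signal rather than noise, which is the qualitative test the snippet argues is most revealing.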
The irreducibility of the space of curves of given genus
 Publ. Math. IHES
, 1969
"... Fix an algebraically closed field k. Let M_g be the moduli space of curves of genus g over k. The main result of this note is that M_g is irreducible for every k. Of course, whether or not M_g is irreducible depends only on the characteristic of k. When the characteristic is 0, we can assume that k ≅ ..."
Cited by 503 (2 self)
strengthened his method so that it applies in all characteristics (SGA 7, 1968). Mumford has also given a proof using theta functions in char. ≠ 2. The result is this: Stable Reduction Theorem. Let R be a discrete valuation ring with quotient field K. Let A be an abelian variety over K. Then there exists a