Results 1–10 of 11,485
Optimal Euclidean spanners: really short, thin and lanky (Extended Abstract)
STOC'13, 2013
"... The degree, the (hop)diameter, and the weight are the most basic and wellstudied parameters of geometric spanners. In a seminal STOC’95 paper, titled“Euclidean spanners: short, thin and lanky”, Arya et al. [2] devised a construction of Euclidean (1 + ɛ)spanners that achieves constant degree, diam ..."
Cited by 7 (5 self)
, diameter O(log n), weight O(log² n) · ω(MST), and has running time O(n · log n). This construction applies to n-point constant-dimensional Euclidean spaces. Moreover, Arya et al. conjectured that the weight bound can be improved by a logarithmic factor, without increasing the degree and the diameter
Metric embeddings with relaxed guarantees
In Proceedings of the 46th IEEE Symposium on Foundations of Computer Science, 2005
"... We consider the problem of embedding finite metrics with slack: we seek to produce embeddings with small dimension and distortion while allowing a (small) constant fraction of all distances to be arbitrarily distorted. This definition is motivated by recent research in the networking community, whic ..."
Cited by 25 (6 self)
be achieved in general: any finite metric can be embedded, with constant slack and constant distortion, into constant-dimensional Euclidean space. We then show that there exist stronger embeddings into ℓ1 which exhibit
Learning the Kernel Matrix with Semidefinite Programming
2002
"... Kernelbased learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information ..."
Cited by 775 (21 self)
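A minimal sketch (illustrative, not from the paper) of the implicit embedding this abstract describes: a kernel supplies the feature-space inner products ⟨φ(x), φ(y)⟩ directly, so feature-space distances can be computed without ever forming φ. The RBF kernel and toy data below are assumptions for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2) = <phi(X[i]), phi(Y[j])>."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

X = np.random.default_rng(0).normal(size=(5, 3))   # 5 toy points in R^3
K = rbf_kernel(X, X)                               # pairwise inner products in feature space

# Squared feature-space distance from kernel values alone:
# ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y)
d2 = K[0, 0] - 2 * K[0, 1] + K[1, 1]
print(d2)
```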
A Simple Proof of the Restricted Isometry Property for Random Matrices
Constr. Approx., 2008
"... We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for random matrices that underlies Compressed Sensing. Our approach has two main ingredients: (i) concentration inequalities for random inner products that have recently provided algorithmical ..."
Cited by 631 (64 self)
algorithmically simple proofs of the Johnson–Lindenstrauss lemma; and (ii) covering numbers for finite-dimensional balls in Euclidean space. This leads to an elementary proof of the Restricted Isometry Property and brings out connections between Compressed Sensing and the Johnson–Lindenstrauss lemma. As a result
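The Johnson–Lindenstrauss ingredient mentioned here is easy to demonstrate concretely: a random Gaussian matrix scaled by 1/√k approximately preserves pairwise Euclidean distances with high probability. A minimal sketch; the dimensions and tolerance are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 1000, 300                    # n points in R^d, projected down to R^k
X = rng.normal(size=(n, d))
A = rng.normal(size=(k, d)) / np.sqrt(k)   # JL random projection
Y = X @ A.T

# Compare a few pairwise distances before and after projection.
for i, j in [(0, 1), (2, 3), (4, 5)]:
    orig = np.linalg.norm(X[i] - X[j])
    proj = np.linalg.norm(Y[i] - Y[j])
    print(f"ratio {proj / orig:.3f}")      # close to 1 for k = O(log n / eps^2)
```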
The selfduality equations on a Riemann surface
Proc. Lond. Math. Soc., III. Ser., 1987
"... In this paper we shall study a special class of solutions of the selfdual YangMills equations. The original selfduality equations which arose in mathematical physics were defined on Euclidean 4space. The physically relevant solutions were the ones with finite action—the socalled 'instanton ..."
Cited by 502 (6 self)
Learnability and the Vapnik-Chervonenkis dimension
1989
"... Valiant’s learnability model is extended to learning classes of concepts defined by regions in Euclidean space E”. The methods in this paper lead to a unified treatment of some of Valiant’s results, along with previous results on distributionfree convergence of certain pattern recognition algorith ..."
Cited by 727 (22 self)
Directional Statistics and Shape Analysis
1995
"... There have been various developments in shape analysis in the last decade. We describe here some relationships of shape analysis with directional statistics. For shape, rotations are to be integrated out or to be optimized over whilst they are the basis for directional statistics. However, various c ..."
Cited by 794 (33 self)
to shape analysis. Note that the idea of using a tangent space for analysis is common to both fields as well. Consider shapes of configurations of points in Euclidean space. There are various contexts in which k labelled points (or “landmarks”) x_1, …, x_k in ℝ^m are given
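A minimal sketch (illustrative, not from the paper) of the landmark computation alluded to above: remove translation and scale to obtain pre-shapes, then optimize over rotations with ordinary orthogonal Procrustes alignment via the SVD. The data and helper names are assumptions for the example.

```python
import numpy as np

def preshape(X):
    """Center a k x m landmark matrix and scale it to unit Frobenius norm."""
    Xc = X - X.mean(axis=0)
    return Xc / np.linalg.norm(Xc)

def procrustes_distance(X, Y):
    """Align Y to X over rotations and return the residual distance."""
    A, B = preshape(X), preshape(Y)
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = (U @ Vt).T            # optimal rotation (up to reflection) mapping B onto A
    return np.linalg.norm(A - B @ R)

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 2))               # 6 landmarks in the plane
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
Y = 3.0 * X @ Rot.T + 1.5                 # same shape: scaled, rotated, translated
print(procrustes_distance(X, Y))          # ~0: shape distance ignores similarity transforms
```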
From few to many: illumination cone models for face recognition under variable lighting and pose
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
"... We present a generative appearancebased method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a smal ..."
Cited by 754 (12 self)
conditions. The pose space is then sampled, and for each pose the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated
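A minimal sketch (illustrative, not the paper's method) of the nearest-subspace assignment described above, assuming each identity's illumination cone has already been approximated by an orthonormal basis. The random bases here are placeholders for the ones the generative model would estimate.

```python
import numpy as np

def subspace_distance(x, B):
    """Distance from image vector x to the span of the orthonormal columns of B."""
    return np.linalg.norm(x - B @ (B.T @ x))

rng = np.random.default_rng(3)
d, r = 64 * 64, 9                     # image dimension, subspace dimension per identity
# Placeholder bases: one orthonormal d x r matrix per identity (stand-ins for the
# estimated illumination-cone approximations).
bases = {name: np.linalg.qr(rng.normal(size=(d, r)))[0] for name in ["alice", "bob"]}

x = bases["alice"] @ rng.normal(size=r) + 0.01 * rng.normal(size=d)  # near alice's subspace
pred = min(bases, key=lambda name: subspace_distance(x, bases[name]))
print(pred)                           # "alice"
```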
Locality-sensitive hashing scheme based on p-stable distributions
In SCG ’04: Proceedings of the twentieth annual symposium on Computational geometry, 2004
"... inÇÐÓ�Ò We present a novel LocalitySensitive Hashing scheme for the Approximate Nearest Neighbor Problem underÐÔnorm, based onÔstable distributions. Our scheme improves the running time of the earlier algorithm for the case of theÐnorm. It also yields the first known provably efficient approximate ..."
Cited by 521 (8 self)
NN algorithm for the case p < 1. We also show that the algorithm finds the exact near neighbor in O(log n) time for data satisfying certain “bounded growth” condition. Unlike earlier schemes, our LSH scheme works directly on points in the Euclidean space without embeddings. Consequently, the resulting query time
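A minimal sketch (illustrative parameters) of the p-stable hash family for the ℓ2 case described above: project onto a random Gaussian direction (the Gaussian is 2-stable), shift by a random offset, and quantize with bucket width r.

```python
import numpy as np

class L2Hash:
    """One LSH function h(v) = floor((a . v + b) / r) with a ~ N(0, I), b ~ U[0, r)."""
    def __init__(self, dim, r=4.0, rng=None):
        rng = rng or np.random.default_rng()
        self.a = rng.normal(size=dim)     # 2-stable: a . v is Gaussian with scale ||v||_2
        self.b = rng.uniform(0, r)
        self.r = r

    def __call__(self, v):
        return int(np.floor((self.a @ v + self.b) / self.r))

rng = np.random.default_rng(4)
h = L2Hash(dim=16, rng=rng)
v = rng.normal(size=16)
print(h(v), h(v + 0.01 * rng.normal(size=16)))  # nearby points usually land in the same bucket
```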
Efficient similarity search in sequence databases
1994
"... We propose an indexing method for time sequences for processing similarity queries. We use the Discrete Fourier Transform (DFT) to map time sequences to the frequency domain, the crucial observation being that, for most sequences of practical interest, only the first few frequencies are strong. Anot ..."
Cited by 515 (19 self)
Another important observation is Parseval's theorem, which specifies that the Fourier transform preserves the Euclidean distance in the time or frequency domain. Having thus mapped sequences to a lower-dimensionality space by using only the first few Fourier coefficients, we use R-trees to index
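A minimal sketch (illustrative sequence length and coefficient count) of the feature extraction described above: keep the first few DFT coefficients of each sequence. By Parseval's theorem the truncated distance never exceeds the true Euclidean distance, so filtering on the indexed features causes no false dismissals.

```python
import numpy as np

def dft_features(x, k=4):
    """First k DFT coefficients of x, scaled so Parseval's theorem applies."""
    return np.fft.fft(x)[:k] / np.sqrt(len(x))

rng = np.random.default_rng(5)
x, y = rng.normal(size=128), rng.normal(size=128)

true_dist = np.linalg.norm(x - y)
feat_dist = np.linalg.norm(dft_features(x) - dft_features(y))
assert feat_dist <= true_dist + 1e-9   # lower bound => no false dismissals when filtering
print(feat_dist, true_dist)
```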