Results 1–10 of 16,787
Sparser Johnson-Lindenstrauss Transforms
Abstract
We give two different Johnson-Lindenstrauss distributions, each with column sparsity s = Θ(ε⁻¹ log(1/δ)) and embedding into optimal dimension k = O(ε⁻² log(1/δ)) to achieve distortion 1±ε with probability 1−δ. That is, only an O(ε)-fraction of entries are nonzero in each embedding matrix in the supports of our distributions. These are the first distributions to provide o(k) sparsity for all values of ε, δ. Previously the best known construction obtained s = Θ̃(ε⁻¹ log²(1/δ)) [Dasgupta-Kumar-Sarlós, STOC 2010]. In addition, one of our distributions can be sampled from a seed of O(log(1/δ) log d) uniform random bits. Some applications that use Johnson-Lindenstrauss embeddings as a black box, such as those in approximate numerical linear algebra ([Sarlós, FOCS 2006], [Clarkson-Woodruff, STOC 2009]), require exponentially small δ. Our linear dependence on log(1/δ) in the sparsity is thus crucial in these applications to obtain speedup.
A Sparser Johnson-Lindenstrauss Transform
Abstract
We give a Johnson-Lindenstrauss transform with column sparsity s = Θ(ε⁻¹ log(1/δ)) into optimal dimension k = O(ε⁻² log(1/δ)) to achieve distortion 1±ε with success probability 1−δ. This is the first distribution to provide an asymptotic improvement over the Θ(k) sparsity bound for all values of ε, δ. Previous work of [Dasgupta-Kumar-Sarlós, STOC 2010] gave a distribution with s = Õ(ε⁻¹ log³(1/δ)), with tighter analyses later in [Kane-Nelson, CoRR abs/1006.3585] and [Braverman-Ostrovsky-Rabani, CoRR abs/1011.2590] showing that their construction achieves s = Õ(ε⁻¹ log²(1/δ)). As in the previous work, our scheme only requires limited-independence hash functions. In fact, potentially one of our hash functions could be made deterministic given an explicit construction of a sufficiently good error-correcting code. Our linear dependence on log(1/δ) in the sparsity allows us to plug our construction into algorithms of [Clarkson-Woodruff, STOC 2009] to achieve the fastest known streaming algorithms for numerical linear algebra problems such as approximate linear regression and best rank-k approximation. Their reductions to the Johnson-Lindenstrauss lemma require exponentially small δ, and thus a superlinear dependence on log(1/δ) in s leads to significantly slower algorithms.
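The kind of sparse embedding matrix this abstract describes admits a short sketch. The following is illustrative only: it draws fully random rows and signs, whereas the paper's construction uses limited-independence hash functions, and the function name and parameters here are my own:

```python
import numpy as np

def sparse_jl_matrix(k, d, s, seed=None):
    """Sample a k x d sparse Johnson-Lindenstrauss-style matrix.

    Each column has exactly s nonzero entries equal to +-1/sqrt(s),
    placed in s rows chosen uniformly without replacement, so every
    column has unit Euclidean norm.
    """
    rng = np.random.default_rng(seed)
    S = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)  # s target rows for column j
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

# Embed a vector from d = 1000 dimensions down to k = 64.
S = sparse_jl_matrix(k=64, d=1000, s=8, seed=0)
x = np.random.default_rng(1).standard_normal(1000)
y = S @ x  # ||y|| concentrates around ||x|| as k and s grow
```

The point of column sparsity is speed: with s = 8 nonzeros per column instead of k = 64, applying S to a vector with m nonzero entries costs O(m·s) operations rather than O(m·k).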
Sparser, Better, Faster GPU Parsing
Abstract
Due to their origin in computer graphics, graphics processing units (GPUs) are highly optimized for dense problems, where the exact same operation is applied repeatedly to all data points. Natural language processing algorithms, on the other hand, are traditionally constructed in ways that exploit structural sparsity. Recently, Canny et al. (2013) presented an approach to GPU parsing that sacrifices traditional sparsity in exchange for raw computational power, obtaining a system that can compute Viterbi parses for a high-quality grammar at about 164 sentences per second on a mid-range GPU. In this work, we reintroduce sparsity to GPU parsing by adapting a coarse-to-fine pruning approach to the constraints of a GPU. The resulting system is capable of computing over 404 Viterbi parses per second (more than a 2x speedup) on the same hardware. Moreover, our approach allows us to efficiently implement less GPU-friendly minimum Bayes risk inference, improving throughput for this more accurate algorithm from only 32 sentences per second unpruned to over 190 sentences per second using pruning, nearly a 6x speedup.
Learning Overcomplete Representations
, 2000
Abstract
Cited by 355 (10 self)
In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete ...
Sparser Relative Bundle Adjustment (SRBA): constant-time maintenance and local optimization of arbitrarily large maps
The group Lasso for logistic regression
 Journal of the Royal Statistical Society, Series B
, 2008
Abstract
Cited by 275 (11 self)
Summary. The group lasso is an extension of the lasso to do variable selection on (predefined) groups of variables in linear regression models. The estimates have the attractive property of being invariant under groupwise orthogonal reparameterizations. We extend the group lasso to logistic regression ... consistent even if the number of predictors is much larger than sample size but with sparse true underlying structure. We further use a two-stage procedure which aims for sparser models than the group lasso, leading to improved prediction performance in some cases. Moreover, owing to the two-stage nature ...
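The group-lasso penalty λ Σ_g ||β_g||₂ that this abstract builds on has a closed-form block soft-thresholding (proximal) operator, which is what makes whole groups of coefficients drop to zero. A minimal sketch of that operator (illustrative only; the paper's own algorithm for logistic regression is a block coordinate descent that wraps a step like this, and the function name and grouping here are my own):

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||beta_g||_2.

    Each group of coefficients is shrunk toward zero as a block:
    a group whose Euclidean norm is at most lam is zeroed out entirely,
    which is how the group lasso selects or discards whole groups.
    """
    out = beta.copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out[g] = scale * beta[g]
    return out

beta = np.array([3.0, 4.0, 0.5, 0.0])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(beta, groups, lam=1.0)
# group 1 has norm 5, so it is scaled by 1 - 1/5 = 0.8;
# group 2 has norm 0.5 <= lam, so it is zeroed out as a block
```

Note the groupwise-orthogonal invariance mentioned in the abstract: the operator depends on each group only through its Euclidean norm, which is unchanged by an orthogonal rotation within the group.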
Recovering 3D Human Pose from Monocular Images
Abstract
Cited by 261 (0 self)
We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descrip ...
A Cartesian cloaking comprised of gradually sparser dielectric layers exploiting Snell’s law
 6th European Conference on Antennas and Propagation (EUCAP) (2012), 2688–2692
Abstract
Cited by 1 (1 self)
A Cartesian model for perfect cloaks, which lead the incident electromagnetic wave around the cloaked object, has been introduced. A structure of layered slabs made from gradually electromagnetically sparser materials is excited by a line source located in the vicinity of the front surface of ...
3D Human Pose from Silhouettes by Relevance Vector Regression
 In CVPR
, 2004
Abstract
Cited by 199 (8 self)
We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descript ...