Results 1 – 5 of 5
Toward a unified theory of sparse dimensionality reduction in Euclidean space, arXiv:1311.2542
Abstract

Cited by 6 (2 self)
Abstract. Let Φ ∈ R^{m×n} be a sparse Johnson-Lindenstrauss transform [KN14] with s nonzeroes per column. For a subset T of the unit sphere and ε ∈ (0, 1/2) given, we study settings for m and s required to ensure E_Φ sup_{x∈T} |‖Φx‖₂² − 1| < ε, i.e. so that Φ preserves the norm of every x ∈ T simultaneously and multiplicatively up to 1 + ε. We introduce a new complexity parameter, which depends on the geometry of T, and show that it suffices to choose s and m such that this parameter is small. Our result is a sparse analog of Gordon's theorem, which was concerned with a dense Φ having i.i.d. Gaussian entries. We qualitatively unify several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based restricted isometries. Our work also ...
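The guarantee in the abstract is easy to observe numerically. The sketch below builds a simple sparse sign matrix with s nonzeroes per column (a CountSketch-like construction chosen for illustration; it is not necessarily the exact distribution analyzed in the paper) and measures the norm distortion |‖Φx‖₂² − 1| on a random unit vector:

```python
import numpy as np

def sparse_jl(n, m, s, rng):
    """Sparse JL-style matrix: each column gets s random ±1/sqrt(s)
    entries at s distinct random rows, so every column has unit norm."""
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Phi

rng = np.random.default_rng(0)
n, m, s = 1000, 200, 8          # ambient dim, sketch dim, column sparsity
Phi = sparse_jl(n, m, s, rng)

# Draw a random unit vector and check how well its norm is preserved.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
distortion = abs(np.linalg.norm(Phi @ x) ** 2 - 1.0)
print(f"distortion: {distortion:.3f}")
```

Since each column has exactly unit norm, E‖Φx‖₂² = 1 for any unit x, and the observed distortion shrinks as m grows, matching the role of m in the abstract's guarantee.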
Sketching as a tool for numerical linear algebra
 Foundations and Trends in Theoretical Computer Science
Fast Ridge Regression with Randomized Principal Component Analysis and Gradient Descent
Abstract

Cited by 1 (1 self)
We propose a new two-stage algorithm, LING, for large-scale regression problems. LING has the same risk as the well-known Ridge Regression under the fixed design setting and can be computed much faster. Our experiments have shown that LING performs well in terms of both prediction accuracy and computational efficiency compared with other large-scale regression algorithms such as Gradient Descent, Stochastic Gradient Descent, and Principal Component Regression.
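For reference, the Ridge Regression baseline the abstract compares against can be sketched in a few lines via its closed-form normal equations (illustrative only; LING's own two-stage procedure is not reproduced here, and all data below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 200, 10, 0.5        # samples, features, ridge penalty λ

# Synthetic fixed-design data: y = Xw + noise
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Ridge solution: w = (XᵀX + λI)⁻¹ Xᵀ y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
err = np.linalg.norm(w_ridge - w_true)
print(f"parameter error: {err:.3f}")
```

The O(nd² + d³) cost of this exact solve is what motivates faster approximations such as gradient descent or the randomized-PCA approach in the title when n and d are large.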