Results 1–10 of 15
Sample Complexity of Dictionary Learning and other Matrix Factorizations
, 2013
Abstract

Cited by 8 (4 self)
HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Sum-of-Squares Proofs and the Quest toward Optimal Algorithms
Abstract

Cited by 6 (0 self)
In order to obtain the best-known guarantees, algorithms are traditionally tailored to the particular problem we want to solve. Two recent developments, the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method, surprisingly suggest that this tailoring is not necessary and that a single efficient algorithm could achieve best possible guarantees for a wide range of different problems. The Unique Games Conjecture (UGC) is a tantalizing conjecture in computational complexity, which, if true, will shed light on the complexity of a great many problems. In particular, this conjecture predicts that a single concrete algorithm provides optimal guarantees among all efficient algorithms for a large class of computational problems. The Sum-of-Squares (SOS) method is a general approach for solving systems of polynomial constraints. This approach is studied in several scientific disciplines, including real algebraic geometry, proof complexity, control theory, and mathematical programming, and has found applications in fields as diverse as quantum information theory, formal verification, game theory, and many others. We survey some connections that were recently uncovered between the Unique Games Conjecture and the Sum-of-Squares method. In particular, we discuss new tools to rigorously bound the running time of the SOS method for obtaining approximate solutions to hard optimization problems, how these tools give the sum-of-squares method the potential to provide new guarantees for many problems of interest, and how they could possibly even refute the UGC.
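As a concrete illustration of the SOS idea described above, the toy sketch below (my own example, not from the survey) certifies nonnegativity of a univariate polynomial by exhibiting a positive semidefinite Gram matrix G with p(x) = m(x)ᵀ G m(x) over the monomial basis m(x) = [1, x, x²]. Real SOS solvers search for such a G by semidefinite programming; here G is chosen by hand and merely verified.

```python
import numpy as np

# Toy SOS certificate: p(x) = x^4 + 2x^2 + 1.
# If p(x) = m(x)^T G m(x) for a PSD matrix G and monomial basis
# m(x) = [1, x, x^2], then p is a sum of squares, hence nonnegative.

def p(x):
    return x**4 + 2 * x**2 + 1

# Hand-chosen Gram matrix: p = 1^2 + (sqrt(2) x)^2 + (x^2)^2
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

def gram_value(x):
    m = np.array([1.0, x, x**2])
    return m @ G @ m

# PSD check certifies nonnegativity of p everywhere on the real line
assert np.linalg.eigvalsh(G).min() >= 0
# G really represents p: the two evaluations agree
for x in np.linspace(-3.0, 3.0, 25):
    assert abs(p(x) - gram_value(x)) < 1e-9
```

The hard part in practice is finding G; the Gram-matrix constraints are linear in G, so "is p an SOS?" reduces to a semidefinite feasibility problem, which is what the SOS hierarchy solves at increasing degrees.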
Nearest Neighbors Using Compact Sparse Codes
, 2014
Abstract

Cited by 4 (0 self)
HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Robust Multiobjective Learning with Mentor Feedback
JMLR: Workshop and Conference Proceedings vol 35:1–16, 2014
Abstract
We study decision making when each action is described by a set of objectives, all of which are to be maximized. During the training phase, we have access to the actions of an outside agent (“mentor”). In the test phase, our goal is to maximally improve upon the mentor’s (unobserved) actions across all objectives. We present an algorithm with a vanishing regret compared with the optimal possible improvement, and show that our regret bound is the best possible. The bound is independent of the number of actions, and scales only as the logarithm of the number of objectives.
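The logarithmic dependence on the number of objectives is reminiscent of experts-style online learning. The sketch below is not the paper's algorithm; it is the classic Hedge (multiplicative weights) primitive, whose regret against the best fixed expert grows as O(√(T log K)) in the number K of experts, shown here as a self-contained demonstration.

```python
import numpy as np

# Classic Hedge / multiplicative weights (NOT the paper's algorithm):
# maintain a distribution over K experts; total loss is at most
# min_k (loss of expert k) + O(sqrt(T log K)) -- only log K dependence
# on the number of experts.

def hedge_regret(loss_matrix, eta):
    T, K = loss_matrix.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        prob = w / w.sum()
        total += prob @ loss_matrix[t]        # expected loss this round
        w *= np.exp(-eta * loss_matrix[t])    # downweight lossy experts
    best_fixed = loss_matrix.sum(axis=0).min()
    return total - best_fixed                 # regret vs best expert

rng = np.random.default_rng(0)
T, K = 2000, 8
losses = rng.uniform(0.0, 1.0, size=(T, K))
eta = np.sqrt(np.log(K) / T)                  # standard tuning
regret = hedge_regret(losses, eta)
assert regret <= 2.0 * np.sqrt(T * np.log(K))  # O(sqrt(T log K)) bound
```

With losses in [0, 1] and this tuning of eta, the regret bound holds deterministically, not just for the random losses drawn here.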
Complete Dictionary Recovery Using Nonconvex Optimization
Abstract
We consider the problem of recovering a complete (i.e., square and invertible) dictionary A0 from Y = A0X0 with Y ∈ Rn×p. This recovery setting is central to the theoretical understanding of dictionary learning. We give the first efficient algorithm that provably recovers A0 when X0 has O(n) nonzeros per column, under a suitable probability model for X0. Prior results provide recovery guarantees only when X0 has O(√n) nonzeros per column. Our algorithm is based on nonconvex optimization with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. Our proofs give a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no spurious local minima. Experiments with synthetic data corroborate our theory. The full version of this paper is available online.
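To illustrate the sphere-constrained viewpoint, here is a toy sketch with A0 = I, so that recovering a dictionary column amounts to finding a direction q on the sphere that makes qᵀY as sparse as possible. The paper optimizes a smooth ℓ1 surrogate by manifold optimization; for compactness this sketch swaps in a fourth-moment (kurtosis) surrogate maximized by a simple fixed-point iteration on the sphere, which likewise drives q toward a single sparse row of X0.

```python
import numpy as np

# Toy sketch, A0 = I: rows of X0 are the sparse signals. Instead of the
# paper's smooth l1 surrogate, maximize the empirical fourth moment of
# q^T Y over the unit sphere -- sparse (heavy-tailed) directions have
# large kurtosis, so the maximizers are the coordinate axes here.

rng = np.random.default_rng(1)
n, p_cols, theta = 10, 20000, 0.1
# Bernoulli-Gaussian sparse coefficients; Y = A0 X0 with A0 = I
X0 = rng.standard_normal((n, p_cols)) * (rng.random((n, p_cols)) < theta)
Y = X0

q = rng.standard_normal(n)
q /= np.linalg.norm(q)
for _ in range(50):
    z = q @ Y
    q = Y @ z**3 / p_cols      # gradient direction of mean((q^T y)^4)/4
    q /= np.linalg.norm(q)     # retract back onto the sphere

# q concentrates on a single coordinate, i.e. one sparse row of X0
assert np.abs(q).max() > 0.95
```

The normalize-after-each-step pattern is the simplest instance of the retraction step in manifold optimization over the sphere.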
Associative Memory via a Sparse Recovery Model
"... An associative memory is a structure learned from a datasetM of vectors (signals) in a way such that, given a noisy version of one of the vectors as input, the nearest valid vector fromM (nearest neighbor) is provided as output, preferably via a fast iterative algorithm. Traditionally, binary (or q ..."
Abstract
An associative memory is a structure learned from a dataset M of vectors (signals) in such a way that, given a noisy version of one of the vectors as input, the nearest valid vector from M (the nearest neighbor) is provided as output, preferably via a fast iterative algorithm. Traditionally, binary (or q-ary) Hopfield neural networks are used to model this structure. In this paper, for the first time, we propose a model of associative memory based on sparse recovery of signals. Our basic premise is simple. For a dataset, we learn a set of linear constraints that every vector in the dataset must satisfy. Provided these linear constraints possess some special properties, it is possible to cast the task of finding the nearest neighbor as a sparse recovery problem. Assuming generic random models for the dataset, we show that it is possible to store a superpolynomial or exponential number of length-n vectors in a neural network of size O(n). Furthermore, given a noisy version of one of the stored vectors corrupted in a near-linear number of coordinates, the vector can be correctly recalled using a neurally feasible algorithm.
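A minimal sketch of the sparse-recovery premise (my own toy construction, not the paper's exact one): if the stored vectors lie in a low-dimensional subspace, they satisfy linear constraints B m = 0, so a corrupted query y = m + e with sparse e yields a syndrome B y = B e that depends only on the noise, and recovering e by orthogonal matching pursuit recalls m exactly.

```python
import numpy as np

# Stored vectors m live in a k-dim subspace of R^n, so B m = 0 for a
# basis B of the orthogonal complement. The syndrome B y = B e of a
# corrupted query isolates the sparse noise e; greedy sparse recovery
# (OMP) then finds e and the memory recalls m = y - e.

rng = np.random.default_rng(2)
n, k, s = 100, 10, 3
subspace = rng.standard_normal((n, k))
U, _, _ = np.linalg.svd(subspace)
B = U[:, k:].T                     # (n-k) x n, annihilates the subspace

def omp(B, syndrome, sparsity):
    """Orthogonal matching pursuit for the system B e = syndrome."""
    residual = syndrome.copy()
    support, coef = [], np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(B.T @ residual)))  # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(B[:, support], syndrome, rcond=None)
        residual = syndrome - B[:, support] @ coef
    e = np.zeros(B.shape[1])
    e[support] = coef
    return e

m = subspace @ rng.standard_normal(k)       # a stored vector
e = np.zeros(n)
e[[7, 42, 91]] = [5.0, -4.0, 3.0]           # sparse corruption
y = m + e

e_hat = omp(B, B @ y, s)
assert np.allclose(y - e_hat, m, atol=1e-6)  # exact recall
```

The "special properties" the abstract alludes to correspond here to B acting like a good compressed-sensing matrix (low column coherence), which is what makes the greedy recovery succeed.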
Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-1 Updates
, 2014
Abstract
A simple alternating rank-1 update procedure is considered for CP tensor decomposition. Local convergence guarantees are established for third-order tensors of rank k in d dimensions, when k = o(d^1.5) and the tensor components are incoherent. We strengthen the results to global convergence guarantees when k ≤ Cd (for an arbitrary constant C > 1) through a simple initialization procedure based on rank-1 singular value decomposition of random tensor slices. The guarantees also provide a tight perturbation analysis for noisy tensors.
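The basic rank-1 update can be sketched on an easy orthogonal toy case (the paper's contribution is the harder non-orthogonal, incoherent setting plus the SVD-based initialization): repeatedly contract the tensor against the current iterate and renormalize.

```python
import numpy as np

# Toy alternating rank-1 update (tensor power iteration) on a symmetric
# ORTHOGONAL instance T = sum_i lam_i a_i (x) a_i (x) a_i; the iterate
# u <- T(I, u, u) / ||T(I, u, u)|| converges to one of the components.

rng = np.random.default_rng(3)
d, k = 8, 3
A, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = A[:, :k]                                  # orthonormal components
lam = np.array([3.0, 2.0, 1.0])
T = np.einsum('i,ai,bi,ci->abc', lam, A, A, A)

u = rng.standard_normal(d)
u /= np.linalg.norm(u)
for _ in range(100):
    u = np.einsum('abc,b,c->a', T, u, u)      # rank-1 update T(I, u, u)
    u /= np.linalg.norm(u)

# u converges (quadratically, once one component dominates) to some a_i
overlaps = np.abs(A.T @ u)
assert overlaps.max() > 0.999
lam_hat = np.einsum('abc,a,b,c', T, u, u, u)  # recovered weight
assert np.min(np.abs(lam - abs(lam_hat))) < 1e-6
```

In the non-orthogonal regime this plain iteration can fail from a random start, which is why the paper pairs it with the rank-1 SVD initialization over random tensor slices.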