Results 1–10 of 50
ATOMIC DECOMPOSITION BY BASIS PURSUIT
, 1995
Abstract

Cited by 2725 (61 self)
The Time-Frequency and Time-Scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the Method of Frames (MOF), Matching Pursuit (MP), and, for special dictionaries, the Best Orthogonal Basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ¹ norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP and BOB, including better sparsity and super-resolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. Basis Pursuit in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
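The equivalent linear program mentioned in the abstract can be sketched in a few lines: minimizing ||γ||₁ subject to Dγ = s becomes an LP after splitting γ = u − v with u, v ≥ 0. The dictionary below is a tiny random toy example (not one of the paper's waveform dictionaries), solved with an off-the-shelf LP solver rather than the paper's interior-point/conjugate-gradient machinery:

```python
# Basis pursuit as a linear program:  min ||g||_1  s.t.  D g = s.
# Split g = u - v, u, v >= 0, giving  min 1'(u+v)  s.t.  [D, -D][u; v] = s.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 4, 8                                  # toy overcomplete dictionary
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
g_true = np.zeros(m)
g_true[[1, 5]] = [2.0, -1.0]                 # a sparse coefficient vector
s = D @ g_true                               # observed signal

c = np.ones(2 * m)                           # objective: sum of u + v
A_eq = np.hstack([D, -D])                    # equality constraints D(u - v) = s
res = linprog(c, A_eq=A_eq, b_eq=s, bounds=[(0, None)] * (2 * m))
g_bp = res.x[:m] - res.x[m:]                 # recovered coefficients
print(np.abs(g_bp).sum())                    # l1 norm of the BP solution
```

Since g_true is itself feasible, the optimal ℓ¹ norm can be no larger than ||g_true||₁; this is the sense in which BP picks the "optimal" decomposition.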
Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ¹ minimization
 PROC. NATL ACAD. SCI. USA 100 2197–202
, 2002
Abstract

Cited by 632 (38 self)
Given a ‘dictionary’ D = {dk} of vectors dk, we seek to represent a signal S as a linear combination S = ∑k γ(k)dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases and showed that, under a condition of mutual incoherence of the two bases, and assuming that S has a sufficiently sparse representation, this representation is unique and can be found by solving a convex optimization problem: specifically, minimizing the ℓ¹ norm of the coefficients γ. In this paper, we obtain parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems. We introduce the Spark, a measure of linear dependence in such a system; it is the size of the smallest linearly dependent subset of the {dk}. We show that, when the signal S has a representation using fewer than Spark(D)/2 nonzeros, this representation is necessarily unique.
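The Spark can be computed directly from its definition by searching subsets of columns; this is exponential in general, so the sketch below is only meant for tiny toy dictionaries (the matrix D here is illustrative, not from the paper):

```python
# Brute-force Spark: size of the smallest linearly dependent subset of columns.
import itertools
import numpy as np

def spark(D, tol=1e-10):
    m = D.shape[1]
    for k in range(1, m + 1):
        for idx in itertools.combinations(range(m), k):
            if np.linalg.matrix_rank(D[:, idx], tol=tol) < k:
                return k            # found a dependent subset of size k
    return m + 1                    # no dependent subset (conventionally infinite)

# Toy dictionary in R^2: any two columns are independent,
# but all three together are dependent, so spark(D) = 3.
D = np.array([[1.0, 0.0, 1 / np.sqrt(2)],
              [0.0, 1.0, 1 / np.sqrt(2)]])
print(spark(D))
```

With spark(D) = 3, the paper's uniqueness condition says any representation using fewer than 3/2 nonzeros, i.e. a single atom, is necessarily the unique sparsest one.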
Basis pursuit.
 In the Twenty-Eighth IEEE Asilomar Conference on Signals, Systems and Computers,
, 1994
Las Vegas algorithms for linear and integer programming when the dimension is small
 J. ACM
, 1995
Abstract

Cited by 112 (3 self)
Abstract. This paper gives an algorithm for solving linear programming problems. For a problem with n constraints and d variables, the algorithm requires an expected O(d²n) + (log n)·O(d)^(d/2 + O(1)) + O(d⁴√n log n) arithmetic operations, as n → ∞. The constant factors do not depend on d. Also, an algorithm is given for integer linear programming. Let φ bound the number of bits required to specify the rational numbers defining an input constraint or the objective function vector. Let n and d be as before. Then the algorithm requires expected O(2^d dn + 8^d d√(n ln n) ln n) + d^O(d) φ ln n operations on numbers with O(1)φ bits, as n → ∞, where the constant factors do not depend on d or φ. The expectations are with respect to the random choices made by the algorithms, and the bounds hold for any given input. The technique can be extended to other convex programming problems. For example, an algorithm for finding the smallest sphere enclosing a set of n points in E^d has the same time bound.
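The smallest-enclosing-sphere example admits a compact Las Vegas treatment in the same small-dimension spirit. The sketch below is Welzl's randomized algorithm for the planar case (a related method, not this paper's algorithm), shown purely as an illustration of expected-linear-time behavior under random input order:

```python
# Welzl's Las Vegas algorithm: smallest enclosing circle of points in the plane.
import random

def _circle(R):
    if not R:
        return ((0.0, 0.0), -1.0)            # empty circle: covers nothing
    if len(R) == 1:
        return (R[0], 0.0)
    if len(R) == 2:                          # circle with R as diameter
        (ax, ay), (bx, by) = R
        c = ((ax + bx) / 2, (ay + by) / 2)
        return (c, ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 / 2)
    (ax, ay), (bx, by), (cx, cy) = R         # circumcircle of three points
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return ((ux, uy), ((ux - ax) ** 2 + (uy - ay) ** 2) ** 0.5)

def _covers(circle, p, eps=1e-9):
    (cx, cy), r = circle
    if r < 0:
        return False
    return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= (r + eps) ** 2

def welzl(P, R=()):
    if not P or len(R) == 3:
        return _circle(list(R))
    p, rest = P[0], P[1:]
    D = welzl(rest, R)                       # solve without p
    return D if _covers(D, p) else welzl(rest, R + (p,))  # p is on the boundary

pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
random.shuffle(pts)                          # random order -> expected linear time
center, r = welzl(pts)
print(center, r)
```

For the unit square the unique minimum enclosing circle has center (0.5, 0.5) and radius √2/2, regardless of the random order.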
Modeling the space of camera response functions
 IEEE Trans. Pattern Anal. Mach. Intell.,
, 2004
APPROXIMATING CENTER POINTS WITH ITERATIVE RADON POINTS
 INTERNATIONAL JOURNAL OF COMPUTATIONAL GEOMETRY & APPLICATIONS
, 1995
Low Entropy Coding with Unsupervised Neural Networks
Abstract

Cited by 31 (0 self)
…ed on visual and speech data. The ability of the network to automatically generate wavelet codes from natural images is demonstrated. These bear a close resemblance to 2D Gabor functions, which have previously been used to describe physiological receptive fields and as a means of producing compact image representations. Keywords: neural networks, unsupervised learning, self-organisation, feature extraction, information theory, redundancy reduction, sparse coding, imaging models, occlusion, image coding, speech coding.
Robust realtime face pose and facial expression recovery
 Proc. of CVPR’06
, 2008
Abstract

Cited by 21 (3 self)
Face motion is the sum of rigid motion, associated with face pose, and non-rigid motion, associated with facial expression. Both motions are coupled in the captured image, so they cannot be easily recovered from the image directly. In this paper, a novel technique is proposed to recover 3D face pose and facial expression simultaneously from a monocular video sequence in real time. First, twenty-eight salient facial features are detected and tracked robustly under various face orientations and facial expressions. Second, after modelling the coupling between face pose and facial expression in the 2D image as a nonlinear function, a normalized SVD (NSVD) decomposition technique is proposed to recover the pose and expression parameters analytically. A nonlinear technique is subsequently utilized to refine the solution obtained from the NSVD technique by imposing the orthonormality constraint on the pose parameters. Compared to the original SVD technique proposed in [1], which is very sensitive to image noise and numerically unstable in practice, the proposed method recovers the face pose and facial expression robustly and accurately. Finally, the performance of the proposed technique is evaluated in experiments using both synthetic and real image sequences.
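A standard way to impose the orthonormality constraint mentioned in the refinement step, shown here as a generic illustration rather than the paper's exact NSVD procedure, is the orthogonal Procrustes projection: replace a noisy pose estimate M by the nearest rotation U Vᵀ obtained from its SVD M = U S Vᵀ:

```python
# Project a noisy 3x3 pose estimate onto the nearest rotation matrix via SVD.
import numpy as np

def nearest_rotation(M):
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R

rng = np.random.default_rng(1)
t = 0.3                               # ground-truth rotation about the z axis
R_true = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0,        0.0,       1.0]])
M = R_true + 0.05 * rng.standard_normal((3, 3))   # noisy, non-orthonormal estimate
R_hat = nearest_rotation(M)
print(np.linalg.norm(R_hat @ R_hat.T - np.eye(3)))  # orthonormality residual
```

The projected estimate is exactly orthonormal, which is what makes such a step useful as a refinement after an analytic (e.g. SVD-based) solution.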
Preconditioning KKT Systems
, 2002
Abstract

Cited by 15 (0 self)
This research presents new preconditioners for linear systems. We proceed from the most general case to the very specific problem area of sparse optimal control. In the first most general approach, we assume only that the coefficient matrix is nonsingular. We target highly indefinite, nonsymmetric problems that cause difficulties for preconditioned iterative solvers, and where standard preconditioners, like incomplete factorizations, often fail. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. Our numerical experiments indicate that the reliability and performance of preconditioned iterative solvers are greatly enhanced by such preprocessing. Secondly, we present two new preconditioners for KKT systems. KKT systems arise in areas such as quadratic programming, sparse optimal control, and mixed finite element formulations. Our preconditioners approximate a constraint preconditioner with incomplete factorizations for the normal equations. Numerical experiments compare these two preconditioners with exact constraint preconditioning and the approach described above of permuting large entries to the diagonal. Finally, we turn to a specific problem area: sparse optimal control. Many optimal control problems are broken into several phases, and within a phase, most variables and constraints depend only on nearby variables and constraints. However, free initial and final times and timeindependent parameters impact variables and constraints throughout a phase, resulting in dense factored blocks in the KKT matrix. We drop fill due to these variables to reduce density within each phase. The resulting preconditioner is tightly banded and nearly block tridiagonal. Numerical experiments demonstrate that the preconditioners are effective, with very little fill in the factorization.
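The flavor of KKT (saddle-point) preconditioning can be shown on a toy equality-constrained QP. The sketch below uses a generic block-diagonal Schur-complement preconditioner with MINRES, as an illustration of the general idea rather than the thesis's specific constraint preconditioners:

```python
# Preconditioned MINRES on a KKT system  K = [[H, A'], [A, 0]]  from a toy QP.
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(2)
n, m = 20, 5
H = np.diag(1.0 + rng.random(n))            # SPD Hessian (diagonal for simplicity)
A = rng.standard_normal((m, n))             # full-rank equality constraints
K = np.block([[H, A.T],
              [A, np.zeros((m, m))]])       # symmetric indefinite KKT matrix
rhs = rng.standard_normal(n + m)

Hinv = np.linalg.inv(H)
S = A @ Hinv @ A.T                          # Schur complement A H^{-1} A'
Sinv = np.linalg.inv(S)

def apply_prec(v):                          # solve diag(H, S) z = v (SPD, as MINRES requires)
    return np.concatenate([Hinv @ v[:n], Sinv @ v[n:]])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, rhs, M=M)
print(info, np.linalg.norm(K @ x - rhs))
```

Real KKT preconditioners replace the exact inverses above with incomplete factorizations, which is precisely the trade-off the abstract's normal-equations approximations address.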
A new measure of the robustness of biochemical networks
 Bioinformatics
, 2005
doi:10.1093/bioinformatics/bti348