On projection algorithms for solving convex feasibility problems (1996)

by H. H. Bauschke, J. M. Borwein
Venue: SIAM Rev.
Results 1 - 10 of 331

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

by Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein , 2010
"... ..."
Abstract - Cited by 1001 (20 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

... ∩ AN splits into the sum of the indicator functions of each Ai. There is a large literature on successive projection algorithms and their many applications; see the survey by Bauschke and Borwein [BB96] for a general overview, Combettes [Com96] for applications to image processing, and Censor and Zenios [CZ97, §5] for a discussion in the context of parallel optimization. ...
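The successive projection idea referenced in this context can be illustrated with a minimal sketch (not taken from any of the cited works): cyclically project onto a handful of convex sets until the iterate settles in their intersection. The box, the two halfspaces, the starting point, and the tolerance below are illustrative assumptions.

import numpy as np

# Minimal sketch of successive (cyclic) projections onto convex sets.
# The sets (a box and two halfspaces in R^2) are illustrative choices only.

def project_box(x, lo=-1.0, hi=1.0):
    """Projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    """Projection onto the halfspace {z : <a, z> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def cyclic_projections(x0, projectors, max_iter=500, tol=1e-10):
    """Apply the projectors cyclically until the iterate stops moving;
    when the intersection is nonempty this yields a point in it."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_prev = x.copy()
        for P in projectors:
            x = P(x)
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

a1, b1 = np.array([1.0, 1.0]), 1.0       # halfspace x + y <= 1
a2, b2 = np.array([-1.0, 2.0]), 2.0      # halfspace -x + 2y <= 2
projs = [project_box,
         lambda x: project_halfspace(x, a1, b1),
         lambda x: project_halfspace(x, a2, b2)]
print("feasible point:", cyclic_projections(np.array([5.0, -4.0]), projs))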

For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution

by David L. Donoho - Comm. Pure Appl. Math., 2004
"... We consider linear equations y = Φα where y is a given vector in R n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R m. We suppose that the columns of Φ are normalized to unit ℓ 2 norm 1 and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Abstract - Cited by 568 (10 self)
We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that for large n, and for all Φ’s except a negligible fraction, the following property holds: For every y having a representation y = Φα0 by a coefficient vector α0 ∈ R^m with fewer than ρ · n nonzeros, the solution α1 of the ℓ1 minimization problem min ‖α‖1 subject to Φα = y is unique and equal to α0. In contrast, heuristic attempts to sparsely solve such systems – greedy algorithms and thresholding – perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices.
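As a hedged illustration of the problem in this abstract (not Donoho's own experiments), the minimization min ‖α‖1 subject to Φα = y can be rewritten as a linear program by splitting α into nonnegative parts. The problem sizes, the random Gaussian Φ, and the use of SciPy's linprog are assumptions made for the sketch.

import numpy as np
from scipy.optimize import linprog

# Sketch: solve  min ||alpha||_1  s.t.  Phi @ alpha = y  as a linear program.
# Write alpha = u - v with u, v >= 0, so that ||alpha||_1 = sum(u) + sum(v).
# Sizes and the random Phi below are illustrative assumptions.

rng = np.random.default_rng(0)
n, m, k = 30, 80, 4                       # n equations, m unknowns, k nonzeros
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)        # unit l2-norm columns, as in the abstract

alpha0 = np.zeros(m)
alpha0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ alpha0

c = np.ones(2 * m)                        # objective: sum(u) + sum(v)
A_eq = np.hstack([Phi, -Phi])             # Phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m), method="highs")
alpha1 = res.x[:m] - res.x[m:]
print("l1 solution matches sparse alpha0:", np.allclose(alpha1, alpha0, atol=1e-6))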

Citation Context

... to create a dual feasible point y starting from a nearby almost-feasible point y0. It is an instance of the successive projection method for finding feasible points for systems of linear inequalities [1]. Let I0 be the collection of indices 1 ≤ i ≤ m with |〈φi, y0〉| > 1/2, and then set y1 = y0 − P_{I0} y0, where P_{I0} denotes the least-squares projector Φ_{I0}(Φ_{I0}^T Φ_{I0})^{-1} Φ_{I0}^T. In effect, we identify the ind...
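The correction step quoted above can be spelled out numerically. The sketch below, written in my own notation rather than taken from the paper, selects the columns φi with |〈φi, y0〉| > 1/2 and removes from y0 its least-squares component on their span; the random Φ and y0 are assumptions.

import numpy as np

# Sketch of the step y1 = y0 - P_I0 y0, with P_I0 the least-squares projector
# onto the span of the selected columns, computed via a least-squares solve
# rather than the explicit inverse (Phi_I0^T Phi_I0)^{-1}.

rng = np.random.default_rng(1)
n, m = 20, 50
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm columns phi_i
y0 = rng.standard_normal(n)

I0 = np.flatnonzero(np.abs(Phi.T @ y0) > 0.5)   # indices with |<phi_i, y0>| > 1/2
Phi_I0 = Phi[:, I0]
coef, *_ = np.linalg.lstsq(Phi_I0, y0, rcond=None)
y1 = y0 - Phi_I0 @ coef                   # y1 is orthogonal to the selected columns
print("largest remaining correlation on I0:", np.max(np.abs(Phi_I0.T @ y1)))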

Proximal Splitting Methods in Signal Processing

by Patrick L. Combettes, Jean-Christophe Pesquet
"... The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems ..."
Abstract - Cited by 266 (31 self)
The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
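To make the stated connection between projections and proximity operators concrete, here is a minimal sketch (not from the paper): the proximity operator of a scaled ℓ1 norm is soft-thresholding, while the proximity operator of the indicator function of a convex set reduces to the projection onto that set. The test vector and parameter are illustrative.

import numpy as np

# prox_f(x) = argmin_y f(y) + (1/2)||x - y||^2.
# For f = gamma * ||.||_1 this is soft-thresholding; for f the indicator of a
# convex set it is exactly the projection onto that set.

def prox_l1(x, gamma):
    """Soft-thresholding, the prox of gamma * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def prox_indicator_box(x, lo=-1.0, hi=1.0):
    """Prox of the indicator of [lo, hi]^n, i.e. the projection onto the box."""
    return np.clip(x, lo, hi)

x = np.array([2.0, -0.3, 0.7, -1.5])
print(prox_l1(x, gamma=0.5))       # small entries are shrunk to zero
print(prox_indicator_box(x))       # entries are clipped into the box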

Citation Context

...xn. (3) When ⋂_{i=1}^m Ci ≠ ∅, the sequence (xn)n≥0 thus produced converges to a solution to (2) [22]. Projection algorithms have been enriched with many extensions of this basic iteration to solve (2) [8], [39], [41], [87]. Variants have also been proposed to solve more general problems, e.g., that of finding the projection of a signal onto an intersection of convex sets [19], [44], [129]. Beyond such...
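One classical way to compute the projection of a signal onto an intersection of convex sets, rather than merely some feasible point, is Dykstra's algorithm; the two-set sketch below is a generic illustration under assumed sets, not one of the specific variants cited in this context.

import numpy as np

# Sketch of Dykstra's algorithm for projecting a point r onto A ∩ B, here with
# A a box and B a halfspace (illustrative choices).

def P_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def P_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def dykstra(r, P_A, P_B, max_iter=1000, tol=1e-12):
    """Two-set Dykstra iteration; returns (approximately) the projection of r onto A ∩ B."""
    x = np.asarray(r, dtype=float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(max_iter):
        y = P_A(x + p)
        p = x + p - y
        x_new = P_B(y + q)
        q = y + q - x_new
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

a, b = np.array([1.0, 1.0]), 1.0                 # halfspace x + y <= 1
r = np.array([3.0, 2.5])
print("projection of r onto box ∩ halfspace:", dykstra(r, P_box, lambda z: P_halfspace(z, a, b)))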

Solving monotone inclusions via compositions of nonexpansive averaged operators

by Patrick L. Combettes - Optimization , 2004
"... A unified fixed point theoretic framework is proposed to investigate the asymptotic behavior of algorithms for finding solutions to monotone inclusion problems. The basic iterative scheme under consideration involves nonstationary compositions of perturbed averaged nonexpansive operators. The analys ..."
Abstract - Cited by 136 (28 self)
A unified fixed point theoretic framework is proposed to investigate the asymptotic behavior of algorithms for finding solutions to monotone inclusion problems. The basic iterative scheme under consideration involves nonstationary compositions of perturbed averaged nonexpansive operators. The analysis covers proximal methods for common zero problems as well as various splitting methods for finding a zero of the sum of monotone operators.
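A simplified instance of the kind of scheme the abstract analyzes (my own reduction, not the paper's general framework) is the relaxed fixed-point iteration x_{n+1} = x_n + λ (T x_n − x_n) with T an averaged nonexpansive operator, here a composition of two projectors; the sets, the constant relaxation parameter, and the starting point are assumptions.

import numpy as np

# Relaxed fixed-point iteration with T = P_ball ∘ P_hyperplane, a composition of
# projectors and hence an averaged nonexpansive operator. Since the two sets
# intersect, the iterates converge to a point of Fix(T), i.e. of the intersection.

def P_ball(x, radius=2.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def P_hyperplane(x, a, b):
    """Projection onto {z : <a, z> = b}."""
    return x - (a @ x - b) * a / (a @ a)

a, b = np.array([1.0, -1.0, 0.5]), 1.0
T = lambda x: P_ball(P_hyperplane(x, a, b))

x = np.array([5.0, 5.0, 5.0])
lam = 0.8                                  # relaxation parameter in (0, 1]
for _ in range(200):
    x = x + lam * (T(x) - x)

print("limit point:", x)
print("hyperplane residual:", a @ x - b, " norm:", np.linalg.norm(x))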

Citation Context

... set Si, then the operator J_{γi,n Ai} is the projector Pi onto Si and Corollary 4.3 and Remark 4.4 capture various convergence results for projection methods for solving convex feasibility problems, see [9, 19] and the references therein. In particular, if I = {1, . . . , m} is a finite index set, we recover the classical results of [27] for the cyclic projection method x_{n+1} = x_n + µ_n (P_{n (mod m)+1} x_n − ...

Equilibrium programming in Hilbert spaces

by Patrick L. Combettes, Sever A. Hirstoaga - 2005, 117–136
"... Several methods for solving systems of equilibrium problems in Hilbert spaces – and for find-ing best approximations thereof – are presented and their convergence properties are established. The proposed methods include proximal-like block-iterative algorithms for general systems, as well as regular ..."
Abstract - Cited by 89 (4 self)
Several methods for solving systems of equilibrium problems in Hilbert spaces – and for finding best approximations thereof – are presented and their convergence properties are established. The proposed methods include proximal-like block-iterative algorithms for general systems, as well as regularization and splitting algorithms for single equilibrium problems. The problem of constructing approximate equilibria in the case of inconsistent systems is also considered.

Citation Context

...oblems, as well as certain fixed point problems (see also [15]). The above formulation extends this formalism to systems of such problems, covering in particular various forms of feasibility problems [2, 11]. We shall also address the problem of finding a best approximation to a point a ∈ H from the solutions to (1.1), namely project a onto S = ⋂ Si, where (∀i ∈ I) Si = { z ∈ K | (∀y ∈ K) Fi(z, y) ≥ 0 } ...

Designing Structured Tight Frames via an Alternating Projection Method

by Joel A. Tropp, Inderjit S. Dhillon , Robert W. Heath, Jr., Thomas Strohmer , 2003
"... Tight frames, also known as general Welch-BoundEquality sequences, generalize orthonormal systems. Numerous applications---including communications, coding and sparse approximation---require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alterna ..."
Abstract - Cited by 87 (10 self)
Tight frames, also known as general Welch-Bound-Equality sequences, generalize orthonormal systems. Numerous applications---including communications, coding and sparse approximation---require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems, which includes the frame design problem. To apply this method, one only needs to solve a matrix nearness problem that arises naturally from the design specifications. Therefore, it is fast and easy to develop versions of the algorithm that target new design problems. Alternating projection will often succeed even if algebraic constructions are unavailable. To demonstrate ...
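A stripped-down sketch in the spirit of the method described, though not the paper's full algorithm, alternates between the structural constraint (unit-norm columns) and the set of α-tight frames, whose nearest element is obtained from a singular value decomposition. The dimensions and the random starting matrix are assumptions.

import numpy as np

# Alternating projection between
#   S1 = {n x m matrices with unit-norm columns}    (structural constraint)
#   S2 = {alpha-tight frames: X @ X.T = alpha * I},  alpha = m / n.

rng = np.random.default_rng(2)
n, m = 3, 7
alpha = m / n

def P_unit_columns(X):
    """Normalize every column to unit l2 norm."""
    return X / np.linalg.norm(X, axis=0, keepdims=True)

def P_tight(X):
    """Nearest alpha-tight frame: sqrt(alpha) * U @ Vt from the thin SVD of X."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return np.sqrt(alpha) * U @ Vt

X = rng.standard_normal((n, m))
for _ in range(500):
    X = P_tight(P_unit_columns(X))

print("column norms (close to 1 after convergence):", np.linalg.norm(X, axis=0))
print("X @ X.T (exactly alpha * I by the last projection):\n", X @ X.T)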

A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery

by Patrick L. Combettes, Jean-christophe Pesquet - IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING , 2007
"... Under consideration is the large body of signal recovery problems that can be formulated as the problem of minimizing the sum of two (not necessarily smooth) lower semicontinuous convex functions in a real Hilbert space. This generic problem is analyzed and a decomposition method is proposed to so ..."
Abstract - Cited by 86 (22 self)
Under consideration is the large body of signal recovery problems that can be formulated as the problem of minimizing the sum of two (not necessarily smooth) lower semicontinuous convex functions in a real Hilbert space. This generic problem is analyzed and a decomposition method is proposed to solve it. The convergence of the method, which is based on the Douglas-Rachford algorithm for monotone operator-splitting, is obtained under general conditions. Applications to non-Gaussian image denoising in a tight frame are also demonstrated.
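To show the shape of the Douglas-Rachford iteration that the method builds on, here is a generic textbook-style sketch (not the paper's exact scheme) that minimizes the sum of a quadratic data-fit term and an ℓ1 penalty by alternating the two proximity operators; the problem data and parameters are assumptions.

import numpy as np

# Douglas-Rachford splitting for  min_x f(x) + g(x)  with
#   f(x) = 0.5 * ||A x - b||^2   and   g(x) = lam * ||x||_1.
# Iteration: x_n = prox_{gamma f}(z_n),  z_{n+1} = z_n + prox_{gamma g}(2 x_n - z_n) - x_n.

rng = np.random.default_rng(3)
p, q = 40, 60
A = rng.standard_normal((p, q))
b = rng.standard_normal(p)
lam, gamma = 0.5, 1.0

def prox_f(v):
    """prox of gamma*f: solve (I + gamma * A^T A) x = v + gamma * A^T b."""
    return np.linalg.solve(np.eye(q) + gamma * A.T @ A, v + gamma * A.T @ b)

def prox_g(v):
    """prox of gamma*g: soft-thresholding at gamma * lam."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

z = np.zeros(q)
for _ in range(300):
    x = prox_f(z)
    y = prox_g(2 * x - z)
    z = z + y - x

# x and y coincide at convergence; y is the sparse estimate.
print("objective:", 0.5 * np.linalg.norm(A @ y - b) ** 2 + lam * np.abs(y).sum())
print("nonzeros:", int(np.count_nonzero(y)))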

Citation Context

... + (1/2)‖x − y‖², (2) where ιC is the indicator function of C, i.e., (∀y ∈ H) ιC(y) = 0 if y ∈ C, and +∞ otherwise. Convex projection methods exploit the remarkable properties of projection operators [6], [19] and, in order to broaden the scope of these methods, it is natural to introduce more general operators with similar properties. Such an extension was proposed by Moreau in 1962 [39]. Under the ...

The Hybrid Steepest Descent Method for the Variational Inequality Problem over the Intersection of Fixed Point Sets of Nonexpansive Mappings

by Isao Yamada , 2001
"... This paper presents a simple algorithmic solution to the variational inequality prob-lem defined over the nonempty intersection of multiple fixed point sets of nonexpansive mappings in a real Hilbert space. The algorithmic solution is named the hybrid steepest descent method, because it is construct ..."
Abstract - Cited by 86 (6 self)
This paper presents a simple algorithmic solution to the variational inequality problem defined over the nonempty intersection of multiple fixed point sets of nonexpansive mappings in a real Hilbert space. The algorithmic solution is named the hybrid steepest descent method, because it is constructed by blending important ideas in the steepest descent method and in the fixed point theory, and generates a sequence converging strongly to the solution of the problem. The remarkable applicability of this method to the convexly constrained generalized pseudoinverse problem as well as to the convex feasibility problem is demonstrated by constructing nonexpansive mappings whose fixed point sets are the feasible sets of the problems.
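A small sketch of the iteration pattern described here, u_{n+1} = T(u_n) − λ_{n+1} μ F(T(u_n)), with T a composition of projectors encoding the feasible set and F the gradient of a simple quadratic; the objective, the sets, and the step-size schedule are illustrative assumptions, not the paper's examples.

import numpy as np

# Hybrid steepest descent pattern for minimizing 0.5 * ||x - c||^2 over
# Fix(T) = ball ∩ halfspace, with T = P_ball ∘ P_halfspace nonexpansive and
# F(x) = x - c the gradient. Step sizes lam_n = 1/n satisfy lam_n -> 0 and
# sum(lam_n) = infinity.

def P_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def P_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

a, b = np.array([1.0, 1.0]), 0.5
T = lambda x: P_ball(P_halfspace(x, a, b))

c = np.array([2.0, -1.0])
F = lambda x: x - c
mu = 1.0

u = np.zeros(2)
for n in range(1, 2001):
    t = T(u)
    u = t - (1.0 / n) * mu * F(t)

print("approximate minimizer of 0.5*||x - c||^2 over the feasible set:", u)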

Phase retrieval, error reduction algorithm, and Fienup variants: A view from convex optimization

by Heinz H. Bauschke, Patrick L. Combettes, D. Russell Luke , 2002
"... ..."
Abstract - Cited by 82 (19 self) - Add to MetaCart
Abstract not found

A Weak-to-Strong Convergence Principle for Fejér-Monotone Methods in Hilbert Spaces

by Heinz H. Bauschke, Patrick L. Combettes , 2001
"... We consider a wide class of iterative methods arising in numerical mathematics and optimization that are known to converge only weakly. Exploiting an idea originally proposed by Haugazeau, we present a simple modification of these methods that makes them strongly convergent without additional assump ..."
Abstract - Cited by 80 (12 self)
We consider a wide class of iterative methods arising in numerical mathematics and optimization that are known to converge only weakly. Exploiting an idea originally proposed by Haugazeau, we present a simple modification of these methods that makes them strongly convergent without additional assumptions. Several applications are discussed.