Results 1 - 10 of 17
Designing Structured Tight Frames via an Alternating Projection Method
, 2003
"... Tight frames, also known as general Welch-BoundEquality sequences, generalize orthonormal systems. Numerous applications---including communications, coding and sparse approximation---require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alterna ..."
Abstract
-
Cited by 87 (10 self)
Abstract: Tight frames, also known as general Welch-Bound-Equality sequences, generalize orthonormal systems. Numerous applications, including communications, coding, and sparse approximation, require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems, which includes the frame design problem. To apply this method, one only needs to solve a matrix nearness problem that arises naturally from the design specifications. Therefore, it is fast and easy to develop versions of the algorithm that target new design problems. Alternating projection will often succeed even if algebraic constructions are unavailable. To demonstrate ...
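As a rough illustration of the alternating idea (not the paper's full algorithm), the sketch below alternates between the spectral projection onto alpha-tight frames and a structural projection; taking unit-norm columns as the structural constraint, and all function names, are illustrative assumptions.

    import numpy as np

    def nearest_tight_frame(F, alpha):
        # Closest alpha-tight frame in the Frobenius norm: alpha * U @ Vt,
        # where U, Vt come from the thin SVD of F (spectral projection).
        U, _, Vt = np.linalg.svd(F, full_matrices=False)
        return alpha * U @ Vt

    def normalize_columns(F):
        # Structural projection assumed here: force unit-norm columns.
        return F / np.linalg.norm(F, axis=0, keepdims=True)

    def alternating_projection(d, N, iters=500, seed=0):
        rng = np.random.default_rng(seed)
        F = normalize_columns(rng.standard_normal((d, N)))
        alpha = np.sqrt(N / d)  # tightness constant for N unit-norm vectors in dimension d
        for _ in range(iters):
            F = normalize_columns(nearest_tight_frame(F, alpha))
        return F

    F = alternating_projection(3, 7)
    print(np.round(F @ F.T, 3))  # approximately (7/3) * identity when the iteration converges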
Lubich: Dynamical Low Rank Approximation
, 2005
"... Abstract. For the low-rank approximation of time-dependent data matrices and of solutions to matrix differential equations, an increment-based computational approach is proposed and analyzed. In this method, the derivative is projected onto the tangent space of the manifold of rank-r matrices at the ..."
Abstract
-
Cited by 35 (7 self)
Abstract: For the low-rank approximation of time-dependent data matrices and of solutions to matrix differential equations, an increment-based computational approach is proposed and analyzed. In this method, the derivative is projected onto the tangent space of the manifold of rank-r matrices at the current approximation. With an appropriate decomposition of rank-r matrices and their tangent matrices, this yields nonlinear differential equations that are well suited for numerical integration. The error analysis compares the result with the pointwise best approximation in the Frobenius norm. It is shown that the approach gives locally quasi-optimal low-rank approximations. Numerical experiments illustrate the theoretical results.
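A minimal sketch of the tangent-space projection the abstract refers to, using the standard formula for the manifold of rank-r matrices at Y = U S V^T with orthonormal U and V; the time integrator built on top of it is not reproduced here.

    import numpy as np

    def tangent_projection(U, V, A):
        # Orthogonal projection of A onto the tangent space of the rank-r manifold
        # at Y = U S V^T (U, V with orthonormal columns):
        #   P(A) = U U^T A + A V V^T - U U^T A V V^T
        UtA = U.T @ A
        AV = A @ V
        return U @ UtA + AV @ V.T - U @ (UtA @ V) @ V.T

    # quick sanity check: an orthogonal projection is idempotent
    rng = np.random.default_rng(1)
    m, n, r = 8, 6, 2
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    A = rng.standard_normal((m, n))
    P1 = tangent_projection(U, V, A)
    print(np.allclose(P1, tangent_projection(U, V, P1)))  # True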
An introduction to a class of matrix cone programming
"... In this paper, we define a class of linear conic programming (which we call matrix cone ..."
Abstract
-
Cited by 16 (5 self)
Abstract: In this paper, we define a class of linear conic programming (which we call matrix cone ...
The Euclidean distance degree of an algebraic variety
, 2013
"... The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. For instance, for varieties of low rank matrices, the Eckart-Young Theorem states that this map is given by the singular value decomposition. This article develops a theory of such nearest ..."
Abstract
-
Cited by 14 (2 self)
Abstract: The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. For instance, for varieties of low rank matrices, the Eckart-Young Theorem states that this map is given by the singular value decomposition. This article develops a theory of such nearest point maps from the perspective of computational algebraic geometry. The Euclidean distance degree of a variety is the number of critical points of the squared distance to a generic point outside the variety. Focusing on varieties seen in applications, we present numerous tools for exact computations.
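The Eckart-Young example mentioned above can be made concrete in a few lines; this sketch only illustrates the nearest-point map for low-rank matrices, not the algebraic machinery of the paper.

    import numpy as np

    def nearest_rank_r(A, r):
        # Eckart-Young: the closest rank-r matrix in the Frobenius (or spectral)
        # norm is obtained by truncating the SVD to the r largest singular values.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r, :]

    A = np.random.default_rng(2).standard_normal((5, 4))
    B = nearest_rank_r(A, 2)
    print(np.linalg.matrix_rank(B), np.linalg.norm(A - B))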
An EZI method to reduce the rank of a correlation matrix in financial modelling
- Appl. Math. Finance
, 2006
"... To link to this Article: DOI: 10.1080/13504860600658976 ..."
Abstract
-
Cited by 7 (0 self)
DOI: 10.1080/13504860600658976
A partial proximal point algorithm for nuclear norm regularized matrix least squares problems with polyhedral constraints
, 2012
"... We introduce a partial proximal point algorithm for solving nuclear norm regularized matrix least squares problems with equality and inequality constraints. The inner subproblems, reformulated as a system of semismooth equations, are solved by an inexact smoothing Newton method, which is proved to b ..."
Abstract
-
Cited by 7 (2 self)
Abstract: We introduce a partial proximal point algorithm for solving nuclear norm regularized matrix least squares problems with equality and inequality constraints. The inner subproblems, reformulated as a system of semismooth equations, are solved by an inexact smoothing Newton method, which is proved to be quadratically convergent under a constraint non-degeneracy condition, together with the strong semismoothness property of the singular value thresholding operator. Numerical experiments on a variety of problems, including those arising from low-rank approximations of transition matrices, show that our algorithm is efficient and robust.
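For reference, the singular value thresholding operator named in the abstract is the proximal mapping of the nuclear norm; a minimal sketch follows (the partial proximal point algorithm and the smoothing Newton solver themselves are not reproduced).

    import numpy as np

    def singular_value_thresholding(A, tau):
        # Prox of tau * nuclear norm: soft-threshold the singular values of A.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    A = np.random.default_rng(3).standard_normal((6, 4))
    print(np.linalg.svd(singular_value_thresholding(A, 1.0), compute_uv=False))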
Supplement: Alternating least-squares for low-rank matrix reconstruction
, 2011
"... Abstract-For reconstruction of low-rank matrices from undersampled measurements, we develop an iterative algorithm based on least-squares estimation. While the algorithm can be used for any low-rank matrix, it is also capable of exploiting a-priori knowledge of matrix structure. In particular, we c ..."
Abstract
-
Cited by 7 (2 self)
Abstract: For reconstruction of low-rank matrices from undersampled measurements, we develop an iterative algorithm based on least-squares estimation. While the algorithm can be used for any low-rank matrix, it is also capable of exploiting a priori knowledge of matrix structure. In particular, we consider linearly structured matrices, such as Hankel and Toeplitz, as well as positive semidefinite matrices. The performance of the algorithm, referred to as alternating least-squares (ALS), is evaluated by simulations and compared to the Cramér-Rao bounds.
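A bare-bones ALS sketch under simplifying assumptions: entrywise sampling (a mask of observed entries) rather than the general undersampled measurement model of the paper, and no Hankel/Toeplitz or positive semidefinite structure; the factor model Y ≈ L R^T and the small ridge term lam are illustrative choices.

    import numpy as np

    def als_complete(Y, mask, r, iters=100, lam=1e-3, seed=0):
        # Alternating least-squares: fit Y ~ L @ R.T on the observed entries
        # (mask == True) by solving a small ridge-regularized LS problem per row.
        rng = np.random.default_rng(seed)
        m, n = Y.shape
        L = rng.standard_normal((m, r))
        R = rng.standard_normal((n, r))
        for _ in range(iters):
            for i in range(m):          # update row i of L with R fixed
                Ri = R[mask[i]]
                L[i] = np.linalg.solve(Ri.T @ Ri + lam * np.eye(r), Ri.T @ Y[i, mask[i]])
            for j in range(n):          # update row j of R with L fixed
                Lj = L[mask[:, j]]
                R[j] = np.linalg.solve(Lj.T @ Lj + lam * np.eye(r), Lj.T @ Y[mask[:, j], j])
        return L @ R.T

    # demo: recover a rank-2 matrix from roughly 60% of its entries
    rng = np.random.default_rng(1)
    X = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
    mask = rng.random(X.shape) < 0.6
    Xhat = als_complete(X * mask, mask, r=2)
    print(np.linalg.norm(Xhat - X) / np.linalg.norm(X))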
Approximate Factorization of Polynomials in Many Variables and Other Problems in Approximate Algebra via Singular Value Decomposition Methods.
, 2005
"... Aspects of the problem of finding approximate factors of a polynomial in many vari-ables are considered. The idea is that a polynomial may be the result of a computation where a reducible polynomial was expected but due to introduction of floating point coef-ficients or measurement errors the polyno ..."
Abstract
-
Cited by 2 (1 self)
Abstract: Aspects of the problem of finding approximate factors of a polynomial in many variables are considered. The idea is that a polynomial may be the result of a computation where a reducible polynomial was expected, but due to the introduction of floating point coefficients or measurement errors the polynomial is irreducible. Introduction of such errors will nearly always cause polynomials to become irreducible. Thus, it is important to be able to decide whether the computed polynomial is near to a polynomial that factors (and hence should be treated as reducible). If this is the case, one would like to be able to find a closest polynomial that does indeed factor. Although this problem is computable ...
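A small sketch of the generic SVD ingredient such methods rely on: the trailing singular values of a suitably constructed matrix reveal whether it is close to one of lower rank, since the smallest singular value is the 2-norm distance to the nearest rank-deficient matrix. The matrix used here is an arbitrary example, not the specific coefficient matrices the thesis builds from the polynomial.

    import numpy as np

    def numerical_rank(A, tol):
        # Count singular values above tol; the singular values below tol measure
        # how far A is (in the 2-norm) from the nearest matrix of lower rank.
        s = np.linalg.svd(A, compute_uv=False)
        return int(np.sum(s > tol)), s

    # a nearly rank-1 matrix: exact rank 3, but numerically rank 1 at tolerance 1e-6
    A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
    A += 1e-8 * np.random.default_rng(0).standard_normal((3, 3))
    rank, s = numerical_rank(A, tol=1e-6)
    print(rank, s)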