Results 1 – 10 of 507
New spectral methods for ratio cut partition and clustering
IEEE Trans. on Computer-Aided Design, 1992
Cited by 296 (17 self)
"... Partitioning of circuit netlists is important in many phases of VLSI design, ranging from layout to testing and hardware simulation. The ratio cut objective function [29] has received much attention since it naturally captures both min-cut and equipartition, the two traditional goals of partitioning. In this paper, we show that the second smallest eigenvalue of a matrix derived from the netlist gives a provably good approximation of the optimal ratio cut partition cost. We also demonstrate that fast Lanczos-type methods for the sparse symmetric eigenvalue problem are a robust basis ..."
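The "second smallest eigenvalue" this abstract refers to is the Fiedler value of the graph Laplacian, and splitting by the sign of the corresponding eigenvector is the classic spectral bipartition. A minimal sketch follows; the 6-node graph (two triangles joined by one edge) is an illustrative toy, not from the paper:

```python
# Sketch: spectral bipartition via the Fiedler vector of the graph Laplacian.
# The edge list below is a made-up example graph, not from the paper.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A      # graph Laplacian L = D - A
vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
fiedler = vecs[:, 1]                # eigenvector of the second-smallest eigenvalue
part = fiedler >= 0                 # bipartition by sign of the Fiedler entries
```

For this graph the sign pattern separates the two triangles, cutting only the single bridge edge, which is exactly the behavior the ratio-cut approximation result is about.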
Parallel Numerical Linear Algebra
1993
Cited by 773 (23 self)
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem ..."
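The abstract's running example, matrix multiplication, is usually presented in blocked (tiled) form, since the tiles of the product are independent units of work. A minimal sketch under that standard formulation (block size and shapes are arbitrary illustrative choices):

```python
# Sketch: blocked matrix multiplication. Each (i, j) tile of C depends only on
# row-tiles of A and column-tiles of B, so tiles can be assigned to parallel
# workers; here they are computed serially for clarity.
import numpy as np

def blocked_matmul(A, B, b=16):
    n, kk = A.shape
    m = B.shape[1]
    C = np.zeros((n, m))
    for i in range(0, n, b):
        for j in range(0, m, b):
            for p in range(0, kk, b):   # accumulate over the inner dimension
                C[i:i+b, j:j+b] += A[i:i+b, p:p+b] @ B[p:p+b, j:j+b]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 48))
B = rng.standard_normal((48, 32))
C = blocked_matmul(A, B)
```

The slicing handles ragged edge tiles automatically, so the shapes need not be multiples of the block size.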
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
Comm. Pure Appl. Math, 2004
Cited by 568 (10 self)
"... We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2-norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ... In contrast, heuristic attempts to sparsely solve such systems – greedy algorithms and thresholding – perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues ..."
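The minimal-ℓ1 solution of y = Φα can be computed as a linear program (basis pursuit) by splitting α into positive and negative parts. A small sketch with toy dimensions, far from the asymptotic regime the paper analyzes:

```python
# Sketch: basis pursuit (min ||alpha||_1 s.t. Phi @ alpha = y) as an LP.
# Sizes, seed, and the sparse ground truth are illustrative choices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 10, 30
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)       # unit ell-2-norm columns, as in the paper

alpha0 = np.zeros(m)
alpha0[[2, 17]] = [1.5, -2.0]            # a sparse vector to generate y
y = Phi @ alpha0

# Split alpha = u - v with u, v >= 0; minimize sum(u + v) s.t. Phi (u - v) = y.
c = np.ones(2 * m)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
alpha = res.x[:m] - res.x[m:]
```

The LP optimum is feasible and has ℓ1-norm no larger than that of `alpha0`; the paper's result is that, for most large Φ, it is in fact the sparsest solution itself.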
Laplacian eigenmaps and spectral techniques for embedding and clustering
Proceedings of Neural Information Processing Systems, 2001
Cited by 668 (7 self)
"... Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low-dimensional manifold embedded in ... of the manifold on which the data may possibly reside. Recently, there has been some interest (Tenenbaum et al., 2000; ...). The core algorithm is very simple, has a few local computations and one sparse eigenvalue problem. The solution reflects the intrinsic geometric structure of the manifold. The justification ..."
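The "few local computations and one sparse eigenvalue problem" amount to: build a neighborhood graph, form its Laplacian, and embed using the low eigenvectors. A dense toy sketch (the k-NN construction, 0/1 weights, and circle data are illustrative choices, not the paper's experiments):

```python
# Sketch: a minimal Laplacian-eigenmaps embedding of points on a circle
# (intrinsically 1-D data sitting in R^2). All parameters are illustrative.
import numpy as np

t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)]

# Symmetric k-nearest-neighbour adjacency with simple 0/1 weights.
k = 4
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.zeros_like(D2)
for i in range(len(X)):
    for j in np.argsort(D2[i])[1:k + 1]:   # skip self at distance 0
        W[i, j] = W[j, i] = 1.0

L = np.diag(W.sum(1)) - W                  # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)
Y = vecs[:, 1:3]                           # drop the constant eigenvector; 2-D embedding
```

For a connected graph the smallest eigenvalue is 0 with a constant eigenvector, which is why the embedding starts from the second eigenvector.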
The Quadratic Eigenvalue Problem
2001
Cited by 260 (21 self)
"... We survey the quadratic eigenvalue problem, treating its many applications, its mathematical properties, and a variety of numerical solution techniques. Emphasis is given to exploiting both the structure of the matrices in the problem (dense, sparse, real, complex, Hermitian, skew-Hermitian) ..."
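A standard numerical route for the quadratic eigenvalue problem (λ²M + λC + K)x = 0 is the first companion linearization, which converts it to a generalized eigenproblem of twice the size. A small sketch; the matrices are made-up examples:

```python
# Sketch: solving (lam^2 M + lam C + K) x = 0 via the first companion
# linearization A z = lam B z with z = [x; lam x]. Matrices are illustrative.
import numpy as np
from scipy.linalg import eig

n = 3
rng = np.random.default_rng(1)
M = np.eye(n)
C = rng.standard_normal((n, n)); C = C + C.T
K = rng.standard_normal((n, n)); K = K + K.T

Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])
lam, V = eig(A, B)                  # 2n eigenvalues of the linearized pencil

# Residual of one computed eigenpair against the original quadratic problem.
x = V[:n, 0]
r = (lam[0] ** 2 * M + lam[0] * C + K) @ x
```

The survey's point about structure applies here: different linearizations preserve different properties (symmetry, definiteness), and the companion form above is only the simplest generic choice.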
Parallel Preconditioning with Sparse Approximate Inverses
SIAM J. Sci. Comput., 1996
Cited by 226 (10 self)
"... A parallel preconditioner is presented for the solution of general sparse linear systems of equations. A sparse approximate inverse is computed explicitly, and then applied as a preconditioner to an iterative method. The computation of the preconditioner is inherently parallel, and its application ..."
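The "inherently parallel" computation comes from minimizing ‖AM − I‖_F one column at a time: each column of M solves an independent small least-squares problem over a prescribed sparsity pattern. A minimal static-pattern sketch (using the pattern of A itself is a common heuristic, not necessarily the paper's adaptive strategy; the tridiagonal test matrix is illustrative):

```python
# Sketch: static-pattern sparse approximate inverse via columnwise least squares.
# Each column j is independent of the others, hence embarrassingly parallel.
import numpy as np

n = 8
A = np.eye(n) * 4 + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

M = np.zeros((n, n))
for j in range(n):
    S = np.nonzero(A[:, j])[0]        # allowed nonzeros: sparsity pattern of A
    e = np.zeros(n); e[j] = 1.0
    # min over m supported on S of || A[:, S] m - e_j ||_2
    m_S, *_ = np.linalg.lstsq(A[:, S], e, rcond=None)
    M[S, j] = m_S
```

Applying M inside a Krylov method then costs only one sparse matrix-vector product per iteration, which is the other parallelism-friendly property the abstract alludes to.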
The Sparse Eigenvalue Problem
arXiv:0901.1504v1, 2009
Cited by 1 (0 self)
"... In this paper, we consider the sparse eigenvalue problem, wherein the goal is to obtain a sparse solution to the generalized eigenvalue problem. We achieve this by constraining the cardinality of the solution to the generalized eigenvalue problem and obtain sparse principal component analysis (PCA) ..."
Sparse Regression as a Sparse Eigenvalue Problem
Cited by 10 (1 self)
"... We extend the ℓ0-norm "subspectral" algorithms developed for sparse-LDA [5] and sparse-PCA [6] to more general quadratic costs such as MSE in linear (or kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse ..."
Truncated Power Method for Sparse Eigenvalue Problems
Cited by 32 (1 self)
"... This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k nonzero components. We propose a simple yet effective solution called truncated power method that can approximately solve the underlying nonconvex optimization problem. ..."
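The iteration described here is easy to sketch: a power step, hard truncation to the k largest-magnitude entries, and renormalization. The spiked test matrix and the diagonal-based initialization below are illustrative choices, not taken from the paper:

```python
# Sketch: truncated power iteration for a k-sparse dominant eigenvector.
# Non-convex, so initialization matters; starting from the k largest diagonal
# entries is a simple heuristic used here for the toy example.
import numpy as np

def truncated_power(A, k, iters=100):
    n = A.shape[0]
    x = np.zeros(n)
    x[np.argsort(np.diag(A))[-k:]] = 1 / np.sqrt(k)
    for _ in range(iters):
        y = A @ x                             # power step
        keep = np.argsort(np.abs(y))[-k:]     # k largest-magnitude entries
        t = np.zeros_like(y)
        t[keep] = y[keep]                     # hard truncation to support size k
        x = t / np.linalg.norm(t)             # renormalize
    return x

# Spiked model: dominant eigenvector supported on the first 3 coordinates.
n, k = 20, 3
v = np.zeros(n)
v[:3] = [3.0, 2.0, 1.0]
v /= np.linalg.norm(v)
A = 10 * np.outer(v, v) + np.eye(n)
x = truncated_power(A, k)
```

On this toy spike the iteration locks onto the correct 3-entry support and converges to the planted direction; the paper's contribution is analyzing when such convergence is guaranteed despite the non-convexity.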
Eigenvalue Maximization in Sparse PCA
2008
"... We examine the problem of approximating a positive semidefinite matrix Σ by a dyad xx^T, with a penalty on the cardinality of the vector x. This problem arises in the sparse principal component analysis problem, where a decomposition of Σ involving sparse factors is sought. We express this hard, combinatorial problem as a maximum eigenvalue problem, in which we seek to maximize, over a box, the largest eigenvalue of a symmetric matrix that is linear in the variables. This representation allows us to use the techniques of robust optimization to derive a bound based on semidefinite programming. The quality ..."