Results 1–10 of 11
A tensor-based algorithm for high-order graph matching
In CVPR, 2009
Abstract

Cited by 84 (3 self)
Abstract—This paper addresses the problem of establishing correspondences between two sets of visual features using higher-order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method, and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.
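The multidimensional power iteration at the core of this approach can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the affinity-tensor construction and the final projection onto assignment matrices are omitted, and the greedy discretization step is left to the caller.

```python
import numpy as np

def hypergraph_match_scores(T, iters=50):
    """Higher-order power iteration on a 3rd-order affinity tensor.

    T has shape (m, m, m), where m is the number of candidate
    assignments; T[i, j, k] scores the joint compatibility of
    assignments i, j, k. Returns a nonnegative score vector; a
    discrete matching is then obtained by projecting onto the
    closest assignment matrix (not shown here)."""
    m = T.shape[0]
    v = np.ones(m) / np.sqrt(m)
    for _ in range(iters):
        w = np.einsum('ijk,j,k->i', T, v, v)  # multilinear product T(., v, v)
        v = w / np.linalg.norm(w)
    return v
```

On an exactly rank-1 affinity tensor the iteration recovers the underlying score vector, which is the relaxed analogue of the spectral matching solution.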
On the best rank-1 approximation of higher-order supersymmetric tensors
In SIAM J. Matrix Anal. Appl., 2002
Abstract

Cited by 76 (1 self)
Abstract. Recently the problem of determining the best, in the least-squares sense, rank-1 approximation to a higher-order tensor was studied and an iterative method that extends the well-known power method for matrices was proposed for its solution. This higher-order power method is also proposed for the special but important class of supersymmetric tensors, with no change. A simplified version, adapted to the special structure of the supersymmetric problem, is deemed unreliable, as its convergence is not guaranteed. The aim of this paper is to show that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications. The use of this version entails significant savings in computational complexity as compared to the unconstrained higher-order power method. Furthermore, a novel method for initializing the iterative process is developed which has been observed to yield an estimate that lies closer to the global optimum than the initialization suggested before. Moreover, its proximity to the global optimum is a priori quantifiable. In the course of the analysis, some important properties that the supersymmetry of a tensor implies for its square matrix unfolding are also studied.
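The symmetric higher-order power method (S-HOPM) discussed in this abstract can be sketched as below. This is an illustrative version for a 3rd-order supersymmetric tensor; as the paper notes, convergence is only guaranteed under the convexity/concavity assumptions on the induced functional, and the initialization shown here is a plain starting vector, not the paper's proposed initialization.

```python
import numpy as np

def shopm(T, x0, iters=100):
    """Symmetric higher-order power method sketch for a supersymmetric
    3rd-order tensor T. The best symmetric rank-1 approximation is
    lam * (x ⊗ x ⊗ x) with lam = T(x, x, x)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)  # symmetric update T(., x, x)
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x
```

For a tensor that is exactly lam * (a ⊗ a ⊗ a), the iteration recovers both lam and the direction a (up to sign).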
Compact representation of multidimensional data using tensor rank-one decomposition
In Proc. ICPR, volume I, 2004
Abstract

Cited by 19 (2 self)
This paper presents a new approach for representing multidimensional data by a compact number of bases. We consider the multidimensional data as tensors instead of matrices or vectors, and propose a Tensor Rank-One Decomposition (TROD) algorithm that decomposes Nth-order data into a collection of rank-1 tensors based on multilinear algebra. By applying this algorithm to image sequence compression, we obtain much higher quality images at the same compression ratio as Principal Component Analysis (PCA). Experiments with gray-level and color video sequences illustrate the validity of this approach.
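A rank-one decomposition of the kind described can be sketched with a greedy deflation scheme: fit one rank-1 term by alternating updates, subtract it, and repeat. This is an assumption-laden sketch, not the paper's TROD algorithm; the alternating-update inner loop and the deflation strategy are simplifications.

```python
import numpy as np

def rank1_als(T, iters=30):
    """One rank-1 factor of a 3rd-order tensor via alternating updates."""
    a = np.ones(T.shape[0]); b = np.ones(T.shape[1]); c = np.ones(T.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)  # c carries the scale
    return a, b, c

def greedy_rank1_decomposition(T, r):
    """Peel off r rank-1 terms from the residual (TROD-style sketch)."""
    R = T.copy(); factors = []
    for _ in range(r):
        a, b, c = rank1_als(R)
        factors.append((a, b, c))
        R = R - np.einsum('i,j,k->ijk', a, b, c)
    return factors, R
```

On an exactly rank-1 input a single term already drives the residual to zero, which is the compression idea in miniature: store a few factor vectors instead of the full data tensor.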
Jacobi algorithm for the best low multilinear rank approximation of symmetric tensors
In SIAM J. Matrix Anal. Appl., 2013
Abstract

Cited by 5 (1 self)
The problem discussed in this paper is the symmetric best low multilinear rank approximation of third-order symmetric tensors. We propose an algorithm based on Jacobi rotations, for which symmetry is preserved at each iteration. Two numerical examples are provided indicating the need for such algorithms. An important part of the paper consists of proving that our algorithm converges to stationary points of the objective function. This can be considered an advantage of the proposed algorithm over existing symmetry-preserving algorithms in the literature.
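For intuition, the classical matrix form of a Jacobi rotation, which this paper lifts to third-order symmetric tensors while preserving symmetry, looks like the sketch below. This is the textbook matrix step only, not the tensor algorithm of the paper.

```python
import numpy as np

def jacobi_rotation(A, p, q):
    """One classical Jacobi rotation zeroing entry (p, q) of a
    symmetric matrix A, returning J^T A J. Symmetry is preserved,
    which is the property the tensor generalization also maintains."""
    if abs(A[p, q]) < 1e-15:
        return A
    tau = (A[q, q] - A[p, p]) / (2 * A[p, q])
    # smaller-magnitude root of t^2 + 2*tau*t - 1 = 0
    t = np.sign(tau) / (abs(tau) + np.sqrt(1 + tau * tau)) if tau != 0 else 1.0
    c = 1 / np.sqrt(1 + t * t); s = t * c
    J = np.eye(A.shape[0]); J[p, p] = c; J[q, q] = c; J[p, q] = s; J[q, p] = -s
    return J.T @ A @ J
```

For a 2x2 symmetric matrix, one rotation fully diagonalizes it, leaving the eigenvalues on the diagonal.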
Block tensors and symmetric embeddings
, 2010
Abstract

Cited by 4 (0 self)
Abstract. Well-known connections exist between the singular value decomposition of a matrix A and the Schur decomposition of its symmetric embedding sym(A) = [0 A; Aᵀ 0]. In particular, if σ is a singular value of A then +σ and −σ are eigenvalues of the symmetric embedding. The top and bottom halves of sym(A)'s eigenvectors are singular vectors for A. Power methods applied to A can be related to power methods applied to sym(A). The rank of sym(A) is twice the rank of A. In this paper we develop similar connections for tensors by building on L.-H. Lim's variational approach to tensor singular values and vectors. We show how to embed a general order-d tensor A into an order-d symmetric tensor sym(A). Through the embedding we relate power methods for A's singular values to power methods for sym(A)'s eigenvalues. Finally, we connect the multilinear and outer product rank of A to the multilinear and outer product rank of sym(A). Key words. tensor, block tensor, symmetric tensor, tensor rank. AMS subject classifications. 15A18, 15A69, 65F15.
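The matrix case stated in the abstract is easy to verify numerically (a small check of the stated facts only; the tensor embedding that is the paper's contribution is not shown):

```python
import numpy as np

# Symmetric embedding of a matrix: sym(A) = [0 A; A^T 0].
# Each singular value sigma of A yields the eigenvalue pair +sigma, -sigma,
# and rank(sym(A)) = 2 * rank(A).
A = np.array([[5.0, 0.0], [0.0, 2.0], [0.0, 0.0]])   # 3 x 2, sigma = 5, 2
S = np.block([[np.zeros((3, 3)), A], [A.T, np.zeros((2, 2))]])
sigma = np.linalg.svd(A, compute_uv=False)           # singular values of A
eig = np.sort(np.linalg.eigvalsh(S))                 # eigenvalues of sym(A)
```

Here the 5x5 embedding has eigenvalues ±5, ±2 and one zero, matching the ±σ pairing, and its rank is twice that of A.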
SuperMatching: Feature Matching using Supersymmetric Geometric Constraints
Abstract

Cited by 4 (1 self)
Abstract—Feature matching is a challenging problem at the heart of numerous computer graphics and computer vision applications. We present the SuperMatching algorithm for finding correspondences between two sets of features. It does so by considering triples or higher-order tuples of points, going beyond the pointwise and pairwise approaches typically used. SuperMatching is formulated using a supersymmetric tensor representing an affinity metric that takes into account feature similarity and geometric constraints between features: feature matching is cast as a higher-order graph matching problem. SuperMatching takes advantage of supersymmetry to devise an efficient sampling strategy for estimating the affinity tensor, as well as to store the estimated tensor compactly. Matching is performed by an efficient higher-order power iteration approach that takes advantage of this compact representation. Experiments on both synthetic and real data show that SuperMatching provides more accurate feature matching than other state-of-the-art approaches for a wide range of 2D and 3D features, with competitive computational cost. Index Terms—Feature matching, geometric constraints, supersymmetric tensor.
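The compact-storage idea can be illustrated with a toy container that keeps one value per sorted index tuple and serves it for every permutation of the indices (an illustrative sketch, not the paper's data structure):

```python
class SupersymmetricTensor:
    """Compact sparse storage exploiting supersymmetry: a value is kept
    once per sorted index tuple and shared by all d! permutations of an
    order-d index, so only one entry is stored per unordered tuple."""

    def __init__(self):
        self._data = {}

    def __setitem__(self, idx, value):
        self._data[tuple(sorted(idx))] = value

    def __getitem__(self, idx):
        # Unstored entries are treated as zero affinity.
        return self._data.get(tuple(sorted(idx)), 0.0)
```

Writing one triple makes all six permutations readable while storing a single entry, which is the source of the memory saving the abstract mentions.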
Multi-target Tracking by Rank-1 Tensor Approximation
Abstract

Cited by 2 (0 self)
In this paper we formulate multi-target tracking (MTT) as a rank-1 tensor approximation problem and propose an ℓ1-norm tensor power iteration solution. In particular, a high-order tensor is constructed based on trajectories in the time window, with each tensor element as the affinity of the corresponding trajectory candidate. The local assignment variables are the ℓ1-normalized vectors, which are used to approximate the rank-1 tensor. Our approach provides a flexible and effective formulation where both pairwise and high-order association energies can be used expediently. We also show the close relation between our formulation and the multidimensional assignment (MDA) model. To solve the optimization in the rank-1 tensor approximation, we propose an algorithm that iteratively powers the intermediate solution and then applies an ℓ1 normalization. Aside from effectively capturing high-order motion information, the proposed solver runs efficiently with proven convergence. The experimental validations are conducted on two challenging datasets, and our method demonstrates promising performance on both.
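The power-then-ℓ1-normalize iteration described here can be sketched as below. This is a simplified illustration on a generic nonnegative affinity tensor, with one factor per mode kept on the probability simplex; the paper's actual solver works on structured local assignment variables, which are not modeled here.

```python
import numpy as np

def l1_tensor_power(T, iters=50):
    """ℓ1-norm tensor power iteration sketch: approximate a nonnegative
    3rd-order affinity tensor T by a rank-1 tensor x1 ⊗ x2 ⊗ x3, where
    each factor is ℓ1-normalized (a distribution over local choices)."""
    xs = [np.ones(n) / n for n in T.shape]
    for _ in range(iters):
        xs[0] = np.einsum('ijk,j,k->i', T, xs[1], xs[2])
        xs[0] /= xs[0].sum()                  # ℓ1 normalization
        xs[1] = np.einsum('ijk,i,k->j', T, xs[0], xs[2])
        xs[1] /= xs[1].sum()
        xs[2] = np.einsum('ijk,i,j->k', T, xs[0], xs[1])
        xs[2] /= xs[2].sum()
    return xs
```

With one dominant affinity entry, the factors concentrate their mass on the indices of that entry, which is the rank-1 reading of the best joint association.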
A Summary of Recent Research on the Nitinol Alloys and Their Potential Application in Ocean Engineering. In Ocean Engineering, 1968
Abstract

Cited by 1 (0 self)
Algebraic grid generation on trimmed parametric surface using non-self-overlapping planar Coons patch
Analysis of the Prewhitened Constant Modulus Cost Function
In Proc. IEEE ICASSP, 2002
Abstract
We provide an analysis of the constant modulus (CM) cost function under the assumption of a white equalizer input. This can be achieved by means of an adaptive prewhitening all-pole filter and has been suggested in previous works as a means for both MSE improvement and DFE cold start-up. For white inputs, it is seen that CM-optimizing the spherical component of the equalizer parameter vector is equivalent to minimizing the fourth moment of its output, regardless of the value of the radial component. This leads to an eigenvector interpretation of prewhitened CM receivers, and to a blind initialization procedure from an eigenvector of the quadricovariance matrix of the whitened data. Connections with iterative eigenvector-based schemes are also explored.
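A toy numerical check of the white-input claim (illustrative only; the two-tap equalizer and BPSK-like input below are assumptions, not the paper's setup): for a unit-norm equalizer driven by white unit-variance input, the output second moment E[y²] is the same in every direction, so optimizing the CM cost over the spherical component reduces to comparing fourth moments E[y⁴].

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=200_000)     # white, unit-variance input

def output_moments(w):
    """Second and fourth moments of y[n] = w0*s[n] + w1*s[n-1]."""
    y = w[0] * s[1:] + w[1] * s[:-1]
    return np.mean(y ** 2), np.mean(y ** 4)

# Two unit-norm directions: E[y^2] matches for both (whiteness), while
# E[y^4] differs, so only the fourth moment discriminates directions.
m2a, m4a = output_moments(np.array([0.6, 0.8]))
m2b, m4b = output_moments(np.array([1.0, 0.0]))
```

For the BPSK-like input, theory gives E[y²] = 1 for both directions, with E[y⁴] = 1.9216 for the mixed tap vector versus 1 for the single-tap one.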
Higher-Order Tensor-Based Method for Delayed Exponential Fitting
Abstract
Abstract—We present subspace-based schemes for the estimation of the poles (angular frequencies and damping factors) of a sum of damped and delayed sinusoids. In our model, each component is supported over a different time frame, depending on the delay parameter. Classical subspace-based methods are not suited to handle signals with varying time supports. In this contribution, we propose solutions based on the approximation of a partially structured Hankel-type tensor onto which the data are mapped. We show, by means of several examples, that the approach based on the best rank-(R1, R2, R3) approximation of the data tensor outperforms the current tensor- and matrix-based techniques in terms of the accuracy of the angular frequency and damping factor estimates, especially in difficult scenarios such as the low signal-to-noise ratio regime and closely spaced sinusoids. Index Terms—Conditional Cramér–Rao bound (CCRB), damped and delayed sinusoids, higher-order tensor, rank reduction, singular value decomposition (SVD), subspace-based parameter estimation.
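The Hankel mapping underlying such subspace-based schemes can be sketched in the matrix case (the paper's partially structured Hankel-type tensor is a higher-order generalization not shown here; the signal below is an illustrative assumption):

```python
import numpy as np

def hankel_matrix(x, L):
    """Map a length-N signal onto an L x (N-L+1) Hankel matrix,
    the basic structure behind subspace-based exponential fitting;
    for a noiseless sum of p (damped) exponentials its rank is p."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

n = np.arange(20)
x = 0.9 ** n + (-0.7) ** n          # sum of two real damped exponentials
H = hankel_matrix(x, 8)
```

The low-rank structure of H is what pole-estimation methods exploit: the rank equals the number of exponential modes, and the dominant subspace encodes the poles.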