Results 11–20 of 458
Evolutionary Spectral Clustering by Incorporating Temporal Smoothness
, 2007
"... Evolutionary clustering is an emerging research area essential to important applications such as clustering dynamic Web and blog contents and clustering data streams. In evolutionary clustering, a good clustering result should fit the current data well, while simultaneously not deviate too dramatica ..."
Abstract

Cited by 89 (8 self)
 Add to MetaCart
Evolutionary clustering is an emerging research area essential to important applications such as clustering dynamic Web and blog contents and clustering data streams. In evolutionary clustering, a good clustering result should fit the current data well, while not deviating too dramatically from the recent history. To fulfill this dual purpose, a measure of temporal smoothness is integrated in the overall measure of clustering quality. In this paper, we propose two frameworks that incorporate temporal smoothness in evolutionary spectral clustering. For both frameworks, we start with intuitions gained from the well-known k-means clustering problem, and then propose and solve corresponding cost functions for the evolutionary spectral clustering problems. Our solutions to the evolutionary spectral clustering problems provide more stable and consistent clustering results that are less sensitive to short-term noise while at the same time being adaptive to long-term cluster drifts. Furthermore, we demonstrate that our methods provide the optimal solutions to the relaxed versions of the corresponding evolutionary k-means clustering problems. Performance experiments over a number of real and synthetic data sets illustrate that our evolutionary spectral clustering methods provide more robust clustering results that are not sensitive to noise and can adapt to data drifts.
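One simple way to realize the snapshot-quality-plus-temporal-smoothness trade-off described above is to blend the current similarity matrix with the previous one before the usual spectral step. The sketch below captures the spirit of the frameworks, not the paper's exact cost functions; the function name and the weight `alpha` are our own.

```python
import numpy as np

def smoothed_spectral_embedding(W_t, W_prev, k, alpha=0.8):
    """Spectral embedding of a temporally smoothed similarity matrix.

    Illustrative sketch: blend the snapshot similarity W_t with the
    previous one W_prev (temporal smoothness), then take the k smallest
    eigenvectors of the normalized Laplacian as usual.
    """
    W = alpha * W_t + (1.0 - alpha) * W_prev          # temporal smoothing
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L_sym)
    return vecs[:, :k]  # cluster the rows of this embedding (e.g. with k-means)
```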
Algorithms for numerical analysis in high dimensions
 SIAM J. Sci. Comput
, 2005
"... Abstract. Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension. The separated representation, introduced previously, allows many operations to be performed with scaling that is formally linear in the dimension. In this ..."
Abstract

Cited by 87 (11 self)
 Add to MetaCart
Abstract. Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension. The separated representation, introduced previously, allows many operations to be performed with scaling that is formally linear in the dimension. In this paper we further develop this representation by: (i) discussing the variety of mechanisms that allow it to be surprisingly efficient; (ii) addressing the issue of conditioning; (iii) presenting algorithms for solving linear systems within this framework; and (iv) demonstrating methods for dealing with antisymmetric functions, as arise in the multiparticle Schrödinger equation in quantum mechanics. Numerical examples are given. Key words. curse of dimensionality; multidimensional function; multidimensional operator; algorithms in high dimensions; separation of variables; separated representation; alternating least squares; separation-rank reduction; separated
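The separated representation the abstract refers to writes a d-dimensional tensor as a short sum of products of one-dimensional components, so storage and per-entry evaluation grow linearly in d rather than exponentially. A minimal sketch (our own names; the paper's algorithms for reducing the separation rank are not shown):

```python
import numpy as np

def separated_entry(s, factors, idx):
    """Evaluate one entry of a d-dimensional tensor stored in separated form:

        T[i1, ..., id] = sum_l s[l] * prod_k factors[k][l][i_k]

    factors is a list of d arrays, each of shape (r, n_k), where r is the
    separation rank. Cost per entry is O(r * d), and total storage is
    O(r * d * n) instead of O(n**d).
    """
    val = 0.0
    for l in range(len(s)):
        term = s[l]
        for k, i in enumerate(idx):
            term *= factors[k][l][i]
        val += term
    return val
```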
Efficient MATLAB computations with sparse and factored tensors
 SIAM JOURNAL ON SCIENTIFIC COMPUTING
, 2007
"... In this paper, the term tensor refers simply to a multidimensional or $N$way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose stori ..."
Abstract

Cited by 80 (15 self)
 Add to MetaCart
(Show Context)
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: A Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
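Coordinate storage as described keeps only the nonzero values and their subscripts; operations such as a tensor-times-vector product then cost O(nnz) rather than O(prod(shape)). A rough sketch of the idea (not the Tensor Toolbox's actual MATLAB interface; names are ours):

```python
import numpy as np

def sptensor_ttv(coords, vals, shape, v, mode):
    """Multiply a sparse tensor in coordinate format by a vector along
    one mode, returning a dense array of one order lower.

    coords: (nnz, d) integer array of subscripts; vals: (nnz,) values.
    Work is proportional to the number of nonzeros, independent of the
    dense size prod(shape).
    """
    out_shape = tuple(s for k, s in enumerate(shape) if k != mode)
    out = np.zeros(out_shape)
    for c, x in zip(coords, vals):
        rest = tuple(c[k] for k in range(len(shape)) if k != mode)
        out[rest] += x * v[c[mode]]
    return out
```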
On the best rank-1 approximation of higher-order supersymmetric tensors
 SIAM J. Matrix Anal. Appl
, 2002
"... Abstract. Recently the problem of determining the best, in the leastsquares sense, rank1 approximation to a higherorder tensor was studied and an iterative method that extends the wellknown power method for matriceswasproposed for itssolution. Thishigherorder power method is also proposed for th ..."
Abstract

Cited by 79 (1 self)
 Add to MetaCart
Abstract. Recently the problem of determining the best, in the least-squares sense, rank-1 approximation to a higher-order tensor was studied and an iterative method that extends the well-known power method for matrices was proposed for its solution. This higher-order power method is also proposed for the special but important class of supersymmetric tensors, with no change. A simplified version, adapted to the special structure of the supersymmetric problem, is deemed unreliable, as its convergence is not guaranteed. The aim of this paper is to show that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications. The use of this version entails significant savings in computational complexity as compared to the unconstrained higher-order power method. Furthermore, a novel method for initializing the iterative process is developed which has been observed to yield an estimate that lies closer to the global optimum than the initialization suggested before. Moreover, its proximity to the global optimum is a priori quantifiable. In the course of the analysis, some important properties that the supersymmetry of a tensor implies for its square matrix unfolding are also studied.
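For an order-3 supersymmetric tensor, the symmetric higher-order power iteration discussed above takes the form x ← T(x, x) / ||T(x, x)||. A bare-bones sketch, with no safeguards; as the abstract notes, convergence of the plain symmetric scheme relies on convexity or concavity of the induced functional:

```python
import numpy as np

def shopm(T, x0, iters=100):
    """Symmetric higher-order power iteration for a rank-1 approximation
    of a (super)symmetric order-3 tensor T:

        x <- T(x, x) / ||T(x, x)||,    lambda = T(x, x, x).

    Illustrative sketch only: no convergence safeguards and none of the
    paper's improved initialization.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)   # contract T with x twice
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x
```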
Multilinear Image Analysis for Facial Recognition
 INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR)
, 2002
"... Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. For facial images, the factors include different facial geometries, expressions, head poses, and lighting conditions. We apply multilinear algebra, the algebra of higherorder tenso ..."
Abstract

Cited by 71 (1 self)
 Add to MetaCart
Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. For facial images, the factors include different facial geometries, expressions, head poses, and lighting conditions. We apply multilinear algebra, the algebra of higher-order tensors, to obtain a parsimonious representation of facial image ensembles which separates these factors. Our representation, called TensorFaces, yields improved facial recognition rates relative to standard eigenfaces.
Sparse image coding using a 3D nonnegative tensor factorization
 In: International Conference of Computer Vision (ICCV
, 2005
"... We introduce an algorithm for a nonnegative 3D tensor factorization for the purpose of establishing a local parts feature decomposition from an object class of images. In the past such a decomposition was obtained using nonnegative matrix factorization (NMF) where images were vectorized before bein ..."
Abstract

Cited by 60 (2 self)
 Add to MetaCart
(Show Context)
We introduce an algorithm for a nonnegative 3D tensor factorization for the purpose of establishing a local parts feature decomposition from an object class of images. In the past such a decomposition was obtained using nonnegative matrix factorization (NMF), where images were vectorized before being factored by NMF. A tensor factorization (NTF), on the other hand, preserves the 2D representations of images and provides a unique factorization (unlike NMF, which is not unique). The resulting "factors" from the NTF factorization are both sparse (as with NMF) and separable, allowing efficient convolution with the test image. Results show a decomposition superior to what NMF can provide on all fronts: degree of sparsity, lack of ghost residue due to invariant parts, and coding efficiency (around an order of magnitude better). Experiments on using the local parts decomposition for face detection with SVM and AdaBoost classifiers demonstrate that the recovered features are discriminative and highly effective for classification.
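A generic way to compute such a nonnegative tensor factorization is with Lee-Seung-style multiplicative least-squares updates applied mode by mode. The sketch below is a standard NTF variant, not necessarily the update rule of the cited paper, and the helper names are ours.

```python
import numpy as np

def kr(B, C):
    """Khatri-Rao (columnwise Kronecker) product."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def ntf(T, R, iters=200, eps=1e-9, seed=0):
    """Rank-R nonnegative 3-way tensor factorization by multiplicative
    updates. Returns nonnegative factors U, V, W with
    T ~= sum_r U[:, r] (outer) V[:, r] (outer) W[:, r].
    """
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    U, V, W = (rng.random((n, R)) + 0.1 for n in (I, J, K))
    for _ in range(iters):
        X1 = T.reshape(I, -1)                     # mode-1 unfolding
        U *= (X1 @ kr(V, W)) / (U @ ((V.T @ V) * (W.T @ W)) + eps)
        X2 = T.transpose(1, 0, 2).reshape(J, -1)  # mode-2 unfolding
        V *= (X2 @ kr(U, W)) / (V @ ((U.T @ U) * (W.T @ W)) + eps)
        X3 = T.transpose(2, 0, 1).reshape(K, -1)  # mode-3 unfolding
        W *= (X3 @ kr(U, V)) / (W @ ((U.T @ U) * (V.T @ V)) + eps)
    return U, V, W
```

The updates keep all factors nonnegative by construction, since each step multiplies by a ratio of nonnegative quantities.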
On local convergence of alternating schemes for optimization of convex problems in the tensor train format
 SIAM J. Numer. Anal
"... Abstract. Alternating linear schemes (ALS), with the Alternating Least Squares algorithm a notable special case, provide one of the simplest and most popular choices for the treatment of optimization tasks by tensor methods. An according adaptation of ALS for the recent TT ( = tensor train) format ( ..."
Abstract

Cited by 56 (4 self)
 Add to MetaCart
(Show Context)
Abstract. Alternating linear schemes (ALS), with the Alternating Least Squares algorithm a notable special case, provide one of the simplest and most popular choices for the treatment of optimization tasks by tensor methods. A corresponding adaptation of ALS for the recent TT (tensor train) format (Oseledets, 2011), known in quantum computations as matrix product states, has recently been investigated in (Holtz, Rohwedder, Schneider, 2012). With the present work, the positive practical experience with TT-ALS is backed up with a local linear convergence theory for the optimization of convex functionals J. The main assumption entering the proof is that the redundancy introduced by the TT parametrization τ matches the null space of the Hessian of the induced functional j = J ◦ τ, and we give conditions under which this assumption can be expected to hold. In particular, this is the case if the TT rank has been correctly estimated. The case of nonconvex functionals J is also briefly discussed. Key words. ALS, high-dimensional optimization, local convergence, matrix product states, nonlinear Gauss-Seidel, tensor product approximation, TT decomposition AMS subject classifications. 15A69, 65K10, 90C06
Computation of the canonical decomposition by means of a simultaneous generalized Schur decomposition
 SIAM J. Matrix Anal. Appl
, 2004
"... Abstract. The canonical decomposition of higherorder tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. ..."
Abstract

Cited by 55 (10 self)
 Add to MetaCart
(Show Context)
Abstract. The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. Necessary and sufficient conditions for the uniqueness of these simultaneous matrix decompositions are derived. In a next step, the problem can be translated into a simultaneous generalized Schur decomposition, with orthogonal unknowns [A.J. van der Veen and A. Paulraj, IEEE Trans. Signal Process., 44 (1996), pp. 1136–1155]. A first-order perturbation analysis of the simultaneous generalized Schur decomposition is carried out. We discuss some computational techniques (including a new Jacobi algorithm) and illustrate their behavior by means of a number of numerical experiments.
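The slice view behind such algebraic methods: if a rank-R tensor has frontal slices T_k = A D_k B^T with diagonal D_k, then T1 T2^{-1} = A (D1 D2^{-1}) A^{-1}, so an eigendecomposition of one slice times the inverse of another recovers the first factor. A toy sketch under strong assumptions (square, invertible slices; distinct eigenvalue ratios); this pencil shortcut is a simplified relative of, not the same as, the more robust simultaneous generalized Schur approach of the paper:

```python
import numpy as np

def cp_from_two_slices(T):
    """Recover the first CP factor of an I x I x 2 tensor from an
    eigendecomposition of the pencil formed by its two slices.

    Assumes T[:, :, k] = A @ diag(d_k) @ B.T with A, B and both slices
    invertible, and distinct ratios d_1[r] / d_2[r].
    """
    T1, T2 = T[:, :, 0], T[:, :, 1]
    # T1 @ inv(T2) = A @ diag(d1 / d2) @ inv(A)
    vals, vecs = np.linalg.eig(T1 @ np.linalg.inv(T2))
    return vals, vecs  # columns of vecs are columns of A, up to scale/permutation
```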
Discrimination of Speech from Nonspeech based on Multiscale Spectrotemporal Modulations
 IEEE Transactions on Audio, Speech, and Language Processing
, 2006
"... We describe a contentbased audio classification algorithm based on novel multiscale spectrotemporal modulation features inspired by a model of auditory cortical processing. The task explored is to discriminate speech from nonspeech consisting of animal vocalizations, music and environmental soun ..."
Abstract

Cited by 51 (3 self)
 Add to MetaCart
(Show Context)
We describe a content-based audio classification algorithm based on novel multiscale spectrotemporal modulation features inspired by a model of auditory cortical processing. The task explored is to discriminate speech from nonspeech consisting of animal vocalizations, music and environmental sounds. Although this is a relatively easy task for humans, it is still difficult to automate well, especially in noisy and reverberant environments. The auditory model captures basic processes occurring from the early cochlear stages to the central cortical areas. The model generates a multidimensional spectrotemporal representation of the sound, which is then analyzed by a multilinear dimensionality reduction technique and classified by a Support Vector Machine (SVM). Generalization of the system to signals in high levels of additive noise and reverberation is evaluated and compared to two existing approaches [1] [2]. The results demonstrate the advantages of the auditory model over the other two systems, especially at low SNRs and high reverberation.
Multilinear operators for higherorder decompositions
, 2006
"... We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The ﬁrst operator,
which we call the Tucker operator, is shorthand for performing an nmode matrix multiplication for every mode of a given tensor and ..."
Abstract

Cited by 48 (10 self)
 Add to MetaCart
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer products of the columns of N matrices; it allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
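In NumPy terms, the two operators might be sketched as follows (illustrative, not the authors' notation or toolbox code): the Tucker operator multiplies a core by a matrix along every mode, and the Kruskal operator sums outer products of matching columns.

```python
import numpy as np

def tucker_op(G, matrices):
    """Tucker operator: n-mode-multiply core G by a matrix along every mode."""
    T = G
    for n, U in enumerate(matrices):
        # bring mode n to the front, contract with U, move it back
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, n, 0), axes=(1, 0)), 0, n)
    return T

def kruskal_op(matrices):
    """Kruskal operator: sum over r of the outer product of column r of
    each of the N matrices (works here for up to 17 modes, since 'r' is
    reserved as the summation index)."""
    subs = [chr(ord('a') + n) + 'r' for n in range(len(matrices))]
    out = ''.join(chr(ord('a') + n) for n in range(len(matrices)))
    return np.einsum(','.join(subs) + '->' + out, *matrices)
```

With a superdiagonal core, the Tucker operator reproduces the Kruskal operator, mirroring the usual relationship between the Tucker and PARAFAC decompositions.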