Results 1–10 of 269
Effective bandwidth of general Markovian traffic sources and admission control of high speed networks
 IEEE/ACM Transactions on Networking
, 1993
Deconstructing multiantenna fading channels
 IEEE Trans. Sig. Proc. (special issue on space-time coded transmission)
, 2002
Abstract

Cited by 153 (32 self)
Abstract—Accurate and tractable channel modeling is critical to realizing the full potential of antenna arrays in wireless communications. Current approaches represent two extremes: idealized statistical models representing a rich scattering environment and parameterized physical models that describe realistic scattering environments via the angles and gains associated with different propagation paths. However, simple rules that capture the effects of scattering characteristics on channel capacity and diversity are difficult to infer from existing models. In this paper, we propose an intermediate virtual channel representation that captures the essence of physical modeling and provides a simple geometric interpretation of the scattering environment. The virtual representation corresponds to a fixed coordinate transformation via spatial basis functions defined by fixed virtual angles. We show that in an uncorrelated scattering environment, the elements of the channel matrix form a segment of a stationary process and that the virtual channel coefficients are approximately uncorrelated samples of the underlying spectral representation. For any scattering environment, the virtual channel matrix clearly reveals the two key factors affecting capacity: the number of parallel channels and the level of diversity. The concepts of spatial zooming and aliasing are introduced to provide a transparent interpretation of the effect of antenna spacing on channel statistics and capacity. Numerical results are presented to illustrate various aspects of the virtual framework. Index Terms—Beamforming, capacity, channel modeling, diversity, fading, MIMO channels, scattering, spectral representation.
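For uniform linear arrays, the fixed coordinate transformation described in this abstract reduces to sandwiching the channel matrix between unitary DFT matrices whose columns steer toward the fixed virtual angles. A minimal NumPy sketch, assuming critically spaced uniform linear arrays (the function name and setup are illustrative, not from the paper):

```python
import numpy as np

def virtual_channel(H):
    """Virtual representation H_v = A_r^H @ H @ A_t of a MIMO channel
    matrix H (n_r x n_t), where A_r and A_t are unitary DFT matrices
    whose columns are steering vectors at fixed virtual angles.
    Sketch for uniform linear arrays at half-wavelength spacing."""
    n_r, n_t = H.shape
    A_r = np.fft.fft(np.eye(n_r), norm="ortho")  # unitary DFT basis (receive)
    A_t = np.fft.fft(np.eye(n_t), norm="ortho")  # unitary DFT basis (transmit)
    return A_r.conj().T @ H @ A_t

# The transform is unitary, so capacity-relevant quantities (e.g. the
# Frobenius norm and the singular values) are preserved:
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
Hv = virtual_channel(H)
assert np.isclose(np.linalg.norm(H), np.linalg.norm(Hv))
```

Because the transformation is a fixed unitary change of basis, capacity is unchanged while the entries of `Hv` expose which virtual transmit/receive angle pairs actually carry energy.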
A Decomposition Approach for Stochastic Reward Net Models
 Perf. Eval.
, 1993
Abstract

Cited by 126 (32 self)
We present a decomposition approach for the solution of large stochastic reward nets (SRNs) based on the concept of near-independence. The overall model consists of a set of submodels whose interactions are described by an import graph. Each node of the graph corresponds to a parametric SRN submodel and an arc from submodel A to submodel B corresponds to a parameter value that B must receive from A. The quantities exchanged between submodels are based on only three primitives. The import graph normally contains cycles, so the solution method is based on fixed point iteration. Any SRN containing one or more of the nearly-independent structures we present, commonly encountered in practice, can be analyzed using our approach. No other restriction on the SRN is required. We apply our technique to the analysis of a flexible manufacturing system.
Symmetric tensors and symmetric tensor rank
 Scientific Computing and Computational Mathematics (SCCM)
, 2006
Abstract

Cited by 101 (22 self)
Abstract. A symmetric tensor is a higher order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors that is necessary to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are imposed to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist in an algebraically closed field. We will discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We will also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1. Key words. Tensors, multi-way arrays, outer product decomposition, symmetric outer product decomposition, candecomp, parafac, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics. AMS subject classifications. 15A03, 15A21, 15A72, 15A69, 15A18
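The basic objects in this abstract are easy to construct numerically: a rank-1 order-k symmetric tensor is the k-fold outer product of a single vector, and symmetry means invariance under every permutation of the indices. A small NumPy sketch (the helper names are my own, not from the paper):

```python
import numpy as np
from itertools import permutations

def sym_rank1(v, k=3):
    """Rank-1 order-k symmetric tensor: the k-fold outer product of v,
    i.e. T[i1, ..., ik] = v[i1] * ... * v[ik]."""
    t = v
    for _ in range(k - 1):
        t = np.multiply.outer(t, v)
    return t

def is_symmetric(T):
    """A tensor is symmetric iff it is invariant under every
    permutation of its indices."""
    return all(np.allclose(T, T.transpose(p))
               for p in permutations(range(T.ndim)))

v = np.array([1.0, 2.0])
T = sym_rank1(v)                       # 2x2x2, T[i,j,k] = v[i]*v[j]*v[k]
assert is_symmetric(T)
# A linear combination of symmetric rank-1 tensors is again symmetric:
w = np.array([3.0, -1.0])
assert is_symmetric(2.0 * sym_rank1(v) + 0.5 * sym_rank1(w))
```

Computing the (symmetric) rank itself is much harder; the sketch only illustrates the decomposition's building blocks.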
The vec-permutation matrix, the vec operator and Kronecker products: a review
 Linear and Multilinear Algebra
, 1981
Abstract

Cited by 55 (0 self)
The vec-permutation matrix I_{m,n} is defined by the equation vec A_{m×n} = I_{m,n} vec A', where vec is the vec operator such that vec A is the vector of columns of A stacked one under the other. The variety of definitions, names and notations for I_{m,n} are discussed, and its properties are developed by simple proofs in contrast to certain lengthy proofs in the literature that are based on descriptive definitions. For example, the role of I_{m,n} in reversing the order of Kronecker products is succinctly derived using the vec operator. The matrix M_{m,n} is introduced as M_{m,n} = I_{m,n} M; it is the matrix, of order mn × c, having for rows every n'th row of M, starting with the first, then every n'th row starting with the second, and so on. Special cases of M_{m,n} are discussed.
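The defining property and the Kronecker-reversal role of the vec-permutation (commutation) matrix are easy to verify numerically. A NumPy sketch, assuming the convention I_{m,n} vec A = vec A' for an m×n matrix A (as the abstract notes, conventions differ across the literature; function names here are illustrative):

```python
import numpy as np

def vec(A):
    """Stack the columns of A one under the other (column-major flatten)."""
    return A.reshape(-1, order="F")

def vec_perm(m, n):
    """Vec-permutation (commutation) matrix I_{m,n} of order mn x mn,
    satisfying I_{m,n} @ vec(A) = vec(A.T) for every m x n matrix A."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(A')[i*n + j] = A[i, j] = vec(A)[j*m + i]
            P[i * n + j, j * m + i] = 1.0
    return P

A = np.arange(6.0).reshape(2, 3)
K = vec_perm(2, 3)
assert np.array_equal(K @ vec(A), vec(A.T))

# Reversing the order of a Kronecker product: for B (m x n), C (p x q),
#   B (x) C = I_{m,p} (C (x) B) I_{q,n}.
B, C = np.random.randn(2, 3), np.random.randn(4, 5)
lhs = np.kron(B, C)
rhs = vec_perm(2, 4) @ np.kron(C, B) @ vec_perm(5, 3)
assert np.allclose(lhs, rhs)
```

Since I_{m,n} is a permutation matrix, its inverse is its transpose, which is I_{n,m}; the two assertions above exercise exactly the properties the review develops.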
Kruskal’s permutation lemma and the identification of Candecomp/Parafac and bilinear models with constant modulus constraints
 IEEE Trans. Signal Process.
Abstract

Cited by 51 (6 self)
Abstract—CANDECOMP/PARAFAC (CP) analysis is an extension of low-rank matrix decomposition to higher-way arrays, which are also referred to as tensors. CP extends and unifies several array signal processing tools and has found applications ranging from multidimensional harmonic retrieval and angle-carrier estimation to blind multiuser detection. The uniqueness of CP decomposition is not fully understood yet, despite its theoretical and practical significance. Toward this end, we first revisit Kruskal’s Permutation Lemma, which is a cornerstone result in the area, using an accessible basic linear algebra and induction approach. The new proof highlights the nature and limits of the identification process. We then derive two equivalent necessary and sufficient uniqueness conditions for the case where one of the component matrices involved in the decomposition is full column rank. These new conditions explain a curious example provided recently in a previous paper by Sidiropoulos, who showed that Kruskal’s condition is in general sufficient but not necessary for uniqueness and that uniqueness depends on the particular joint pattern of zeros in the (possibly pretransformed) component matrices. As another interesting application of the Permutation Lemma, we derive a similar necessary and sufficient condition for unique bilinear factorization under constant modulus (CM) constraints, thus providing an interesting link to (and unification with) CP. Index Terms—CANDECOMP, constant modulus, identifiability, PARAFAC, SVD, three-way array analysis, uniqueness.
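Kruskal's condition mentioned in this abstract is stated in terms of the k-rank (Kruskal rank) of the three factor matrices: for a rank-R CP model with factors A, B, C, the condition k_A + k_B + k_C ≥ 2R + 2 is sufficient for uniqueness. A brute-force NumPy sketch (function names are mine; the k-rank check is exponential in the number of columns, so this is only for tiny factors):

```python
import numpy as np
from itertools import combinations

def k_rank(A, tol=1e-10):
    """Kruskal rank: the largest k such that EVERY set of k columns of A
    is linearly independent (0 if A contains a zero column)."""
    n = A.shape[1]
    for k in range(n, 0, -1):
        if all(np.linalg.matrix_rank(A[:, list(c)], tol=tol) == k
               for c in combinations(range(n), k)):
            return k
    return 0

def kruskal_sufficient(A, B, C):
    """Kruskal's (sufficient, not necessary) uniqueness condition for a
    rank-R CP decomposition with factor matrices A, B, C:
        k_A + k_B + k_C >= 2R + 2."""
    R = A.shape[1]
    return k_rank(A) + k_rank(B) + k_rank(C) >= 2 * R + 2

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 3)) for _ in range(3))
# Generic 4x3 factors have full k-rank 3, so 3 + 3 + 3 >= 2*3 + 2 holds:
assert k_rank(A) == 3
assert kruskal_sufficient(A, B, C)
```

As the abstract stresses, failing this check does not prove non-uniqueness; the paper's necessary-and-sufficient conditions for the full-column-rank case are sharper than this test.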
Point Matching under Large Image Deformations and Illumination Changes
 IEEE TRANS. PATTERN ANAL. MACHINE INTELL
, 2004
Abstract

Cited by 50 (9 self)
To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first order differential techniques is proposed. We integrate in a single robust M-estimation framework the traditional optical flow method and matching of local color distributions. These distributions are computed with spatially oriented kernels in the 5D joint spatial/color space. The estimation process is initiated at the third level of a Gaussian pyramid, uses only local information, and the illumination changes between the two images are also taken into account. Subpixel ...
Tensor decompositions, state of the art and applications
 MATHEMATICS IN SIGNAL PROCESSING V
Approximation with Kronecker Products
 Linear Algebra for Large Scale and Real Time Applications
, 1993
Abstract

Cited by 41 (1 self)
Let A be an m-by-n matrix with m = m1·m2 and n = n1·n2. We consider the problem of finding B ∈ R^(m1×n1) and C ∈ R^(m2×n2) so that ||A − B ⊗ C||_F is minimized. This problem can be solved by computing the largest singular value and associated singular vectors of a permuted version of A. If A is symmetric, definite, nonnegative, or banded, then the minimizing B and C are similarly structured. The idea of using Kronecker product preconditioners is briefly discussed.

1 Introduction

Suppose A ∈ R^(m×n) with m = m1·m2 and n = n1·n2. This paper is about the minimization of φ_A(B, C) = ||A − B ⊗ C||_F^2, where B ∈ R^(m1×n1), C ∈ R^(m2×n2), and "⊗" denotes the Kronecker product. Our interest in this problem stems from preliminary experience with Kronecker product preconditioners in the conjugate gradient setting. Suppose A ∈ R^(n×n) with n = n1·n2 and that M is the preconditioner. For this solution process ...
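The SVD-based solution the abstract describes can be sketched in a few lines of NumPy: rearrange the m2×n2 blocks of A into a matrix R(A) whose best rank-1 approximation yields vec(B) and vec(C). The block layout and naming below are my reading of the construction, not the paper's code:

```python
import numpy as np

def nearest_kron(A, m1, n1, m2, n2):
    """Minimize ||A - kron(B, C)||_F over B (m1 x n1) and C (m2 x n2),
    where A is (m1*m2) x (n1*n2): rearrange A so the problem becomes a
    rank-1 approximation, then take the dominant singular triple."""
    # Build R(A): row (j*m1 + i) is the column-major vec of block (i, j).
    R = np.empty((m1 * n1, m2 * n2))
    for j in range(n1):
        for i in range(m1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[j * m1 + i, :] = block.reshape(-1, order="F")
    U, s, Vt = np.linalg.svd(R)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1, order="F")
    C = np.sqrt(s[0]) * Vt[0, :].reshape(m2, n2, order="F")
    return B, C

# If A is exactly a Kronecker product, the minimizer recovers it
# (up to a harmless simultaneous sign flip of B and C):
B0, C0 = np.random.randn(2, 3), np.random.randn(4, 5)
A = np.kron(B0, C0)
B, C = nearest_kron(A, 2, 3, 4, 5)
assert np.allclose(np.kron(B, C), A)
```

The key observation is that kron(B, C) maps to the rank-1 matrix vec(B) vec(C)^T under this rearrangement, so the Frobenius-norm problem is solved exactly by the leading singular vectors of R(A).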
Derivatives of the Matrix Exponential and Their Computation
 ADV. APPL. MATH
, 1994
Abstract

Cited by 38 (1 self)
Matrix exponentials and their derivatives play an important role in the perturbation analysis, control and parameter estimation of linear dynamical systems. The well-known integral representation of the matrix exponential's directional derivative enables us to derive a number of new properties of this derivative, along with spectral, series and exact representations. Many of these results extend to arbitrary analytic functions of a matrix argument, for which we have also derived a simple relation between the gradients of their entries and the directional derivatives in the elementary directions. Based on these results, we construct and optimize two new algorithms for computing the directional derivative. We have also developed a new algorithm for computing the matrix exponential, based on a rational representation of the exponential in terms of a hyperbolic function, which is more efficient than direct Padé approximation. Finally, these results are illustrated by an application ...
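The directional (Fréchet) derivative discussed in this abstract can be computed today with standard library routines. As an illustration (SciPy's expm_frechet is a later implementation, not the authors' algorithm), the derivative can be cross-checked against the exact 2×2 block identity expm([[A, E], [0, A]]) = [[expm(A), L(A, E)], [0, expm(A)]]:

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

# L(A, E) = d/dt expm(A + t*E) |_{t=0} is the directional (Frechet)
# derivative of the matrix exponential at A in the direction E.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
E = rng.standard_normal((4, 4))

expA, L = expm_frechet(A, E)  # SciPy returns both expm(A) and L(A, E)

# Exact cross-check via the block identity:
#   expm([[A, E], [0, A]]) = [[expm(A), L(A, E)], [0, expm(A)]]
M = np.block([[A, E], [np.zeros((4, 4)), A]])
assert np.allclose(expm(M)[:4, 4:], L)
assert np.allclose(expm(M)[:4, :4], expA)
```

The block trick doubles the matrix dimension, so dedicated algorithms like those the paper constructs (or SciPy's implementation) are preferable when many directions E must be evaluated.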