Results 1–10 of 332
New results in linear filtering and prediction theory
Trans. ASME, Ser. D, J. Basic Eng., 1961
Cited by 581 (0 self)
A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. The variance equation is closely related to the Hamiltonian (canonical) differential equations of the calculus of variations. Analytic solutions are available in some cases. The significance of the variance equation is illustrated by examples which duplicate, simplify, or extend earlier results in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side by side. Properties of the variance equation are of great interest in the theory of adaptive systems. Some aspects of this are considered briefly.
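In the scalar case the variance equation the abstract describes reduces to a one-dimensional Riccati ODE that can be integrated directly. The sketch below uses invented parameter values (not from the paper) and checks that the integrated variance approaches the positive root of the steady-state equation:

```python
# Scalar illustration of the variance (Riccati) equation:
#   dP/dt = 2*a*P + q - (c**2 / r) * P**2
# for the system dx/dt = a*x + w (Var w = q), y = c*x + v (Var v = r).
# All parameter values below are illustrative assumptions.
import math

a, c, q, r = -1.0, 1.0, 1.0, 1.0

def variance_ode(p):
    """Right-hand side of the scalar variance equation."""
    return 2.0 * a * p + q - (c ** 2 / r) * p ** 2

# Integrate with forward Euler from P(0) = 0; P(t) should approach the
# positive root of 2*a*P + q - (c^2/r)*P^2 = 0 (the steady-state variance).
p, dt = 0.0, 1e-3
for _ in range(20000):
    p += dt * variance_ode(p)

# For these values the steady-state equation is -P^2 - 2P + 1 = 0,
# whose positive root is P* = sqrt(2) - 1.
p_star = math.sqrt(2.0) - 1.0
print(abs(p - p_star) < 1e-3)
```

The steady-state value of the variance equation is what determines the gain of the stationary optimal filter in this scalar setting.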
A multilinear singular value decomposition
SIAM J. Matrix Anal. Appl., 2000
Cited by 464 (21 self)
We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, the link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pairwise symmetric tensors.
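A minimal sketch of the decomposition the abstract describes, assuming NumPy is available: the mode-n singular matrices are the left singular vectors of the mode-n unfoldings, and the core tensor is obtained by multiplying the tensor by their transposes in every mode.

```python
# Higher-order SVD (HOSVD) sketch for a 3rd-order tensor.
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))

def unfold(tensor, mode):
    """Mode-n unfolding: mode-n fibers become the columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode."""
    moved = np.moveaxis(tensor, mode, 0)
    return np.moveaxis(np.tensordot(matrix, moved, axes=1), 0, mode)

# Mode-n singular matrices U1, U2, U3 from the unfoldings' SVDs.
U = [np.linalg.svd(unfold(T, n), full_matrices=True)[0] for n in range(3)]

# Core tensor S = T x1 U1^T x2 U2^T x3 U3^T.
S = T
for n in range(3):
    S = mode_product(S, U[n].T, n)

# Reconstruct T = S x1 U1 x2 U2 x3 U3 and verify the decomposition.
R = S
for n in range(3):
    R = mode_product(R, U[n], n)
print(np.allclose(R, T))
```

Because each U[n] is orthogonal, the reconstruction is exact; the analogy with the matrix SVD (orthogonal factors, ordered "mode-n singular values") is what the paper develops.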
A Stochastic Model of TCP/IP with Stationary Random Losses
ACM SIGCOMM, 2000
Cited by 206 (41 self)
In this paper, we present a model for the TCP/IP congestion control mechanism. The rate at which data is transmitted increases linearly in time until a packet loss is detected. At this point, the transmission rate is divided by a constant factor. Losses are generated by some exogenous random process which is assumed to be stationary ergodic. This allows us to account for any correlation and any distribution of inter-loss times. We obtain an explicit expression for the throughput of a TCP connection and bounds on the throughput when there is a limit on the window size. In addition, we study the effect of the timeout mechanism on the throughput. A set of experiments is conducted over the real Internet, and a comparison is provided with other models that make simple assumptions on the inter-loss time process. The comparison shows that our model approximates the throughput of TCP well for many distributions of inter-loss times.
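A rough simulation of the additive-increase/multiplicative-decrease dynamics the abstract describes: the rate grows linearly between losses and is divided by a constant factor at each loss. The exponential inter-loss times and all parameter values are assumptions for illustration; the paper allows a general stationary ergodic loss process.

```python
# AIMD rate dynamics with random losses; long-run throughput is the
# time-average of the transmitted rate.
import random

random.seed(1)
alpha = 1.0          # linear growth rate between losses
beta = 2.0           # divisor applied to the rate at each loss
loss_rate = 0.1      # mean inter-loss time = 1 / loss_rate

rate, t, area = 0.0, 0.0, 0.0
for _ in range(100_000):
    gap = random.expovariate(loss_rate)       # time until the next loss
    # Over the gap the rate grows linearly, so the data sent is the
    # area of a trapezoid: rate*gap + alpha*gap^2/2.
    area += rate * gap + 0.5 * alpha * gap * gap
    t += gap
    rate = (rate + alpha * gap) / beta        # loss: divide the rate

throughput = area / t   # long-run average transmission rate
print(throughput)
```

For i.i.d. exponential losses with rate λ this time-average works out to 2α/λ (here 20), which the simulation should approximate; the paper's contribution is an explicit formula valid for general stationary ergodic loss processes.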
Optimality Conditions and Duality Theory for Minimizing Sums of the Largest Eigenvalues of Symmetric Matrices
1993
Cited by 67 (3 self)
This paper gives max characterizations for the sum of the largest eigenvalues of a symmetric matrix. The elements which achieve the maximum provide a concise characterization of the generalized gradient of the eigenvalue sum in terms of a dual matrix. The dual matrix provides the information required either to verify first-order optimality conditions at a point or to generate a descent direction for the eigenvalue sum from that point, splitting a multiple eigenvalue if necessary. A model minimization algorithm is outlined, and connections with the classical literature on sums of eigenvalues are explained. Sums of the largest eigenvalues in absolute value are also addressed.
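The simplest instance of such a max characterization (Fan's theorem with k = 1): the largest eigenvalue of a symmetric matrix equals the maximum of the quadratic form over unit vectors. The 2x2 example below is invented so the eigenvalues are known in closed form; it is a numerical illustration, not the paper's method.

```python
# Check that max over unit vectors u of u^T A u equals the largest
# eigenvalue, for the symmetric matrix A = [[2, 1], [1, 3]].
import math

a11, a12, a22 = 2.0, 1.0, 3.0

def quad_form(theta):
    """u^T A u for the unit vector u = (cos theta, sin theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return a11 * c * c + 2.0 * a12 * c * s + a22 * s * s

# Scan unit vectors on a fine grid of angles.
best = max(quad_form(2.0 * math.pi * i / 10000) for i in range(10000))

# Closed-form eigenvalues of a 2x2 symmetric matrix.
mean = (a11 + a22) / 2.0
disc = math.sqrt(((a11 - a22) / 2.0) ** 2 + a12 ** 2)
lam_max = mean + disc

print(abs(best - lam_max) < 1e-4)
```

For general k the characterization maximizes tr(U^T A U) over matrices U with k orthonormal columns, and the maximizers encode the dual matrix the abstract refers to.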
On The Accurate Identification Of Active Constraints
1996
Cited by 64 (9 self)
We consider nonlinear programs with inequality constraints, and we focus on the problem of identifying those constraints which will be active at an isolated local solution. The correct identification of active constraints is important from both a theoretical and a practical point of view. Such an identification removes the combinatorial aspect of the problem and locally reduces the inequality constrained minimization problem to an equality constrained one, which can be dealt with more easily. We present a new technique which identifies active constraints in a neighborhood of a solution and which requires neither complementary slackness nor uniqueness of the multipliers. As an application of the new technique, we present a local active-set Newton-type algorithm for the solution of general inequality constrained problems for which Q-quadratic convergence of the primal variables can be proved under very weak conditions. We also present extensions to variational inequalities.
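To make the task concrete, here is the naive identification rule on a toy problem: flag a constraint as active when its value at the current iterate is within a tolerance of zero. This is deliberately simplistic (the whole point of the paper is a sharper estimator that needs neither strict complementarity nor unique multipliers); the problem and tolerance are assumptions.

```python
# Toy problem: minimize x^2 subject to g1(x) = 1 - x <= 0 and
# g2(x) = x - 5 <= 0. The solution is x* = 1, where only g1 is active.

def constraints(x):
    """Inequality constraints g_i(x) <= 0."""
    return {"g1": 1.0 - x, "g2": x - 5.0}

def identify_active(x, tol=1e-2):
    """Naive rule: constraints within tol of zero at x are flagged active."""
    return {name for name, val in constraints(x).items() if abs(val) <= tol}

x_near = 1.001                 # iterate close to the solution x* = 1
active = identify_active(x_near)
print(active)                  # {'g1'}
```

The weakness of this rule is the fixed tolerance: too small and active constraints are missed far from the solution, too large and inactive ones are included. Techniques like the paper's instead scale the test with a computable measure of distance to the solution.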
Logical Analysis of Numerical Data
Mathematical Programming, 2000
Cited by 56 (12 self)
The "Logical Analysis of Data" (LAD) is a methodology developed since the late eighties, aimed at discovering hidden structural information in data sets. LAD was originally developed for analyzing binary data by using the theory of partially defined Boolean functions. An extension of LAD to the analysis of numerical data sets is achieved through the process of "binarization", consisting of the replacement of each numerical variable by binary "indicator" variables, each showing whether the value of the original variable is above or below a certain level. Binarization was successfully applied to the analysis of a variety of real-life data sets. This paper develops the theoretical foundations of the binarization process, studying the combinatorial optimization problems related to the minimization of the number of binary variables. To provide an algorithmic framework for the practical solution of such problems, we construct compact linear integer programming formulations of them. We develop...
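A minimal sketch of the binarization step the abstract describes: each numerical variable is replaced by 0/1 indicator variables recording whether its value lies above chosen cut points. Taking midpoints between consecutive distinct values as candidate cut points is one common heuristic assumed here; the paper studies how to minimize the number of such binary variables.

```python
# Binarize one numerical variable via threshold indicator variables.

values = [1.2, 3.4, 3.4, 7.0]          # observed values of one variable

# Candidate cut points: midpoints between consecutive distinct values.
distinct = sorted(set(values))
cuts = [(lo + hi) / 2.0 for lo, hi in zip(distinct, distinct[1:])]

def binarize(x, cut_points):
    """Indicator variables: 1 if x is above the cut point, else 0."""
    return [1 if x > c else 0 for c in cut_points]

for x in values:
    print(x, binarize(x, cuts))        # e.g. 3.4 -> [1, 0] for cuts [2.3, 5.2]
```

Every cut point adds a binary variable, so with many distinct values the binarized data set grows quickly; that is the motivation for the cut-point-minimization problems the paper formulates as integer programs.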
Multivariate Stochastic Volatility: A Review
2006
Cited by 52 (14 self)
The literature on multivariate stochastic volatility (MSV) models has developed significantly over the last few years. This paper reviews the substantial literature on specification, estimation, and evaluation of MSV models. A wide range of MSV models is presented according to various categories, namely, (i) asymmetric models, (ii) factor models, (iii) time-varying correlation models, and (iv) alternative MSV specifications, including models based on the matrix exponential transformation, the Cholesky decomposition, and the Wishart autoregressive process. Alternative methods of estimation, including quasi-maximum likelihood, simulated maximum likelihood, and Markov chain Monte Carlo methods, are discussed and compared. Various methods of diagnostic checking and model comparison are also reviewed.
Productivity Dynamics: U.S. Manufacturing Plants, 1972–86 (Discussion Paper 548)
1991
Cited by 46 (2 self)
This paper presents an analysis of the dynamics of total factor productivity measures for large plants in SICs 35, 36, and 38. Several TFP measures, derived from production functions and Solow-type residuals, are computed, and their behavior over time is compared using nonparametric tools. Aggregate TFP, which has grown substantially over the time period, is compared with average plant-level TFP, which has declined or remained flat. Using transition matrices, the persistence of plant productivity is examined, and it is shown how the transition probabilities vary by industry, plant age, and other characteristics.
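A minimal sketch of the transition-matrix calculation the abstract mentions: plants are assigned to productivity classes in two periods and the empirical probability of moving between classes is tabulated. The two-class split ("L"/"H") and the data below are invented purely for illustration.

```python
# Empirical transition matrix between productivity classes.

# (class in period t, class in period t+1) for 8 hypothetical plants.
moves = [("L", "L"), ("L", "H"), ("L", "L"), ("L", "L"),
         ("H", "H"), ("H", "H"), ("H", "L"), ("H", "H")]

classes = ["L", "H"]
counts = {c: {d: 0 for d in classes} for c in classes}
for src, dst in moves:
    counts[src][dst] += 1

# Row-normalise counts into transition probabilities.
trans = {c: {d: counts[c][d] / sum(counts[c].values()) for d in classes}
         for c in classes}
print(trans)
```

Large diagonal entries (here 0.75 for both classes) indicate persistent plant-level productivity; the paper examines how such probabilities vary with industry, plant age, and other characteristics.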
Exact and approximate solution of source localization problems
IEEE Trans. Signal Processing, 2007
Cited by 46 (1 self)
We consider least squares (LS) approaches for locating a radiating source from range measurements (which we call R-LS) or from range-difference measurements (RD-LS) collected using an array of passive sensors. We also consider LS approaches based on squared range observations (SR-LS) and based on squared range-difference measurements (SRD-LS). Despite the fact that the resulting optimization problems are nonconvex, we provide exact solution procedures for efficiently computing the SR-LS and SRD-LS estimates. Numerical simulations suggest that the exact SR-LS and SRD-LS estimates outperform existing approximations of the SR-LS and SRD-LS solutions as well as approximations of the R-LS and RD-LS solutions which are based on a semidefinite relaxation. Index Terms—Efficiently and globally optimal solution, generalized trust region subproblems (GTRS), least squares, nonconvex, quadratic function minimization, range measurements, range-difference measurements, single quadratic constraint, source localization, squared range observations.
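A sketch of why squared ranges are attractive: each squared range gives a linear equation in (p, alpha), where alpha stands for ||p||^2, so with noiseless data the source can be recovered by a linear solve. The sensor layout and source position are invented; note this unconstrained solve is only a relaxation, whereas the paper's SR-LS estimator additionally enforces alpha = ||p||^2 via a generalized trust-region subproblem.

```python
# From ||p - a_i||^2 = d_i^2:  -2 a_i . p + alpha = d_i^2 - ||a_i||^2,
# linear in z = (p_x, p_y, alpha). Three sensors give a 3x3 system.
import math

sensors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
source = (1.0, 2.0)                               # assumed true location
ranges = [math.dist(source, a) for a in sensors]  # noiseless ranges

M = [[-2.0 * ax, -2.0 * ay, 1.0] for ax, ay in sensors]
b = [d * d - (ax * ax + ay * ay) for d, (ax, ay) in zip(ranges, sensors)]

def solve3(M, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] + [rhs] for row, rhs in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for k in range(col, 4):
                A[r][k] -= f * A[col][k]
    z = [0.0] * 3
    for r in (2, 1, 0):
        z[r] = (A[r][3] - sum(A[r][k] * z[k] for k in range(r + 1, 3))) / A[r][r]
    return z

px, py, alpha = solve3(M, b)
print(px, py)    # recovers the assumed source (1.0, 2.0)
```

With noisy ranges the unconstrained solution no longer satisfies alpha = ||p||^2, which is exactly the constraint the exact SR-LS procedure restores.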
A tutorial on linear and bilinear matrix inequalities
2000
Cited by 44 (0 self)
This is a tutorial on the mathematical theory and process control applications of linear matrix inequalities (LMIs) and bilinear matrix inequalities (BMIs). Many convex inequalities common in process control applications are shown to be LMIs. Proofs are included to familiarize the reader with the mathematics of LMIs and BMIs. LMIs and BMIs are applied to several important process control applications including control structure selection, robust controller analysis and design, and optimal design of experiments.
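A tiny numerical illustration of the best-known LMI from this literature, the Lyapunov inequality: A is stable iff there exists symmetric P > 0 with A^T P + P A < 0. The matrices A and P below are fixed by hand for the check; an actual LMI solver would search for P.

```python
# Verify that a hand-picked (A, P) pair satisfies the Lyapunov LMI.

A = [[-1.0, 0.5],
     [0.0, -2.0]]           # a stable matrix (eigenvalues -1, -2)
P = [[1.0, 0.0],
     [0.0, 1.0]]            # candidate P = I, symmetric positive definite

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

S = add(matmul(transpose(A), P), matmul(P, A))   # S = A^T P + P A

# A 2x2 symmetric S is negative definite iff S[0][0] < 0 and det(S) > 0.
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
print(S[0][0] < 0 and det > 0)   # the LMI holds, certifying stability
```

The inequality is linear in the unknown P, which is what makes it an LMI; controller-design problems where both P and a gain appear as unknowns instead lead to the BMIs the tutorial covers.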